What's (are) the funniest/stupidest way(s) you've broken your linux setup?

fl42v@lemmy.ml to Linux@lemmy.ml – 307 points –

Tinkering is all fun and games, until it's 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you're about to execute... And then all you have is a kernel panic and one thought bouncing in your head: "damn, what did I expect to happen?".

Off the top of my head I remember 2 of those. Both happened a while ago, so I don't remember all the details, unfortunately.

For the warmup, removing PAM. I was trying to convert my artix install to a regular arch without reinstalling everything. Should be kinda simple: change repos, install systemd, uninstall dinit and its units, profit. Yet after doing just that I was left with some PAM errors... So, I Rdd-ed libpam instead of just using --overwrite. Needless to say, I had to go hunting for a live USB yet again.

And the one at least I find quite funny. After about a year of using arch I was considering myself a confident enough user, and it so happened that I wanted to install smth that was only packaged for Debian. A reasonable person would, perhaps, write a pkgbuild that would unpack the .deb and install its contents properly along with all the necessary dependencies. But not me, I installed dpkg. The package refused to either work or install, complaining that the version of glibc was incorrect... So, I installed glibc from Debian's repos. After a few seconds, which my poor PC probably spent staring in disbelief at the sheer stupidity of the meatbag behind the keyboard, I was met with a reboot, a kernel panic, and a need to find another PC to flash an archiso to a flash drive ('cause ofc I didn't have one at the time).

Anyways, what are your stories?


source ~/.bash_history

That's the scariest horror story in 2 words I've seen so far

I'm genuinely having a chuckle at how shocked people are at my submission, made my day xD

I mean, it's simple, elegant, and destructive AF given the right circumstances. Basically a chaos grenade we didn't realize existed

Can a linux noob get an explanation of this?

source is a bash shell built-in command that executes the content of the file passed as an argument, in the current shell.

~/.bash_history contains all the commands you ever executed in bash (the default shell in most Linux systems)

To add on to this explanation, you generally use source ~/.bashrc to reload your shell config after you've made changes to it. Tab completion weakens the barrier to destruction significantly (esp. in my case)

Until you use a system that doesn't have a ~/.bashrc, and now your tab completion helpfully expands "~/.ba[TAB]" to "~/.bash_history".
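To make the difference concrete, a minimal sketch (nothing here beyond the two files already mentioned):

# the intended command: re-read your user config after editing it
source ~/.bashrc

# the accident: on a box with no ~/.bashrc, tab completion turns ~/.ba<TAB>
# into ~/.bash_history, and sourcing that replays every command you ever typed
source ~/.bash_history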

I never thought I had a nuke I could launch with one command

Jesus Christ. It would be a good idea to format that file to have an exit as the first line to avoid this

Many many years ago I wanted to clean up my freshly installed Slackware system by removing old files.

find / -mtime +30 -exec rm -f {} \;

Bad idea.
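For anyone tempted to repeat that cleanup, a hedged sketch of a safer order of operations (the path is a placeholder; GNU find is assumed for -delete):

# first, just print what would be hit
find /some/old/stuff -type f -mtime +30 -print

# only when the list looks right, actually delete
find /some/old/stuff -type f -mtime +30 -delete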

Not me, but one I saw... dude used chmod to lock down permissions across the board... including root... including the chmod command.

"What do I do?"

🤔

"Re-install?"

Yeah, a very unfortunate one: probably the most painful to recover from. I'd just reinstall, honestly 😅 At least with mine I could simply add the necessary stuff from chroot or pacstrap and not spend a metric ton of time tracking down all the files with incorrect permissions

There's got to be other tools though that could change the file permissions on chmod, right? Though I suppose you'd need permission to use them and/or download them.

You can dump the permissions from the working system and restore them. Quite useful when working with archives that don't support those attributes or when you run random stuff from the web 😁
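A rough sketch of that dump/restore idea with getfacl/setfacl (which record owner, group and mode bits along with ACLs); paths are illustrative and this assumes you still have a known-good system to read from:

# on the healthy system: record ownership and permissions recursively
getfacl -R --absolute-names /usr /etc > perms.acl

# on the damaged system (as root): replay them
setfacl --restore=perms.acl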

Many distros offer an automated file/directory ownership restore feature on their live OS

I managed to do that back when I was new. Luckily it was a fresh install, so I didn't lose much when I had to reinstall.

So far, that has been the only time I really screwed something up outside of a virtual machine.

@jordanlund @fl42v I *think* this one could be recoverable if they had a terminal still active by using the dynamic loader to call chmod — or by booting from a liveCD and chmodding from there.
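For reference, the loader trick looks roughly like this on a glibc x86_64 box (the loader path varies by distro and architecture):

# the dynamic loader can run a binary even when its execute bit is gone
/lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod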

That'd likely get you to a 'working' state quickly, but it'd take forever to get back to a 'sane' state with correct permissions on everything.

Exactly. There's no way to even know what the previous permissions were for everything.

They were TRYING to recursively change permissions in a single directory, accidentally hit the whole system. :(

Tried to convert Ubuntu to Debian by replacing the repos in sources.list and apt dist-upgrading. 💣 Teenagers...

I thought about trying something like this the other day and quickly reconsidered

I'm as nerdy as they come, but... I don't think you did teenage rebellion right.

One that I can remember from many years ago: the classic, trying to do something to a flash drive and dd'ing my main hdd instead.

Funny thing, since this was a 5400 rpm drive and I noticed relatively quickly (say 1-2 minutes), I could ctrl-c the dd, back up most of my personal files (being very careful not to reboot), and after that safely reformat and reinstall.

To this day it amazes me how linux managed not to crash with a half-broken root file system (I mean, sure, things were crashing left and right, but given the situation, having enough left alive to back up most things was like magic)

Many years ago I was dual booting Linux and Windows XP. I was having issues with the Linux install, and decided to just reinstall. It wasn't giving me the option to reinstall fresh, only to modify the existing install.

So I had the bright idea to just rm -rf /

Surely it'll let me do a fresh Linux install then.

Immediately after hitting enter I realized that my Windows partitions would be mounted. I did clearly the only sensible thing and pulled the plug.

I think I recovered all of my files. Kind of. I only lost all the file paths and file names. There was plenty to recover if I just sorted through 00000000.file, 00000001.file, 00000002.file, etc. Was 00000004.file going to be a Word document or a binary from the system32 directory? Your guess is as good as mine!

First, the classical typo in a bash script:

set FOLDER=/some/folder

rm -rf ${FODLER}/

which is why I like to add a set -u at the beginning of a script.
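A minimal illustration of what set -u changes in that scenario:

#!/bin/bash
set -u                   # treat unset variables as a fatal error
FOLDER=/some/folder
rm -rf "${FODLER}/"      # typo: with set -u the script aborts with "FODLER: unbound variable";
                         # without it, this expands to rm -rf "/"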

The second one is not with a Linux box but a mainframe running AIX:

If on Linux killall java kills all java processes, on AIX it just ignores the arguments and kills all processes that the user can kill. Adios the CICS region 😬 (on the test env. thankfully)

If on Linux killall java kills all java processes, on AIX it just ignores the arguments and kills all processes that the user can kill.

jfc, is ignoring arguments the intended behavior?

On a real UNIX (not only AIX) killall is part of the shutdown process - it gets called by init at that stage when you want to kill everything left before reboot/shutdown.

Linux is pretty unique in using that for something else.

I didn't know that, good to know.

They could have sent a SIGTERM by default instead of a SIGKILL. I would not have corrupted everything 😅

killall typically sends SIGTERM by default. It accepts a single argument, the signal to send - so shutdown would call it once with SIGTERM, then with SIGKILL. killall is not meant to be called interactively - which worked fine, until people who had their first contact with UNIX-like systems on Linux started getting access to traditional UNIX systems.

It used to be common to discourage new Linux users from using killall interactively for exactly that reason. Just checked, there's even a warning about that in the killall manpage on Linux.

Wow, the last one is quite unexpected. What a useful command

after reading what "set -u" does, bro this should be default behavior, wtf?

sudo rm -f /lib /usr/share/backup/blah blah.tar.gz

Note the space.

Might be recoverable if you had a live distro ready. Otherwise, o7.

Oh no, this was back in the days when we loaded our distros by way of a stack of floppy disks.

Top tip: if tired, replace the rm -f part of the command with something innocuous for a first run. Actually, it's better to make this mistake once so that the two important lessons are learned... backups (obviously, in your case it was backups, but the point still stands) and double-checking your command if it has potential for destruction 👍
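Applied to the command above, the innocuous first run makes the stray argument jump out:

# dry run: swap rm -f for echo and read the argument list back
echo /lib /usr/share/backup/blah blah.tar.gz
# /lib shows up as its own argument - exactly what rm -f would have eaten first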


I can't even remember how I did this, but overwriting the partition table on the main production server at our small startup (back when "the server" would usually live on the premises of the startup). I remember my boss starting to hyperventilate from panic while I reconstructed it from memory / notes, and all the filesystems came back and he calmed down.

Same job, they gave me a little embedded-systems unit for me to use to build a prototype on. I hooked it up, nothing worked. I brought it back to them.

Hey, this one doesn't work.

Huh... that's weird, it was working before. Did you break it?

I don't think so. Can I have one that works?

They literally told me, as they were handing me the second one: Okay, here's another one. Don't break it.

I figured it out literally seconds after breaking the second one... I was hooking it up to 12 volts of power when it needed 5. Second dead computer. Explaining that and that I needed a third one now was fun.

Had something similar several years ago: prototype stopped working for some reason, one of the hardware engineers and I were troubleshooting on a second prototype, and we exploded a large capacitor... The rest of the team were not amused that we destroyed 2 out of 3 working prototypes within 10 minutes.

I thoroughly backed up my slow NVMe before installing a new, faster one. I actually didn't even want to reuse the installation, just the files at /home.

So I mounted it at /mnt/backupnvme0n1, 2, etc and rsynced

The first few dry runs showed a lot of data was redundant, so I, genius that I am, thought "wow, I should delete some of these". And that's when I did a classic sudo rm -rf in the /mnt root folder instead of /mnt/dirthathadthoseredundantfiles

Tinkering is all fun and games, until it’s 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you’re about to execute… And then all you have is a kernel panic and one thought bouncing in your head: “damn, what did I expect to happen?”.

Nah, that's when the fun really starts! ;)

The package refused to either work or install complaining that the version of glibc was incorrect… So, I installed glibc from Debian’s repos.

:D That one is a classic. Most distributions don't include package managers from other distros because 99% of the time it's a bad idea. But with Arch you can do whatever you want, of course

My two things:

  • I've heard about some new coreutils (rm, cp, cat... this time the name really fits the contents :D) and I decided to test it out. Of course it was conflicting with my current coreutils package, and I couldn't just replace it because deleting the old package would break requirements. So without thinking I forced the package manager to delete it: "I'll install the new one in just a second". Turns out it's hard to install a package without cp, etc :D
  • I don't remember what I was doing, but I overwrote the first bytes of the hdd. Meaning my partition table disappeared. Nothing could be mounted, no partitions found. Seemingly a brick.
    Turns out, if you run a rescue iso and ask it to try to recognize the partitions and recreate the table without formatting, Linux will come back to life as if nothing happened (see the sfdisk sketch below)
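The second one works because only the table, not the data, was overwritten; for next time, the table itself takes seconds to back up. A hedged sfdisk sketch (device name is an example):

# save the partition table as a small text dump
sfdisk --dump /dev/sda > sda-table.backup

# restore it after the first sectors get clobbered again
sfdisk /dev/sda < sda-table.backup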

Nah, that's when the fun really starts! ;)

Well, on the upside, it definitely works better than coffee or energy drinks :D

Also, nice save with the last one!

Funny, that's when I give up for the night and go to sleep

Not quite catastrophic but:

I'm in the process of switching my main server over from windows to Linux

I went with Deb 12 and it all works smoothly but I don't have enough room to back up data to change the drive formats so they're still NTFS. I was looking at my main media HDD and thought "oh, I'll at least delete those windows partitions and leave the main partition intact."

I found out the hard way that NTFS partitions can't just reclaim space like that. It shuffles all the data when you change the partition. It's currently 23 hours into the job and it's 33% done.

I did this to reclaim 30 MB of space on a 14 TB drive.

You mean you've removed the service partitions used by windows and grown the main one into the freed space? Then yes, that's not the way. 'Cause creating a new partition instead of growing the existing one shouldn't have touched the latter at all :/

Yes, I grew the existing one. Lesson learned I guess. 30.5 hours into it and it's at 41%.

CTRL-C-ing apt because it looked stuck for more than 10 minutes. I don't recommend doing it.

Haven't used apt in a while, is it not atomic? What happens if you mess with it?

I don't think it is, if it doesn't run its course on its own, you're screwed. It's Debian so you can recover, but, at least for me, it was painful.

Before installing Arch on a USB flash drive, I disabled ext4 journaling in order to reduce disk reads and writes, being fully aware of the implications (file corruption after unexpected power loss). I was confident that I would never have to pull the plug or the drive without issuing a normal shutdown first. Unfortunately, there was one possibility I hadn't considered: sometimes, there's that one service preventing your PC from turning off, and at that stage there's no way to kill it (besides waiting for systemd to time out, but I was impatient).
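For context, turning the journal off is typically a one-liner with tune2fs against an unmounted filesystem (device name is a placeholder):

# ext4 without a journal: fewer writes, no crash consistency
tune2fs -O ^has_journal /dev/sdX1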

So I pulled the plug. The system booted fine, but was missing some binaries. Unfortunately, I couldn't use pacman to restore them because some of the files it relied on were also destroyed.

This was not the last time I went through this. Luckily I've learned my lesson by now

First time trying Linux I went with an arch install because I Googled "best version of Linux" and went with arch. Followed a guide to the point of drive formatting and I decided to go with a setup with drive encryption. I didn't understand what I was doing, ended up locking myself out of my hard drives and couldn't get windows to reinstall on them. I used a MacBook for a week until I installed Ubuntu and managed to wipe and reset my drives and reinstalled. Needless to say I am going to read up a little more before I try that again.

Been there, even without encryption: it took me a few reinstalls before I realized I could just chroot back in and repair things 😅

The archinstall Python script is your friend 😄😉 I tried installing arch manually, but when I learned that not even sudo is included in the essential packages, I stopped the process and went back to the automatic script install, lol, got no time for that S*** 😂

fstab bind mount for /home that I misspelled, so I couldn't log in as myself.

fstab external HDD mount that didn't have the ignore flag (nofail), so the PC would pop if I booted while it was unplugged

Accidentally booting windows after a year and it overwrote my EFI boot entry.

The best I've seen, however, was an acquaintance who accidentally set ownership to their own user on /usr/bin

So everything went from root:root to user:user, which removed all the SUID/SGID bits as well, so a bunch of bins broke lol.

Believe it or not, it was actually fairly easy to fix with chmod and chown

Had the issue with the missing ignore flag literally a week ago. I mounted the HDD to troubleshoot it, and it ended up kicking the bucket: couldn't read any partitions from it anymore, but its PARTUUID was still in fstab. Had to plug the system's main drive into another machine to fix fstab....


Found out the hard way that if you edit /etc/sudoers with anything other than visudo you best be absolutely sure the syntax is correct, otherwise sudo will refuse to read it and you'll be locked out.

Also learned to add -rf to the rm command at the end, after I re-read it to make sure it does what it should do. Something like rm /path -rf instead of rm -fr /path. That protects you from your fat fingers hitting the enter key halfway through.
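Both lessons condensed into commands (the drop-in path is just an example):

# check a sudoers file's syntax before it can lock you out
visudo -c -f /etc/sudoers.d/10-mine

# destructive flags last: a premature Enter runs a harmless "rm /path" that
# refuses to delete a directory without -r
rm /path/to/dir -rf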

Been there with sudo. Fortunately, su still works, as well as going to another tty and logging in as root. Well, as long as the root login is enabled; otherwise that old hack with init=/bin/bash may work, unless you've prohibited editing the kernel cmdline in the bootloader or decided on efistub

IIRC the root account was disabled (with no password), so I resorted to my trusty SystemRescueCD pen to fix things. Never leave home without it.

The stupid one was when I wanted to test Linux Mint on an external SSD and didn't check that the bootloader wasn't going to overwrite my internal drive's.

So anyway I'm running Linux Mint now.

That's an interesting way to distro hop for sure

It's a fine distribution. I have it on my desktop and at least one laptop. But yes, a weird way to decide to distro hop 🤣

I was on Manjaro, and I didn't want to put the effort in for a third time just to break it again. While I prefer arch-based distros, I've been liking Mint since, unlike Manjaro, I can almost use it without a terminal.

I uninstalled Python.

I was playing around with Pygame of all things, and it wasn't behaving as the (apparently out of date) documentation was saying it should, so I figured I'd just uninstall and reinstall Python.

EVERYTHING borked. APT wouldn't even work.

Oh that's a good one. It feels like it should be doable and then.... BAM

Ha! Came to say this too!

I tried to uninstall Python because I was just trying to minimize junk on my computer and I usually code in Bash, Node or Java.

Years ago I was dual-booting with Ubuntu just to try out whatever this Linux thing was that all the nerds were talking about. Liked it and played around with it, but for whatever reason I wanted to go back to just Windows, I needed the space I had partitioned off or something, can’t remember why. So I just uninstalled or deleted the bootloader somehow (maybe I just deleted the Linux partition and expected the space to clear up like normal).

Go to restart the computer… oh shit. Ohshotohshitohshitohshit.

I've done plenty of stupid stuff, from dd'ing disks I was using to forcefully uninstalling dependencies of the package manager. But the one that takes the cake for me happened back in 2012: I was working at a research lab in the university and was sharing a computer with another intern. That other intern used Gentoo, and so we agreed that the machine should be Gentoo; I'd installed it at my house on my PC and got comfortable with it before we shared that computer. One thing that I learnt when installing Gentoo is that the /dev folder is created on boot; you don't populate it when installing, instead you mount the one from the host system you're using to install.
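For context, that is the handbook's bind-mount step during install, roughly along these lines (run from the live host before chrooting):

mount --rbind /dev  /mnt/gentoo/dev
mount --rbind /sys  /mnt/gentoo/sys
mount --rbind /proc /mnt/gentoo/proc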

The computer had an issue with a device, can't even remember what it was, so I thought I'll run rm -rf /dev that should take care of the issue and after a reboot it will be repopulated... It might have worked, but what I actually ran was rm -rf /etc.

Deleted my entire efi partition while trying to install some grub themes.

And then my backup didn't work when I tried to restore it.

I have pretty colours now though, so it was all worth it :)

Been there, done that. But I haven't had any problems once I switched to systemd-boot 🤷

Just straight up overwriting the boot sector and superblock of my hard drive, thinking it was the USB drive.

Udev tried to warn me, saying there's no permission, and I just typed sudo without thinking.

Then after a second I remembered USB block devices are usually writable by users, but it was too late.

USB block devices containing mountable filesystems (on desktop systems) can generally have those filesystems mounted and files written to them by regular users; but the block device itself stays root-writable only.

So, you need root privileges either way.

(Going from memory, but also decently confident)

I don't know if that counts, but on a fresh default Debian Stable system, my cat walked across the keyboard and the DE crashed.
I could still switch to another TTY and reboot via command line.
After the reboot, I was greeted by a blinking cursor and nothing else. Had to reinstall.

Cat killed the graphics card driver?

My own classic was fiddling with the nvidia PRIME config to try and get rid of some very mildly irritating screen tearing. No graphics output at all. Now this is fixable of course, but it's a pig.

And I'd decided to do this 2 hours before an incredibly important progress review meeting for my PhD.

Got it back with about 10 mins to spare and decided just to leave the driver config alone after that.

Bonus round

Also a friend managed to bork his ubuntu 16 laptop by trying to switch from unity to gnome and ending up with sort of neither. That was reinstall territory right there.

I wanted to use fio to benchmark my root drive. I had seen a tutorial saying that the file= parameter should point to the device file, so I pointed it at /dev/sda. As you might expect, the write test didn't go so well.
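For anyone repeating the benchmark, fio will happily target an ordinary file instead of the raw device; a hedged sketch (file name, size and options are arbitrary):

# write test against a scratch file on the filesystem, not against /dev/sda
fio --name=randwrite --filename=/home/me/fio-scratch --size=1G \
    --rw=randwrite --bs=4k --direct=1 --runtime=30 --time_based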

Backed up the whole disk image to an external drive because I didn't have time for a proper backup but knew I would need some of those files later.

Installed a fresh new OS on the same disk, used it for a couple of months.

Needed to make some space on the external drive I had the backup on so I'll just delete the backed up system files from it.

cd /mnt/external_drive

rm -r /usr /boot ...

As you can probably see, a fresh new install was happening again
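(For anyone who needs an extra second: /usr and /boot are absolute paths, so the cd changes nothing. Relative paths keep the delete inside the mounted backup:)

cd /mnt/external_drive
rm -r ./usr ./boot     # stays inside the backup
rm -r /usr /boot       # ignores the cd and eats the running system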

Took me a solid second to get it as well.

Learned about the importance of trailing slashes in rsync by using the --delete flag.
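The short version of that lesson, as a sketch:

# trailing slash: sync the CONTENTS of src into dst; with --delete, anything
# already in dst that isn't in src gets removed
rsync -a --delete src/ dst/

# no trailing slash: src is created as dst/src, and --delete only prunes inside it
rsync -a --delete src dst/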

Somehow convinced a person to run sudo chmod -x /usr/bin/*

I don't remember the exact command so it could be a bit different but it did the job. It was a fun evening.

The first time I read this, i thought "shouldn't all that be executable anyway?"

And then I read it again and realized, minus x

An intern nuked their workstation by sudo chmod -R 777 /. Turns out adding exec to everything isn't good either.

Daaem, I guess the poor dude at the receiving end did not consider it particularly fun. Well, at least they had sbin working, so probably possible to recover without a live cd. Huh, guess who's now spinning up a VM to check it out 🤣

Checked it out: on arch that results in inability to run tty on reboot, then you're dropped into initramfs's rescue shell where you can simply +x new_root's /usr/bin/* and be back up and running

I have a stupid one, but far from funny.. I've been using and building computers for a very long time so I'm far from a noob, but I'm still quite cautious, bordering on paranoid, so I like to unplug all other drives when re/installing an OS just to avoid stupid mistakes. I go through the installer on the livecd, there's only one drive to choose from so I don't think much about it, select that it should erase everything, I set up the new partition structure, and start the process. After about a minute I begin wondering "why is it taking so long?", and "what is that ticking noise? SSDs shouldn't be making any sounds when written to", when I realize that I had unplugged the wrong drives and that I was currently overwriting my main storage drive. Of course I had backups of the most important things like photos and code, though not really synced for a couple of months so I lost some stuff permanently.

I can totally feel that sudden clot in the gut the moment you realize on which drive the action is happening, just by reading this.

This was pre-linux for me but something you can still do in most distros so I think it's a valid story.

In 1999 I was using Napster on a computer running MS-DOS. I was 12 years old and an aspiring open media enthusiast/stupid script kiddie. I was using the file explorer interface in Napster and accidentally gave access to my entire C drive. I also had opened ports to share certain media and to fuck with my friends using daemon tools (back then you could do stupid stuff like control a friend's desktop with certain versions of daemon tools). Immediately I started receiving packages called things like "sleep.tight.tiny.mite" and I knew I was fucked, so I clicked in the Napster interface, clicked "delete", and deleted my entire active drive.

I panicked and installed the only operating system we had which was a random copy of Red Hat. When my dad came home I pretended like it had always had Linux on it. I do think he was more impressed than mad.

"Just pretend it's always been Linux" is a bold move. I salute 12-year-old you o7

I've broken systems far too many times in the last 24 years, since Mandrake 6.x, to count:

  • I've dd'd a disk or more
  • I've rm'd *
  • I've chmod'd
  • I've brought down the network, with every intention that it would come back - on a remote box
  • I've failed to RTFM far too many times

Generated my grub configuration as grub.conf (instead of grub.cfg)

This one took a stupid amount of time to debug - but on the other hand, when grub failed it did so with "can't find any bootable thingy" and not "missing configuration file", as, in my later opinion, it should have.

Linux is a harsh mistress, sometimes.

Connect via ssh to my home server from work

Using a cli torrent client to download stuff

Decide I need a VPN.

Install VPN again from CLI

Run VPN which disconnects my ssh connection

Even when I get home, the server is headless, so I have to locate a keyboard and mouse before I can fix it.

Dang, similar stuff happened to me on nixos. Had to instruct one of the relatives on how to reboot the machine and choose a previous generation in the bootloader 🤣

Not the installation strictly speaking, but my most "funny" fuckup was setting up XFree86. There was a configuration for the CRT monitor scan frequency that you had to set up. I messed something up and the monitor started to squeal like crazy, and I quickly hit hard reset in panic.

The monitor didn't die, but it had a slight high pitch noise to it after.

Back then I was testing modelines to see the maximum I could push to my 14" monitor. I then backed it with a 1200x1600 virtual screen.

My girlfriend got sick from watching me scrolling around and bought me a 19" display which could do that resolution - and ended up frustrated when I added a larger virtual screen.

A 19" monitor was quite big for the day, and expensive! I hope your gf didn't beat you up too much for that :)

I know little about CRTs because I was born in 2000. Can you explain why the monitor started to make scary sounds?

I know that crt monitors didn't have any method to report the supported frequency, aside from more recent models, correct?

Yeah, monitors were somewhat dumb, just received and did what the vga output asked to do.

The noise most likely came from the semiconductors that controlled the magnetic field that directed the rays onto the screen. These components are selected for a specific speed that the monitor can handle. So going under or over its spec can make something resonate in the audible range, and could even destroy the components if stressed too much.

The thing is that for each resolution and refresh rate you had two values to configure, one for the vertical refresh in Hz and one for the horizontal in kHz. These values were usually specified in the owner's manual. Typos can happen, and this was quite a risky operation.

I copied a program into the /bin/ folder while in a file browser with sudo permissions and somehow overwrote every file except the one I was moving. It, of course, couldn't boot, but copying the bins from a live iso made it at least bootable. Reinstalled Linux after that, of course.

Oh, I just remembered another one or three. So, resizing the partitions. My install at the time had a swap partition that I didn't need anymore. Should be simple, right? Remove the partition and the corresponding fstab entry, resize root, profit. Well, the superblock disagreed. Fortunately, I was lucky enough to be able to re-create the scheme as it was, and then take my time to read the wiki and do the procedure properly (e2fsck, resize2fs and all that stuff).

Some people I've met since, unfortunately, weren't so lucky (as far as I remember, both tried to shrink and were past mkfs already) and had to reinstall. The moral is, one does not simply mess with superblocks; read the wiki first!
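For the record, the order the wiki prescribes for shrinking ext4 is roughly this (device is an example, filesystem unmounted):

e2fsck -f /dev/sdX2        # the filesystem must be checked before resizing
resize2fs /dev/sdX2 50G    # shrink the filesystem first...
# ...then shrink the partition in fdisk/parted, never below the new filesystem size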

Nooo I have so many.. This one I can explain in English:

Xubuntu but blind

So, this is ~2016. Ubuntu is hip and a handful of my students use it. On my PCs I only use Debian and Suse. So to help them better I take out an old ASUS laptop and install Ubuntu on it. Try out Xubuntu instead.

At that time I was also huge into alternative keyboard layouts. I had a slightly modified Neo keyboard layout installed when I switched to Xubuntu.

Here the fun starts because the obscure internal graphics card built into the laptop didn't have driver support under Xubuntu. Black screen but I could hear it working. This was the hardest driver fix I ever did. No monitor and a keyboard layout I wasn't used to, under a Linux distro I wasn't used to. And I also was at the university library, so no hardware support or Debian stick in reach.


I once deleted the network system in alpine. I'd been having some trouble with the default one (I think wpa_supplicant) so I decided to try the other one (I think iwctl). But I thought that there might be problems with having both of them, so before I installed iwctl I deleted wpa_supplicant (thinking that it was more of a config utility than the whole network system), only to find that I couldn't connect to the internet to install iwctl.

You could have used the netplan yaml file 😇 or chrooted from a live distro

On OpenSUSE, in the YaST bootloader tool, there is a checkbox to do something like locking the bootloader (it has been a while, I don't remember the exact thingy). Rebooted and oh, surprise, the bootloader was locked... Which meant Grub didn't load.

I had to reinstall the whole OS 🤣

Not strictly Linux related, but in college I was an IT assistant. One day I was given a stack of drives to run through Darik's Boot and Nuke.

I don't remember exactly what happened, but I think midway through, my laptop shut off.

Guess who picked the wrong drive to wipe with DBAN :)

Oh, i have a brilliant one:

A few years ago I spent a lot of time converting .flac files into .ogg files to put on my oldschool iPod. As I did a lot of repetitive typing - entering $dir / for file in *.flac; do convert etc. / mkdir -p $somewhere/$artist/$album / mv $somewhere/*.ogg $new_dir/ and so on - I thought: "hm, let's just write a loop over loops for all the artists here and then all the albums, and at the same time create the nested directories somewhere else... hm, actually in the home directory... and later move everything onto the iPod at once."

So I was in my music folder with the artist folders I wanted to convert, and I typed my complicated script directly into the shell.

I got something wrong, and instead of creating a folder "~/artist/album" I created 3 folders in my current working directory: "~", "artist" and "album". Hmph, dammit, gotta try again... but first: I have to clean up these useless folders in the current dir. So of course I type this: "$ rm -r ~ artist album". After about 5 seconds of wondering why it took so long, I realized my error. o_O I stopped the running command, but it was (of course) too late and I bricked my current installation. All the half-deleted config files made it impossible to start normally and extremely tedious to repair by hand, so I reinstalled.

Let's see: Unintentionally making a proxy accessible to anyone online

Accidentally deallocating an ext4 partition and then having to run testdisk on it

Trying to manually create a grub entry and corrupting the bootloader

Installing an arch derivative and having it silently overwrite grub

Installing Puppy Linux and then trying to get it to use apt

Incorrect use of PPAs on Mint resulting in very old packages being installed

And many others besides

The first time I enabled OAuth for something self-hosted, I gave access to anyone with a Gmail account.

I set up 2FA via a hardware security key (a yubikey) for login, sudo etc. I then tried to switch security keys, removing the old pam files and adding a new one. But I didn't tidy the pam files up before logging in, and there was effectively no way to log in, since editing the pam files required sudo access to edit in the first place. So basically the whole system required access to a pluggable authentication module that it no longer had any ability to recognize. It was honestly pretty funny. I did manage to recover my data by booting from a live system and decrypting my drive from there.

I've also accidentally removed my desktop environment twice while trying to update Python versions and then cleaning up old packages, but that's kinda not that big deal and is just a facepalm moment.

Types

rm -r -f

Presses Ctrl+V (instead of Ctrl+Shift+V)

Hits enter

Machine proceeds to delete the home folder as the garbage that comes from pressing plain Ctrl+V gets interpreted, so...

Writing and running a script to delete the first 2 characters from all files and folders recursively.

It started backtracking to my home folder. :/

That's a funny story, hope you got everything backed up

sudo apt autoremove

Whoever made this shit and then decided to always show you the message that you should run it. What a dick

Installed python3 before it was made the native python on the distro. Half broke everything, including apt & python. So I uninstalled it, and then everything was broken. Finally got python3 reinstalled, and lived with it kind of working & awful distribution updates.

I have finally freed myself of that prison last month, by nuking everything and starting fresh.

You can have both python 2 and 3 on the system. It just depends upon which is the default as to how much you break it 👍 The /usr/bin/python symlink is the important bit for most software. For deb-based at least, update-alternatives is your friend.
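A hedged example of that update-alternatives dance on a deb-based box (priorities are arbitrary, and registering python this way is a local choice, not something the distro does for you):

# register both interpreters behind the /usr/bin/python symlink
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2

# pick interactively which one the symlink points at
sudo update-alternatives --config python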

I'll happily say I must have overlooked something, but I did try using update-alternatives. I don't remember all the nuts and bolts from the start, but it involves python3 and distribution upgrades. I spent a good number of nights over the years trying to unmess it up, and am happy to never think about it ever again.

There was definitely a time when python3 was not recommended and plenty of scripts weren't yet differentiating between the two. Everything was breaking back then.

Linux Mint: removed all taskbars from the desktop. I was hoping it would just allow me to reset them to the default. But in reality, it breaks the GUI and it's very hard to reset from the GUI. Suddenly my keystrokes weren't being detected and I couldn't open up applications with any sort of regularity. After a lot of dicking around, I got the terminal working so I could reset Cinnamon.

It's not the worst way I've broken a machine. But it was one of the most annoying.

One thing I learnt a while back is that if you break your GUI you can always use Ctrl+Alt+F<1-9> to go to different terminals to try to solve it. Worst case scenario I would do something like mv .config .config.bkp and sudo systemctl restart the display manager; that should hopefully get you back to default settings on the UI.

Source: been there, done that. Not exactly your error but similar enough.
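Roughly what that looks like in practice (the service name depends on your display manager; display-manager is the usual systemd alias, otherwise sddm/gdm3/lightdm):

# Ctrl+Alt+F3 for a text console, log in, then:
mv ~/.config ~/.config.bkp                 # park the possibly-broken user config
sudo systemctl restart display-manager     # bounce the login screen / session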

Always remember, there is always a "terminal" accessible: Don’t forget poor tty

I didn't break anything, but there was this one time I was setting up a new lxc container I had just spun up. I installed nginx and a bunch of other packages, started writing new config files.... Then I noticed my prompt was user@desktop$ instead of user@server$

Whoops... I was in the wrong terminal window, typing commands into my desktop instead of the container I was setting up.

Don't get me started.

There are good reasons why I have a personal "production system" to do my work with.

dpkg-reconfigure sysvinit

I don't remember what I was trying to achieve, but it was a bad idea. I also didn't (and still don't) know how to fix the outcome of this, so - since my home was on a separate partition anyway - I just reinstalled Debian since that was much quicker anyway.

"Updating" a 5.2 RedHat install with a 6.0 Mandrake CD-ROM (or the opposite, can't remember right now...). Fun stuff.

Debian sid a few years ago: Uninstalled Python2, the system became unusable and I could neither reinstall it from APT nor recompile it

Did it on Ubuntu and nothing worked anymore. Somehow managed to recover.

I was much more inexperienced in Linux at the time, I could probably fix it now if the same thing happened again.

So I am sort of an embedded developer, and I like to mess around with weird configurations. The craziest experiment I did was trying to reflash a Raspberry Pi from a system running in the Pi's RAM. It honestly might have worked, but during the prep work I forgot to resize the filesystem before mucking with the partitions and had to reflash the normal way before I could try again. Ended up just turning it into a Pi-hole instead, but I still learned a lot about pivot_root

For me, it was simply enabling the AUR in Manjaro, twice. Now I use arch, lol.

Years ago a friend mistakenly typed in killall5 as root on a remote server. Didn't break things but resulted in extra work and effort.

When Ubuntu 16.04 had just been released, I tried upgrading my 14.04, the whole system broke and I had to install another os (Manjaro won).

That day I learned Ubuntu too can be a bit stupid.

This is where someone tracks down an upgrade path chart you didn't know existed and points out some goofy intermediary release, not an lts for some reason, that you were supposed to upgrade to first...

Man, this was a few months back. I've got Fedora Asahi Linux (Linux on an ARM Mac) and I was trying to install PyCharm to play a bit with Python. Unfortunately, they did not have it packaged for ARM, so I had to download a precompiled tar or zip archive. I test it, see that it is an assortment of bin folders and the like, and decide to put it all elsewhere so it wouldn't get lost. So I put it on the root and merge the folders. I think immediately "wait, this is stupid" and decide to get PyCharm out of there (I was on Nautilus with root privileges), so I simply Ctrl-Z outta there. It shows a warning asking whether I wanted to delete 4000 files, but because I am an idiot, I didn't realise what that meant. So I did it. I then continue on with my life, and find myself unable to open apps. I was fairly confused, as the apps I already had open still worked. I decide to try to restart the laptop. It is when I see that there is no restart button anymore that I realise what I did, and I just think to myself: I'll be damned if this survives a restart; I'm already screwed so it doesn't matter. (It didn't survive the reboot, had to install from scratch. At least it was an excuse to use the K desktop environment)

I don't remember what I did when I was stoned. The next day I tried to do a normal sudo dnf install and it didn't recognize any command anymore. I tried restarting and I couldn't log in anymore because the login scripts didn't work. Not that funny, but it just happened and it's the weirdest way I have broken it

One day on my main Arch installation I created a container inside a directory, and "booted" into it by using systemd-nspawn. When I was done with it I decided to do a rm -rf / inside the container just to be funny. Then I noticed that my DE on the host froze and I couldn't do anything. Then I realized that systemd-nspawn mounts some of the host's important directories in the container, and I had deleted those when I did the rm -rf /. I didn't lose anything, but it was scary.

It was my first time using a Linux GUI. I was comfortable with CLI, but it was my first time having it installed on a laptop instead of just sshing into a server somewhere.

So naturally, instead of learning how the GUI worked, I tried changing it to be exactly like Windows. I was doing things like making it so I could double click shell scripts and other code files and they would run instead of opening them up in an editor. I think you see where this is going, but I sure as hell didn't.

Well, one of my coworkers comes over and asks me to run this code on this device we were developing. We were still in the very early stages of development, we didn't even have git set up, so he brought the code over on a USB stick. I pop it into my laptop. I went to check it once by opening it in an editor by double clicking on it... Only it ran the code that was written for our device on my laptop instead of opening in an editor.

To this day, I have no idea what it did to fuck my laptop so bad. I spent maybe an hour trying to figure out what was wrong, but I was so inexperienced with Linux, that I decided to just reinstall the OS. I had only installed it the day before anyway, so I wasn't losing much.

At one point I had the coolest Ventoy USB; CyberRe, LABEL=hakr. But then I got a new computer and apparently the ssd was /dev/nvme0n1 instead of /dev/sda. While I was installing Arch, when I created a new GPT partition on /dev/sda, it wiped my beautiful Ventoy 😢

Built a new desktop, backed up everything on my old laptop, next step was to format an Arch installer USB. Instead of formatting the USB, I formatted my laptop's /boot partition. No big loss since I had the backup and was done with that old toaster, but oops.

I've literally done the rm -rf / thing. I thought I was in a different subdirectory, but I was in / and did rm -rf .

When it didn't return after half a second, I looked at the command again and hit CTRL+C about 20 times in the span of 3 seconds.

I had to rebuild the install, but luckily didn't lose anything in /home.

sudo usermod -a cdrom

Forgot the -G and wasn't sudo anymore...

I did recover eventually, but it was not nice.
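For reference, the two flags involved (username is a placeholder); getting them wrong in either direction ends about the same way:

sudo usermod -aG cdrom alice    # append cdrom to alice's supplementary groups
sudo usermod -G cdrom alice     # REPLACES all supplementary groups with just cdrom,
                                # which is how people fall out of wheel/sudo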

Ubuntu GUI/apt fail

Back when I used ubuntu, Unity was stuck with old gnome packages. This meant that the version of gnome-terminal packaged with ubuntu (up to at least 18.04) didn't have text reflow on window size changes.

You could add the upstream sources, upgrade the specific text reflow package only, and then disable the sources.

I forgot to disable the sources, or typed dist-upgrade (this happened multiple times...). Broke the whole desktop/lightdm setup with half-upgraded packages and half-removed packages (in preparation for installing new versions). Way easier to reinstall the OS than to disentangle it. Unity was a mess then anyway.

Moral: Actually read the package change summaries when doing updates/removes/installs, and [ y/N ] means actually check what the fuck you think you're agreeing to.

BtrFS snapshots for idiots

I've also run automated snapshots on my btrfs partition, then run out of space doing a multi-hop system upgrade on fedora (dnf has a plugin that creates a snapshot every time it kicks in).

You can imagine there were many changes happening per snapshot, and I effectively could have rolled back 4 major fedora versions... Til I ran out of space.

I couldn't get a replacement drive in time, and I had an hour to rebuild my laptop before needing to be on a customer site, so sadly I couldn't preserve my drive for later investigation. My best guess is the high-water-mark was configured incorrectly, and somehow it was able to 'write' data past the extents of the filesystem.

Rollback did work for my home partition, but I had to mount it from another OS to get it to work - so no data loss!

By that time I'd already reinstalled the os to the root partition/subvolume however, so I couldn't determine the exact cause of failure :(

Moral: Snapshots are not backups, and 'working' is not 'tested'

I once did an apt-get upgrade right in the middle of debian testing recompiling all packages and moving to a new gcc version. I get it, using testing invites stuff like this. But come on, there should at least be a way to warn people beforehand.

That's kinda weird: shouldn't they recompile everything first and then replace repos' contents?

Me: I want to change my car tire

Car: Hey, your car is going at 60 mph now. Do you want to change your tire now?

Me: Is it not possible?

Car: It's your car, anything is possible with enough effort. As per Google one guy managed to change a tire of a bullock cart while it was moving at 2 mph.

Me: Sounds good. Let's gooo!

This is the experience for Linux tinkerers.

In my case it was:

Me: I want to change my car tire, and I naturally assume we are parked safely in the garage. This is a routine maintenance thing after all.

Car: Sure thing! bork

Me: Umm, why are we wrapped around a tree?

Car: Well, we were currently going 60mph, and we posted about it on this website.

Me: Why is there no warning that tells me that doing maintenance now will crash my car?

Car: Well, like I said, there is, and it is on this website you should have gone to.

and we posted about it on this website.

In my personal experience, these sort of things happen rarely, unless you are using some sort of rolling-release distribution. For all my mission-critical docker apps, I wait for at least a week after a major update has been pushed and check the dev website.

I was trying to extract some files from a Linux image for one of those ARM boards. It was packed in the cpio format, and I had never used the format before. Of course I was trying to extract to a root-owned directory and I sudo'ed it. I effed up the command and overwrote all my system directories (/bin, /usr, /lib, etc...). Thankfully I had backed up my system recently and was able to get it working again.
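A hedged sketch of the uneventful way to unpack a cpio image, into a scratch directory rather than wherever you happen to be standing (paths are examples):

mkdir /tmp/image-contents && cd /tmp/image-contents
cpio -idmv < /path/to/image.cpio     # i=extract, d=create dirs, m=keep mtimes, v=verbose
# for compressed images: zcat image.cpio.gz | cpio -idmv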

@fl42v I have thousands from my early days, but my only recent-ish one was pretty funny.

On an Arch install that hadn't been updated for a while, in a rush, had an app that needed OpenSSL 3. Instead of updating the whole system, I just updated the openssl package.

*Everything* broke immediately. Turns out a lot of stuff depends on openssl. Who knew?

To fix, booted to the arch installer, chrooted into my env, and reverted to the previous version of the package — then updated properly.

I set up a progressive backup of my home folder... to my home folder. By the time I got home that day it was impossible to log in because there was no room to create a login record. Had to fix that by deleting the backup file using a live CD.

I was new to Linux and made the not-so-calculated decision to use manjaro as my daily. I deleted xorg in an attempt to reinstall it and hopefully fix the stuttering. Everything went wrong: no display, obviously, and the /boot/ files were corrupt. I now use arch and am wiser

I ran firejail config or something, which replaces a lot of home directory app files. Not sure if binaries or desktop entries.

But things broke, randomly, screenshots not working, not even inside firefox etc. I reinstalled the system and imported the home folder... and it was there again!

I can't remember anymore... Let me explain... My first computer came with the at-the-time-very-new Windows XP, which I used primarily for games. After some time it got bloated with stuff, so I had to reinstall again and again over time. Then I discovered Red Hat, CentOS and Debian... I started heavily distro hopping.

My passion for software grew to the point that I was installing new software on a daily basis, just to explore new things. But nothing seemed stable enough; Ubuntu, Fedora, Sabayon, Gentoo, Arch... and their derivatives all broke under my fingers to the point that I had to do more fixing than discovering new software. I took it as a challenge and continued.

Around the time of university I discovered NixOS, and as with any new technology I went head on with it. It took a lot of trial and error, since at the time there was no documentation for most of it. I spent months reading the code, but I never gave up, since what I had found was a gem. I found the OS that is resistant to my curiosity; I just can't seem to be able to break it. Now I use NixOS everywhere I can, even on my work computer. I don't need to reinstall after the initial installation. Well... only when hardware fails...

Accidentally executed a JPEG (on an NTFS partition) and the shell started going crazy. reboot was not successful =[

Once I succumbed to a proprietary software's allure, post-usage, I felt like a digital pariah! To rid myself of the taint, I wiped my system clean – reinstall time!

I had issues with a new version of glibc that prevented me from working on music in Ardour on Manjaro. I then proceeded to force-downgrade glibc (in the hopes of letting me get back to work) and that broke sudo and some other things, which I found out after rebooting. That was an interesting learning experience. Now I snapshot before I do stupid stuff. :]

It was only in a container on a Chromebook, but I'll share it anyway. One time, I had installed Android Studio but found it mildly annoying that I got a line when using apt about Android Studio and some error on a certain line of this one file. I believe the file was something related to dpkg, and after changing some things within the file, I seemed to have broken apt. Luckily, I had a backup, but it was a few days old, so I had to reinstall some apps.

Not really a "braking my linux setup", but still fun as hell! Back in university, a friend of mine got a new notebook at a time... we spent the night at the university hacking and they wanted to set the notebook up in the evening. They got to the point where they had to setup luks via the cryptsetup CLI. But they got stuck, it just wouldn't work. They tried for HOURS to debug why cryptsetup didn't let them setup LUKS on the drive.

At some point, in the middle of the night (literally something like 2 in the morning) they suddenly JUMPED from their seat and screamed "TYPE UPPERCASE 'YES' - FUCK!!!"

They debugged for about six hours and the conclusion was that cryptsetup asks "If you are sure you want to overwrite, type uppercase 'yes'". ... and they typed lowercase. For six hours. Literally.

The room was on the floor, holding their stomach laughing.

About a year ago I somehow fucked up installing a new window manager on my tablet so badly I had to start from scratch - to this day I have no idea what happened there, but it just wouldn't boot properly or anything after that 🤷 I needed it for school pretty quickly though so my top priority was getting it working again, so I set up a fresh install instead of continuing to fuck around.

Not the same level of destruction, but I fucked up my first ever install a couple months in trying to resolve dependencies related to python and wine, which is why I'm more interested in sandboxing whenever feasible these days. After only two months I guess I had been fucking around with linux long enough to have a little too much unearned confidence, lol

Back when I started using Linux, I really wanted something that was super different from windows (I used Gnome 3 for like 3 years). I decided one day to try out Fedora cause, hey, I can live on the bleeding edge.

Second day I had it installed, I was having issues with the audio. Decided to try reinstalling pulse. Apt autoremoved it and somehow completely nuked the entire GUI. Stuck in terminal mode, I found that I had no ethernet to connect to, nor could I figure out how to connect to a wifi network with a password or download packages to a USB. After a couple hours, I gave up, wiped the drive, and went back to Mint.

Nowadays I'm happier in my little comfort zone.

Same thing happened to me! I was on Ubuntu, trying to replace pulse and when it got removed instantly kicked me to the terminal. Eventually I fixed it but now I also just Mint, lol

A regular update I guess...

I had a few programs for various things and one was simply an extension from the gnome extension manager. I updated it and my screen turned black and I couldn't get it back. I had to revert to a previous version, then uninstall everything until I figured out what caused it.

It took several hours.

I'm not sure how funny this will be, but here's how I broke my system twice in a single case. Step by step:

  1. Migrated from Manjaro KDE to EndeavourOS KDE. Kept the previous home directory.
  2. After a few updates, there was a problem with Plasma. Applications were not starting from the panels or the .desktop files (they worked from the terminal. The terminal emulator was in startup and worked that way)
  3. After a few google searches, found out that downgrading glibc would do something, so downgraded... Worked for a while
  4. While using pacman -Syu, I always checked for warnings (foolishly thinking that the downgraded and ignored glibc would cause a pacman warning if it broke dependencies) and there were none. So, the updated OS stopped working due to a mismatched glibc. BREAK 1
  5. To fix it, I opened one of my multiple boots (another EndeavourOS) and made a script using pacman -Ql and cp to copy new glibc related files into the broken system (because I was too lazy to learn how to do it the correct way with pacman and chroot didn't work because glibc is needed by bash).
  6. Turned out the script I made was wrong and I hadn't checked the intermediate output from pacman -Ql, which was telling cp to copy the whole /etc, /usr and other directories. (if only I hadn't given -r to cp) BREAK 2

In the end, I just made a new installation, this time with a new home and hand-picked whatever settings I wanted from the previous home, Viva la multi-HDD

I've had the typical disasters with partition tables and boot loader mixups, but the one I keep coming back to is updating my Nvidia drivers too eagerly. Whether something gets messed up with an external monitor, or the laptop starts resisting switching away from the integrated GPU, or an electron app I use regularly that makes heavy use of 3D acceleration breaks, or I just need to bump the driver version in a reproducible system state record... it's just bad news.

The first time I wanted to try Linux I did it by installing elementary OS in dual boot mode (with Windows) and everything went well; I played with it a bit and then I returned to Windows..

So, a few days after that I realized that I had a lot of space in the Linux partition and no plans to use it anymore, so I went to the drive & partition manager on Windows to delete my elementary OS partition..

Oh Lord, when I restarted my PC, grub was showing nonsense and I couldn't boot into Windows again. I was in a panic and spent the rest of the day trying to fix grub to boot Windows. At the end of the day I did it, saved all my files and uninstalled grub properly, but what a day 😂

I wanted my top bar in DWM to show the time, so I put the script directly into the .xinitrc file instead of the path to the script.
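For the curious, the usual dwm approach is to loop xsetroot in the background from ~/.xinitrc and only then exec the window manager, something like:

# dwm shows the X root window's name in its bar
while true; do
    xsetroot -name "$(date '+%a %d %b %H:%M')"
    sleep 60
done &
exec dwm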

I used to work at this place that had a gigantic QNX install. I don't know if the QNX that we used back then has any relation to QNX now. They certainly don't look very close.

It was in the '90s and they had it set up so that particular nodes handled particular jobs. One node handled boot images and served as a net boot provider, one node handled all of the arcnet to ethernet communication, one node handled all the serial to the mainframe, and a number of the nodes were main worker nodes that collected data and operated machinery and diverters. All of these primary systems were on upper-end 386s or 486s, and they all had local hard disks.

The last class of node they called slave nodes. They were mainly designed for user data ingest, data scanning stations, touch screen terminals, simple things that weren't very high priority.

These nodes could have hard discs in them, and if they did, they would attempt to boot from them saving the net boot server a few cycles.

If for some reason they were unable to boot from their local hard drive, they would netboot, format their local hard drive and rewrite their local file system.

If they were unable to rewrite their local file system, they could still operate perfectly fine purely off the net boot. The Achilles heel of the system was that you had no idea that they had net booted unless you looked into the log files. If you booted off your local hard drive, of course your root file system would be on your local hard drive. If you had net booted, and it could not rebuild your local file system, your root / was actually the literal partition on the boot server. Because of the design of the network boot, nothing looked like it was remotely mounted.

SOP for problems on one of the slave nodes was to wipe the hard disk and reboot; in the process it would format the hard drive and either fix itself or show up as unreliable, and you could then replace the disc or just leave the disc out of it. Of course, if the local disk had failed and the box had already rebooted off netboot without a technician standing there to witness it, that rm -Rf would wipe out the master boot node.

I wasn't the one that wiped it, but I fully understand why the guy did.

Turns out we were on a really old version of QNX; we were kind of a remote warehouse, mostly automated. They just shut us down for about a week, flew a team out, rebuilt the system from newer software, and set up backups.

Accidentally deleted system Python, which on GNOME meant my DE was toast as well. Luckily very freshly set up, so no harm done.

Related note, add this in your shell profile:

```bash
export PIP_REQUIRE_VIRTUALENV=true
```

Or, in a proper scripting language (fish):

```fish
set -x PIP_REQUIRE_VIRTUALENV true
```

What does it do? Is it some kind of failsafe?

Makes it so when you install packages with pip, it will only work if it's using a virtual environment. This keeps any installed packages separate from ones your system uses.
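Roughly what that looks like in practice (the package and paths here are just examples):

```bash
export PIP_REQUIRE_VIRTUALENV=true

pip install requests            # outside a venv: pip now refuses to install

python -m venv ~/.venvs/demo    # create an environment instead
source ~/.venvs/demo/bin/activate
pip install requests            # inside the venv: allowed, and kept away from system packages
```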

If you want to learn about python virtual environments, check this out.

I recently broke the networking stack by uninstalling ca-certificates

I was using a slightly risky command to delete unneeded packages, and for some reason ca-certificates was on the list

At least the fix was simple: boot the rescue ISO and reinstall them.
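Something along these lines, assuming an Arch-style system and that the installed root is on /dev/sda2 (both are assumptions; the post doesn't say):

```bash
# From the live/rescue ISO: mount the installed root and reinstall the package.
mount /dev/sda2 /mnt
arch-chroot /mnt pacman -S ca-certificates
```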

and a need to find another PC to flash an archiso to a flash drive ('cause ofc I didn’t have one at the time).

you can do that from your phone using etchdroid

i don't remember ever breaking my system in a terrible way, but when i started using linux (with linux mint) i uninstalled ca-certificates and i think that uninstalled the whole DE

I was running Fedora. Something like 27 or so. I needed drivers. I don't remember if it was AMD or Nvidia, but they were only available on RedHat.

So I downloaded the RedHat drivers for the GPU and forced it to install. It worked! It was great.

Then when I updated the distro to the next release... everything failed. It was dropping into grub, but no video was output. Ooof.

So I ended up enabling a terminal console and connecting to it via a serial port to debug. I had to completely uninstall that RPM, and I was never confident that it was properly gone. So a few months later I ended up reinstalling the whole OS.

On the plus side, I learned a lot about grub and serial consoles. Worth it.
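For anyone curious, a serial console is roughly this much config these days (a sketch, not the exact steps used back on that Fedora; the baud rate and paths are assumptions):

```bash
# /etc/default/grub
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="console=ttyS0,115200 console=tty0"
```

Then regenerate the config (on Fedora, something like `sudo grub2-mkconfig -o /boot/grub2/grub.cfg`) and attach from another machine over a null-modem cable, e.g. `screen /dev/ttyUSB0 115200`.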

sudo apt upgrade -y

To this day I can’t figure out why it killed the GUI and all terminal commands on a Mint install…

I’m relatively new to Mint, but I thought that sudo apt update just checked for updates and sudo apt upgrade -y was for actually installing the updates. I don't see why that would break it though.

You're right, I messed up - I always switch between the two, because "update" makes more sense in my head. I fixed the text.

I stay away from apt. apt update for me has never messed things up like apt has.

The only time was within a VM. I accidentally wrote

rm -rf ./* while my cwd was /

I use absolute paths with -rf now, to prevent the error again.

Every other breakage I had was with apt shitting itself. It has always been fixable, just annoying.

I now use Fedora, to prevent the error again.

Just yesterday I overwrote some pacnew files and borked user authentication for myself. Very rough time

Actually, I have a story that I'd consider an achievement even though it was extremely stupid and by all accounts should've bricked the system but didn't.

So I was on windows and wanted to install linux as a dual-boot on the main drive. The problem was that my mobo didn't like the particular (and only) flash drive I had, dropping it out mid-boot before I got any usable terminal, so the usual install method wasn't an option.

So I had this crazy idea to start a vmware VM in windows, pass the linux iso and the boot drive directly to it, and try to install it live over the running system. Unfortunately, the vmware guys thought of this and there's a check that disallows passing the boot drive to VMs. So I created a bunch of .vmdks for another drive and fiddled with them in notepad until I somehow managed to trick vmware, and at some point it started booting the same windows copy that I was sitting on. I quickly powered it off, added the linux iso and proceeded to install like I usually would. It did involve some partition shuffling, but, somehow, it went smoothly: linux installed, grub caught on, and even windows somehow survived, even though it was physically moved around on the disk.

It seems that vmware later patched this out, because later, in an attempt to re-create the trick of running the same copy of windows twice (but after updates to both windows and vmware), I was met with the same old error that the boot drive is not allowed when trying to add that same virtual drive I had lying around.

I had a similar setup once: dual boot, plus a VM with the same physical disk to access windows while running linux.

All it took was a small distraction... I missed the grub timeout and accidentally booted, in a VM, the same ubuntu partition that was already running on the real HW. To shreds...

I had a similar debacle, when I managed to corrupt a btrfs file system to the point it wouldn't mount again...

I was preparing it to use as my main system on bare hardware. I had accidentally mounted the same block device simultaneously in the host and the guest: kablamo, silent corruption and all 5 hours of progress lost.* :(

*Shredded the guest VM; the host was ok.

Alrighty, now we officially need a program (I'm hesitant to call it malware since technically it's for the user's own good XD) that covertly replaces a running copy of windows with linux... Besides, I think it used to be possible to install stuff like Ubuntu directly from windows?

A few years ago I was having obscure audio problems on Ubuntu so I tried replacing pulseaudio with pipewire. I was feeling pretty cocky with using the package manager so I tried

sudo apt install pipewire

Installed successfully, realized nothing changed, figured maybe I had to get rid of pulseaudio to make it stick.

sudo apt remove pulseaudio

Just two commands. Instant black screen, PC reboots into the terminal interface. No GUI. Rebooting again just brings me back to the terminal.

I fixed it eventually, but I'm really not very computer literate despite using Linux, so I was sweating bullets for a minute that I might have bricked it irreversibly or something.
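For what it's worth, on current Ubuntu/Debian the switch is usually done by adding the PulseAudio-compatible PipeWire pieces rather than ripping pulseaudio out; a rough sketch, with package and unit names as they exist on recent releases (they may not have existed back when this happened):

```bash
sudo apt install pipewire pipewire-pulse wireplumber
# hand the PulseAudio role over to PipeWire for your user session
systemctl --user --now disable pulseaudio.service pulseaudio.socket
systemctl --user --now enable pipewire pipewire-pulse wireplumber
```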

I feel like you can fix linux as quickly as you can fuck it up (as long as you know what you did)

I was testing a custom initramfs that would load a full root into a ramdisk, and when I was going to shut down I tried to run rm -rf --no-preserve-root / to see what would happen, since I was on a ramdisk anyway. The computer would not boot after that because it nuked the UEFI options.

On arch, UEFI boot vars are mounted at /sys/firmware/efi/efivars. It's unwise to rm -rf them....
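Newer kernels mark most of those variables immutable precisely because of incidents like this, but keeping the mount read-only is still a cheap guard; a sketch:

```bash
# Flip efivarfs to read-only so a stray rm -rf can't touch boot variables...
sudo mount -o remount,ro /sys/firmware/efi/efivars

# ...and only make it writable again for the moment a tool like efibootmgr needs it.
sudo mount -o remount,rw /sys/firmware/efi/efivars
```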

I can't remember what I did to break it, but back when I was in high school I was tinkering right before class and rendered my laptop unbootable. I booted into an Arch Linux USB, chrooted into my install, found the config file I messed with, then reverted it. I booted back into my system and started the bell ringer assignment as quickly as I could. I had one minute left when the teacher walks by, looks at it, and says that I did a really good job. She never knew my laptop was unbootable just 2 minutes earlier.

I suppose it doesn't quite qualify as breaking the system in a funny or stupid way, but it certainly was one of those stupid things that was easy to fix after a ton of troubleshooting, ignoring the issue for a while, and trying to fix it again.

So I had an old PC with a failed hard drive, which I replaced. Obviously, I also accidentally unplugged my optical disc drive and plugged it back in. Now, that failed drive was just a data drive, so the system should have booted up no problem since the OS was on an SSD, but instead it got a kernel panic and got stuck at boot. Since it was late I left it at that and came back to it the next day, where it would still not boot. So I unplugged the disc drive and looked up what it could be. Tried a ton of different possible solutions, but every time I added that disc drive back it would panic.

I eventually kind of gave up and just didn't use that disc drive at all and just had it as a paperweight in the system. Unplugged and all that. When my replacement SSDs for my old data drive and backup drive came in, I tried again to get that optical drive working, but to no avail. So I unplugged it again, got it all set up, and ran into another issue where for some reason Linux couldn't properly use my backup SSD. So I investigated that as well and through some miracle found a post on my Mainboard manufacturer's forum... Turns out that particular Mainboard had a data retention chip on it that didn't like Linux.

So naturally I just plugged everything into the data ports that were not controlled by that chip and it all worked as intended.

Stupid dumb chip on a Mainboard, all I had to do was try the simple idea of unplugging and trying a different connector but instead I did all that other stuff first that didn't work and cost me so much of my time.

Moral of the story, when in doubt try and put stuff on different connectors and see if that fixes it. Might just be a dead connector for all you know. Or an incompatible chip on the Mainboard.

FWIW I bought that Mainboard long before I switched to Linux and didn't plan at all to switch at the time. But that's a different story.

Renaming a mount point while mounted was a fun experience in losing data back in the big box Redhat 5.0 days.

I wanted to move my Arch VM to bare metal, so I copied out all the important bits. Then I wanted to move that copy to a new drive so I could boot into it.

I THOUGHT I'd mv all the files in the Arch install's etc directory using sudo mv /etc ...

I also (somehow) mashed my install's etc with Arch's and bungled both, with no live CD to help.

I learned a thing or two about absolute file paths...

Somehow I found ways to remove and break the GUI multiple times in multiple ways in multiple distros.

Different scenarios, different times, different issues I was trying to "fix". My usual fix after this was always to copy out whatever I thought was still important and then move on with a reinstall.

Recently I have been playing with ZorinOS and broke it in the same way by fidgeting with pipewire. Distro hopped to Fedora Silverblue due to the immutable filesystem. I wonder if I will break this one in a way I cannot revert easily with rpm-ostree. I almost feel challenged.
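If it does happen, the escape hatch is usually just rolling back to the previous deployment; a minimal sketch:

```bash
rpm-ostree status      # lists the current and previous deployments
rpm-ostree rollback    # makes the previous deployment the default; reboot to use it
```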

Before I understood how to properly build and test mesa (the graphics driver), I compiled it and then proceeded to manually symlink the files in the lib and lib32 directories. When I pressed enter on that ln command, the UI immediately crashed and X would no longer start after rebooting the computer. Reinstalling mesa from a virtual terminal wouldn't fix it, so I just reinstalled the system. Good times :)

Can't say I have any interesting stories. Most of mine are just the head-scratching "I don't know why that didn't work; guess I need to reinstall" kind of story. Like enabling encrypted LVM on install and suddenly nothing is visible to UEFI. Or trying to switch desktop environments using tasksel and now I have a blank screen on next reboot. That lame kind of stuff.

My coworker though... he was mindlessly copy/pasting commands and did the classic rm -rf $UNSETVARIABLE while in / and nuked months of migrated data on his newly built system. He hadn't even set up backups yet. Management was upset but lenient.
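The classic guard against that one (a generic sketch; TARGET_DIR is a made-up name, not whatever variable the coworker actually had):

```bash
#!/usr/bin/env bash
set -u                      # referencing an unset variable is now a hard error
# ${VAR:?msg} aborts with msg if VAR is unset or empty, so the command below
# can never run with an empty path; -- ends option parsing for rm.
rm -rf -- "${TARGET_DIR:?TARGET_DIR is not set}"
```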

I had rEFInd and GRUB installed entirely by accident, and a botched update on Arch hosed my entire EFI setup, making it impossible to boot Linux or Windows w/o a LiveCD. Thankfully it self-repaired once I nuked rEFInd. I ended up going back to Ubuntu, but I hate snaps. I still would recommend Arch for most Linux users who want that kind of power.

rm -rf /var

I don't know what I was thinking when I typed it 😅

I somehow locked myself out of sudo when trying to give my user permission to read serial devices.

Had to reinstall.
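A common way that happens (an assumption on my part, not necessarily what the poster did): usermod -G replaces the whole supplementary group list, while -aG appends to it.

```bash
# The trap: -G without -a REPLACES all supplementary groups, silently dropping
# wheel/sudo membership along the way. "alice" and "dialout" are just examples.
sudo usermod -G dialout alice

# The safe version: -a appends the group instead of replacing the list.
sudo usermod -aG dialout alice
```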

Wanted a cool boot screen on my NixOS machine - commented out the bootloader to troubleshoot why my meme boot picture wouldn't show - after rebooting, it loaded straight into the BIOS and I finally realized what I had done... Was able to fix it, thankfully

Relied on an AUR package for building and signing my unified kernel image... one day it was outdated and generating the image failed. I only noticed because the system refused to boot my OS. Fixing it was done in a few minutes, but boy, that was a shock :D

Guess who also checks the exact output of the kernel rebuild now before rebooting!

This thread should be renamed to 101 reasons why businesses give Windows or Macs to their employees.

It's not like you can't shoot yourself in the foot while using windows (not sure about macs, tho, but likely just as well). I remember breaking windows countless times while figuring out what service crap can be disabled, removing edge or defender, yada yada.

On the contrary, in my experience, if you're not actively messing with linux, it's overall more stable than windows. Like, I had to install windows on an actual machine a short while ago, and it was a clusterheck: drivers failed to auto-install (touchpad/trackpoint drivers, for Chaos's sake), random bsods after an hour or so of normal use, etc. As for linux breaking by itself, I remember like 3 times that happened to me in my ~5 yrs of daily driving different distros, and 2 of those were fixable by switching to a tty (the 3rd didn't boot, as far as I remember, due to some incompatibility between bedrock and arch).

A system update broke a dependency for LibreSprite, which hasn't had an update in like two years. You can say they should update it, but let's be real: my apps shouldn't break with a system update. One of my laptop needs was portable graphics creation, so this broke one of my major use cases. Yay.

Some Windows updates completely break the whole system. It's not unique to Linux based systems.