How much firmware is initializing???

DudemanJenkins@lemmy.world to Programmer Humor@programming.dev

Especially a server that's accessible only by SSH...

I'm 150+km away from my server, with literally everything on it lol

I'm at college right now, which is a 3 hour drive away from my home, where a server of mine is. I just have to ask my parents to turn it back on when the power goes out or it gets borked. I access it solely through RustDesk and Cloudflare Tunnels SSH (it's actually pretty cool, they have a web interface for it).

I have no car, so there's really no way to access it if something catastrophic happens. I have to rely on hopes, prayers, and the power of a probably outdated Pop!_OS install. Totally doesn't stress me out. Let's just say I like to live on the edge :^)

Set up a PiKVM as IPMI and you'll have at least one more layer that has to fail before you completely lose connectivity.

Hadn't heard of PiKVM before. Will keep that in mind, thanks!

I can't be bothered to walk down to the basement, so my server is also practically only accessible by SSH.

This week I ran sudo shutdown now on our main service right at the end of the workday because I thought it was a local terminal.

Not a bright move.

There's a package called molly-guard which checks whether you're connected via SSH when you try to shut the machine down. If you are, it asks you for the hostname of the system to make sure you're shutting down the right one.

Very useful program to just throw onto servers.
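For anyone curious, here's roughly what that looks like on a Debian or Ubuntu box (that's where I know the package from; adjust for your distro):

    # molly-guard ships in the Debian/Ubuntu repos
    sudo apt install molly-guard

    # from an SSH session, shutdown/reboot/halt/poweroff are now wrapped:
    sudo shutdown -h now
    # molly-guard notices the SSH session and won't proceed until you type
    # this machine's hostname, so a shutdown aimed at the wrong terminal
    # gets cancelled instead of taking the box down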

I was making after-hours config changes on a pair of mostly-but-not-entirely redundant Cisco L3 switches which basically controlled the entire network at that location. While updating the running configs I mixed up which SSH session was which switch and accidentally gave both switches the same IP address, and before I noticed the error I had copied the running config to the startup config.

Due to other limitations, and because these changes were meant to fix DNS issues (so I couldn't rely on DNS to save me), I ended up repeatedly SSHing in by IP until I landed on the right switch, then trying to make the change before my session died from dropped packets in the mucked-up network situation I had created. That easily added a couple of hours of cleanup to the maintenance I was doing.

Best thing I did was change my shell prompt so I can easily tell when it isn't my machine

You mean the user@machine:$ thing? How do you have yours?

Correct!
I put a little home icon on mine using Nerd Fonts.
If you're using Zsh or Fish you can do much more.
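For plain bash, a rough sketch of the same idea (drop it in ~/.bashrc; the red highlight and the exact layout are just my own choices, and the icon itself needs a Nerd Font installed):

    # highlight the hostname when the shell was reached over SSH,
    # so a remote box never looks like the local machine
    if [ -n "$SSH_CONNECTION" ]; then
        PS1='\u@\[\e[1;31m\]\h\[\e[0m\]:\w\$ '   # remote: hostname in bold red
    else
        PS1='\u@\h:\w\$ '                        # local: plain user@machine:path$
    fi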

Never update, never reboot. Clearly the safest method. Tried and true.

Never touch a running system
Until you have an inviting hole in your system

Nevertheless, I panic every time I update my server infrastructure...

Just had to restart our main MySQL instance today. Had to do it at 6am since that’s the lowest traffic point, and boy howdy this resonates.

2 solid minutes of the stack throwing 500 errors until the db was back up.

If you have the bandwidth... it's absolutely worth investing in a maintenance mode for your system: just check some flat file on disk for a flag before loading up a router or anything, and if it's engaged, send back a static HTML file with ye olde "under construction" picture.

Bonus points if your static page sends a 503 with a Retry-After header.
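A hypothetical sketch of the flag-file side of that (paths and service names are made up; the actual check lives in the app or the proxy in front of it, which answers with the static page, a 503, and a Retry-After header while the flag exists):

    # engage maintenance mode before the scary part
    touch /var/run/myapp/maintenance

    # restart the database / deploy / whatever needs doing
    systemctl restart mysql

    # back to normal routing
    rm /var/run/myapp/maintenance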

In the old days some of the servers took an hour to reboot. That was stressful when you still couldn't ping them at the hour mark.

Don't say stuff like that. You're gonna give me a heart attack.

The more disk you had, the longer it took: it walked the SCSI bus, which took forever, so more disk meant an even longer wait.

Since everything was remote, you'd have to call remote hands, and they weren't technical. Also no cameras, since it was the '90s.

Now when I restart a VM or container, I panic if it's not back up in 10 minutes.

I like how POSTing got fairly fast. Then we started putting absurd amounts of RAM into servers, so now they're back to being slow.

Like, we have a high-clock-speed dual 32-core AMD server with 1 TB of RAM that takes at least 5 minutes to do its RAM check. So every time you need to reboot, you're just sitting there twiddling your thumbs, waiting anxiously.

I will date myself: these machines had a lot of memory as well, which added to the slow reboot. I think it was 16 gigs.

The R series from IBM took forever. The p series was faster, but still slow.

I'll date myself. My first PC had 500MB of STORAGE

My first PC had a tape drive.

I had a friend with one of those while I had an Atari. The Atari game would come up within a minute, but the tape took like 15 min to start.

Using a tape drive is crazy when you think about it. It was slow… This wasn't the big tape cartridges, it was a standard audio tape. Not sure why they could store but it was all sequential.

Never ask an engineer why lol

Source: am engineer

Meant that as "what could they store".

The "why" I know. Go play it and you'll see how they did it.

I am curious who said, "You know, an audio tape will create a great experience."

When you make a potentially system-breaking change and forget to take a snapshot of the VM beforehand...

There's always backups... Right?

.... Right?

Oh there is. From 3 years ago, and some.

Someone set up a script to automatically create daily backups to tape. Unfortunately, it's still the first tape that was put in there 3.5 years ago; every backup since that one filled up has failed. It might as well have failed silently, because everyone who received the error emails filtered them into a folder they generally ignored.

Initializing VPC...

Configuring VPC...

Constructing VPC...

Planning VPC...

VPC Configuration...

Step (31/12)...

Spooling up VPC...

VPC Configuration Finished...

Beginning Declaration of VPC...

Declaring Configuration of VPC...

Submitting Paperwork for VPC Registration with IANA...

Redefining Port 22 for official use as our private VPC...

Recompiling OpenSSH to use Port 125...

Resetting all open SSH connections...

Your VPC declaration has been configured!

Initializing Declared VPC...

Tbh there is nothing more taxing on my mental health than doing maintenance on our production servers.