Now that it's mostly over, how did CrowdStrike affect you?

Alpha71@lemmy.world to Ask Lemmy@lemmy.world – 72 points

I was camping without cell service.

Came back to news of a massive IT outage and Biden dropping out. Pretty wild how much can happen when you're off the grid.

Holy crap, same here! Came back to service and phone blew up with all the news.

In my experience the opposite happens more often: you expect at least some major news, but nothing of note happened and you have no notifications from people trying to get hold of you, so you just feel weirdly empty, because normally you spend so much time and effort keeping up with the world, friends, and family.

Fridays are my cheat day. All week long I look forward to getting a big ole breakfast burrito at a local restaurant. I pull in that morning and there’s a “cash only” sign. Well I don’t have any cash on me. Ruined my damn morning.

Great lesson in why you should keep some cash on you, like enough to fill up your gas tank.

Yes, it was a good reminder for me. I usually carry cash for this reason, but I had spent it and was lazy about making time to swing by the bank for more. I did do that later that day and am back to carrying cash on me.

What kind of breakfast burrito?

It’s called the big omelette burrito with brisket. It’s a big omelet: 3 eggs, cheese, meat of your choice (I go with brisket), pico de gallo, and refried beans. It’s amazing. It also serves as my lunch because it’s huge.

Since I run Arch I was totally fine with it. I run Arch BTW.

We weren't able to send work orders to maintenance for a day. We sent them the next day.

I was devastated.

I work with CNC machines, and those were unaffected. But the measuring devices we use to double-check tolerances and record the fact that our parts are good went down. The computer that tracks the number of good parts and scraps for the day was also down.

So, we kept running and did more manual checks with micrometers and gages. Work slowed slightly, and record keeping had to be on paper for a while and entered manually on Monday.

We also couldn't clock in that day, since the time-tracking computer was down too. So a head count was taken and entered into the computer once it got running again.

Everything opened slowly at work for a day, and a bunch of coworkers got a free paid half day off because they couldn't log in.

We were coming through ATL airport on the Sunday after things had somewhat calmed down. But... Not for Delta. The entire flights board was red. Delayed, cancelled, crazy.

One person had been stuck in Atlanta since Thursday and managed to get on our flight to move one step closer to home. We got stupid stupid lucky to miss the worst of the chaos and only ended up with a few hours' delay. Hauling three of our kids through airports when you don't know if any flight will even be scheduled is not a fun experience.

Made it home in a reasonable timeframe, but only by luck.

Zero effect. Not traveling, and work systems apparently aren’t protected by it … not sure if that’s good or bad 😂

Was flying back from Tokyo, heard about the outage when I landed in Bangkok. Luckily I kept my paper tickets and had no issues transferring. Probably would've been stuck there for a while if I wasn't in the habit of hoarding receipts.

From what I gather, a relatively large number of people’s laptops at work BSOD’d. A few still aren’t working. My laptop worked fine, and I definitely did not slack off all day.

Multiple 15-hour days, working from 5 AM to 8 PM. I work in healthcare, so it was essential to get most of these servers back online as soon as possible.

I was camping and forgot to bring cash, and the shops' card machines were down, so I wasn't able to send a postcard.

The self checkout booths didn't work at the grocery store.

Fortunately none of the equipment I'm responsible for used that junk.

Thankfully we’re 90% Linux so not much, but one of our SAs had to patch a bunch of instances.

Got to sit around and play video games for half a day until someone from IT called and was like "yeah, we need to walk you through the workaround."

I had to come in on the weekend and work to fix affected PCs. I'm on salary so that was for no extra pay, with the understanding that we'd take that time off sometime during the week. Unfortunately I have too much other stuff going on, so I wasn't able to take any time off.

My company was only minimally affected, directly (not a lot of Windows machines in our org). But almost all of our partners were dead in the water for the day, so our systems all worked fine but we still couldn't actually do anything.

Two 911s stretching about 5 days for a legacy product. Lots of EC2 rescues and terminated databases. The Linux side was fine, but ouch.

It caused me to not get a job there (I had been interviewing with them for a Linux Engineer position for a few weeks beforehand).

Other than that, it didn't affect me at all.

Remote worker with a very secured laptop. Came back from vacation to a BSOD, and after an entire weekend of Teams video calls from my phone with our support team, they gave up trying to remediate since our security was so strict. Ended up getting issued a new laptop overnight, and sent mine back for reimaging. I just smiled and sipped coffee while browsing Lemmy on my personal System76 Linux laptop. You literally cannot do anything off company resources, of which I had none for 3 days.

None of our customers use it, so no effect on us. Then again, I am a network engineer, so even if it did, it's not like I could help much.

I work mostly with Fortinet gear, so SSL 0-days are my bad days.

I never update my work laptop, so I was good 🤣

They're "push" updates so it's not something you have to do. Also, it would only affect you if your laptop was running Falcon Endpoint Protection.

Ah... It's odd that my coworkers' laptops were impacted yet a handful of the group weren't, as they should all be identically imaged. I believe I was running Falcon Endpoint. I distinctly remember CrowdStrike on the taskbar but not much beyond that.

We had to jump through minor hoops on EC2 to recover a handful of servers. And once those were back online, there were some scheduled reports and similar things to get caught back up. The downstream impact could have been much worse, but we got somewhat lucky on the timing with our weekly processes.
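For anyone curious what those EC2 hoops typically involved: the common pattern was to stop the broken instance, detach its root volume, attach it to a healthy rescue instance, delete the bad CrowdStrike channel file (C-00000291*.sys under C:\Windows\System32\drivers\CrowdStrike\), then move the volume back and boot. Below is a rough sketch of the volume-swap part using boto3; the instance/volume IDs and device names are placeholders, not anything from the poster's environment, and the actual file deletion still has to happen by hand on the rescue instance.

```python
# Rough sketch of an "EC2 rescue" volume swap, assuming boto3 and placeholder IDs.
# The real fix (deleting the bad CrowdStrike channel file under
# C:\Windows\System32\drivers\CrowdStrike\) is done from the rescue instance
# after the volume is attached there.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

BROKEN_INSTANCE = "i-0123456789abcdef0"    # BSOD'd Windows instance (placeholder)
RESCUE_INSTANCE = "i-0fedcba9876543210"    # healthy helper instance (placeholder)
ROOT_VOLUME     = "vol-0123456789abcdef0"  # root volume of the broken instance (placeholder)

# 1. Stop the broken instance so its root volume can be detached.
ec2.stop_instances(InstanceIds=[BROKEN_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[BROKEN_INSTANCE])

# 2. Detach the root volume and attach it to the rescue instance as a secondary disk.
ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=BROKEN_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[ROOT_VOLUME])
ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=RESCUE_INSTANCE, Device="xvdf")

# 3. (Manual step) Log in to the rescue instance, bring the disk online,
#    delete the offending channel file, then detach the volume the same way.

# 4. Reattach as the root device and start the original instance.
# ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=BROKEN_INSTANCE, Device="/dev/sda1")
# ec2.start_instances(InstanceIds=[BROKEN_INSTANCE])
```

Multiply that by a fleet of affected servers and you can see why even a "handful" was a long day.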

Our poor network support guy had to go around the building and fix a couple dozen workstations. We're a "small shop, skeleton IT crew" type of place. So we were back to normal in less than a day.

I can't imagine what a nightmare it must have been for a "large shop, skeleton IT crew" type of place.

I work with a lot of healthcare vendors outside the hospital. Some of them could not get authorizations, access to their orders like medical equipment, etc. Our "e fax" was down. All medical faxing. Fortunately, everyone was understanding and we made agreements to give patients what they needed for their health and all our vendors got authorizations from insurance after the fact.... All but one dumb equipment company.

I was offline for half a day when I returned to work after 2 weeks of vacation. This was 4 days later, and my remote (onsite) Windows workstation was inaccessible.

IT eventually had to remove it from its location and work on it for several hours to get it back up.

Then once back in, I had to log back in to every internal and external website. If Chrome (work requirement) hadn’t saved my logins, that would have been painful.

Got stuck in the airport for a day, missed work, and my boss got mad.

I got to sit around work for like 2ish hours while they got everything back up and running.

I couldn't log in to our time recording app for about four hours, which caused me no issues whatsoever!

Our infrastructure engineering software was down, on the only office day that week. Ended up going into the field to do more inspections, and we finally almost caught up on the CAD/reports this week.

It only affected our time clock system. So we just used paper timecards for a week while IT worked on getting it back up.

Some of my clients weren't operational and it gave me a lighter day to catch up on other tasks.

Most were ready to continue on Tuesday, but delays on their end pushed them back a week in most cases. They didn't like hearing that I was pushing delivery back a week because their delays cause my delays, but them's the ropes.

That's about it. I don't use CrowdStrike, my employer doesn't use CrowdStrike, and it mostly meant nothing to me.

Not at all. No personal PC problems and none at work that impacted me (if any existed). Edit: and I didn't go anywhere that day that was impacted, although that seems more like luck, since I did stop by a McDonald's for the first time in a month or more (it was in the shopping area my wife and I were at), and several McDonald's locations in Japan were apparently impacted.

My company's VPN was extremely spotty for a few days. Other than that, though, the stuff I work on wasn't affected.

I had a number of passengers complaining about their delayed flights.

A service I use at work had an outage, but it made little difference to me.

Somehow it seems to have only affected other people I've heard from on Lemmy.

Professionally, not at all. My company doesn't use CrowdStrike, unlike one of my former employers, who had thousands of systems down for days. The field techs there made a killing in overtime.

Personally: my (54m) oldest kid (17m) was stuck at LaGuardia for 10 hours. Fortunately, a great gate agent at LGA got him on a flight that evening, with a first class upgrade, to get him into an airport about a 1.5-hour drive from the house. I picked him up at midnight and we were home by 1:30.

It didn’t at all. I refuse to run Microsoft products at home and in the cloud. At work I learned some Postgres database servers are running inside virtualized Linux on Windows hosts, which is kind of disgusting.