CrowdStrike downtime apparently caused by update that replaced a file with 42kb of zeroes
twiiit.com
…according to a Twitter post by the Chief Information Security Officer of Grand Canyon Education.
So, does anyone else find it odd that the file that caused everything running CrowdStrike to freak out, C-00000291-00000000-00000032.sys, was 42KB of blank/null values, while the replacement file, C-00000291-00000000-00000033.sys, was 35KB and looked like a normal, if obfuscated, sys/.conf file?
Also, apparently CrowdStrike had at least 5 hours to work on the problem between the time it was discovered and the time it was fixed.
I thought it was a security definition download; as in, there's nothing short of not connecting to the Internet that you can do about it.
Well, I haven't looked into it for this piece of software, but essentially you can prevent automatic updates from reaching the network. Usually that's because the network sits behind a firewall, which you can use to block the update until you decide that you like it.
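The firewall approach described above might look something like the sketch below. The hostname is purely illustrative, and the commenter further down notes this would not have helped with CrowdStrike's fingerprint updates specifically; real deployments would use their perimeter firewall's own management interface rather than raw iptables on a gateway.

```shell
# Hypothetical sketch: block outbound traffic to a vendor's update server
# at the network gateway until the update has been vetted.
UPDATE_HOST="updates.example-vendor.com"   # hypothetical update endpoint

# Resolve each address the update host currently points at and reject
# forwarded traffic to it (addresses can change, so this needs re-running).
for ip in $(getent ahostsv4 "$UPDATE_HOST" | awk '{print $1}' | sort -u); do
    iptables -A FORWARD -d "$ip" -j REJECT
done

# Later, once the update has passed testing on isolated machines,
# delete the matching rules to let clients pull it:
#   iptables -D FORWARD -d "$ip" -j REJECT
```
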
Also, a lot of vendors recognize that businesses like to vet updates, and so offer more streamlined ways of doing it. For instance, Apple has a dedicated update-management system for iOS devices that only businesses have access to, where you can decide you don't want the latest iOS; it's easy, you just don't enable it and it doesn't happen.
Regardless of the method, what should happen is this: you download the update to a few testing computers (preferably ones also physically isolated from the main network) and run some basic checks to see that it works. In this case the testing computers would have blue-screened instantly, and you would have known that this is not an update you want on your systems. Usually, though, it takes a bit more investigation to uncover problems.
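The test-then-promote workflow above is essentially a canary gate. Here's a minimal simulation of the idea; all names (Machine, apply_update, canary_gate) are illustrative, not any real fleet-management API.

```python
# Sketch of a staged-rollout gate: an update is promoted to the whole
# fleet only if every canary (test) machine survives applying it.
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    updates: list = field(default_factory=list)
    crashed: bool = False

def apply_update(machine: Machine, update: dict) -> None:
    """Install the update; a known-bad payload 'crashes' the machine."""
    machine.updates.append(update["id"])
    if update.get("bad"):
        machine.crashed = True

def canary_gate(update: dict, canaries: list, fleet: list) -> bool:
    """Apply the update to isolated canaries first; promote only if all survive."""
    for m in canaries:
        apply_update(m, update)
    if any(m.crashed for m in canaries):
        return False  # update blocked; production fleet never sees it
    for m in fleet:
        apply_update(m, update)
    return True

canaries = [Machine(f"canary-{i}") for i in range(3)]
fleet = [Machine(f"prod-{i}") for i in range(10)]

ok = canary_gate({"id": "C-00000291", "bad": True}, canaries, fleet)
# ok is False, and no production machine received the bad update
```

The point of the sketch is the ordering: the bad update stops at the canaries, which is exactly what instant blue screens on isolated test machines would have given you.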
It makes me so fuckdamn angry that people make this assumption.
This CrowdStrike update was NOT pausable. You cannot disable these updates without disabling the service entirely, as clients receive new fingerprint files nearly every day.
I hear you, but there's no reason to be angry.
When I first learned of the issue, my first thought was, "Hey, our update policy doesn't pull the latest sensor to production servers." After a little more research, I came to the same conclusion you did: aside from disconnecting from the internet, there's nothing we really could have done.
There will always be armchair quarterbacks. Use this as an opportunity to teach; life's too short to be upset about such things.
It doesn't help that I'm medically angry 80% of the time for mostly no reason, but even without that this would incense me, because I had 40+ users shouting similar uneducated BS at me yesterday, thinking it was personally my fault that 40% of the world bluescreened. No, I am not exaggerating.
I have written and spoken the phrase "No, we could not prevent this update" so many times in the last 24 hours that it has become meaningless to me through semantic satiation.