Let's blame the dev who pressed "Deploy"

Brkdncr@lemmy.world to Technology@lemmy.world
Let's blame the dev who pressed "Deploy" - Dmitry Kudryavtsev
yieldcode.blog

If capitalism insists on those higher up getting exorbitantly more money than those doing the work, then we have to hold them to the other thing they claim they believe in: that those higher up also deserve all the blame.

It's a novel concept, I know. Leave the Nobels by the doormat, please.

Wait, are you trying to say that Risk/Reward is an actual thing?

/s (kinda)

It doesn't seem unfair for executives to earn the vast rewards they take from their business by also taking on total responsibility for that business.

Moreover, that's the argument you hear when talking about their compensation. "But think of the responsibility and risk they take!"

Was there a process in place to prevent the deployment that caused this?

No: blame the higher up

Yes: blame the dev that didn’t follow process

Of course there are other intricacies, like if they did follow a process and performed testing and this still occurred, but in general…

If they didn't follow a procedure, it is still a culture/management issue, and the blame should follow the distribution of wealth in the company 1:1.

How could one Dev commit to prod without other Devs reviewing the MR? If you're not protecting your prod branch, that's a cultural issue. I don't know where you've worked in the past, or where you're working now, but once you have N+1 engineers in a code base, there need to be code reviews.
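
For anyone who hasn't seen it in practice, a review gate doesn't have to be fancy. Below is a minimal sketch of the kind of check a pipeline could run before it's allowed to deploy; it assumes GitHub's REST API, a token in the GITHUB_TOKEN environment variable, and placeholder repo/PR values:

```python
# A minimal sketch of a review gate, assuming GitHub's REST API, a token in the
# GITHUB_TOKEN environment variable, and placeholder repo/PR values.
import os
import sys

import requests

REQUIRED_APPROVALS = 2  # arbitrary policy for this sketch


def approvals(repo: str, pr_number: int) -> int:
    """Count distinct reviewers whose most recent review is APPROVED."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    latest = {}
    for review in resp.json():  # reviews come back in chronological order
        latest[review["user"]["login"]] = review["state"]
    return sum(1 for state in latest.values() if state == "APPROVED")


if __name__ == "__main__":
    repo, pr = "example-org/example-service", int(sys.argv[1])  # placeholders
    if approvals(repo, pr) < REQUIRED_APPROVALS:
        sys.exit(f"Blocked: PR #{pr} has fewer than {REQUIRED_APPROVALS} approvals")
    print("Review gate passed; the pipeline may continue to deploy.")
```

Most hosted platforms can enforce the same thing with built-in branch protection; the script is just to show there's no magic involved.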

Oh you sweet summer child...

I hate to break it to you, but companies with actual safety rails around deploying to production do exist.

And when things go wrong, it's never the responsibility of a single dev. It's also the dev who reviewed the PR. It's also the dev who buddy-approved the deploy. It's the whole department that didn't have enough coverage in CI.

I would hate to work wherever you developed the idea that a protected main/prod branch is something novel.

Git Blame exists for a reason, and that's to find the engineer who pushed the bad commit so everyone can work together to fix it.

Blame the Project manager/Middle manager/C-Level exec/Unaware CEO/Greedy Shareholders who allowed for a CI/CD process that doesn't allow ample time to test and validate changes.

Software needs a union. This shit is getting out of control.

Or it needs to be a profession.

Licensed professional engineers are expected to push back on requests that endanger the public and face legal liability if they don't. Software has hit the point where failure is causing the economic damage of a bridge collapsing.

Software engineering is too wide and deep for licensing to be feasible without a degree program, which would be a massive slap in the face to the millions of skilled self-taught devs.

Some states let some people get professional licensure through experience alone. It just ends up taking more than a decade of experience to meet the equivalent requirements of a four year degree.

Yeaaa that's not exactly a solution

Why not? It is still valuing the self education of people. It just means having a license to manage the system requires people with significant experience.

And it isn't like a degree alone is required for licensure.

Because a decade of professional experience is a long time, and doesn't value independent experience. I've been coding for over 11 years, but professionally only a couple. Also software development is very international, how would that even be managed when working with self-taught people across continents?

I agree developers should be responsible, but licensing isn't it, when there are 16 year olds that are better devs than master's graduates.

Do we allow for self taught doctors or accountants?

Also, these regulations aren't being developed for all servers, just ones that can cause major economic damage if they stop functioning. And you don't need everyone to be qualified to run the service. How many water treatment plants are there where you only have a small set of managers running the plant, but most people aren't licensed to do so?

Do we allow for self taught doctors or accountants?

Is this limitation good? Furthermore, software development is something very easy to learn with 0 consequences.

Also, these regulations aren't being developed for all servers, just ones that can cause major economic damage if they stop functioning.

Many of those have excellent self-taught devs developing software for them- I know some of them.

And you don't need everyone to be qualified to run the service. How many water treatment plants are there where you only have a small set of managers running the plant, but most people aren't licensed to do so?

  1. Maintenance is very different from software development.

  2. Good software development requires, at minimum, extensive automated testing...

Do you trust anyone claiming to be self taught with the responsibility to design something that, if it fails, will cause billions in economic damage? Not the people you know, anyone who claims to be self taught?

I shouldn't be trusted if I hire without vetting and hand over control of a massive project to someone off the street without any QA controls, code review, or automated testing.


Sounds like the kind of oversight that tends to come with a union and the representation therein.


And in the cases of healthcare and emergency dispatch, loss of life as well.


"George Kurtz, the CEO of CrowdStrike, used to be a CTO at McAfee, back in 2010 when McAfee had a similar global outage. "

I do wonder how frequent it is that an individual developer will raise an important issue and be told by management it's not an issue.

I know of at least one time when that's happened to me. And other times where it's just common knowledge that the central bureaucracy is so viscous that there's no chance of getting such-and-such important thing addressed within the next 15 years. And so no one even bothers to raise the issue.

Reminds me of Microsoft's response when one of their employees kept trying to get them to fix the vulnerability that ultimately led to the SolarWinds hack.

https://www.propublica.org/article/microsoft-solarwinds-golden-saml-data-breach-russian-hackers

And the guy now works for CrowdStrike. That's ironic.

I’m imagining him going on to do the same thing there and just going “why am I the John McClane of cybersecurity? How can this happen AGAIN???”

His next job might look at his job history and suddenly decide that the position is no longer available.

Hey man, look, our scrums are supposed to be confidential. Why are you putting me on blast here in public like this?


If you don't test an update before you push it out, you fucked up. Simple as that. The person or persons who decided to send that update out untested absolutely fucked up. They not only pushed it out untested, they didn't even stagger the rollout from one region to the next or anything. They just went full ham. Absolutely an idiot move.
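
Staggering a rollout doesn't have to be sophisticated, either. Here's a rough, hypothetical sketch of the idea; deploy_to() and error_rate() are stand-ins for whatever your deployment platform and telemetry actually provide, and the waves and thresholds are made up:

```python
# Rough, hypothetical sketch of a staggered rollout: push one wave of regions,
# let it soak, check telemetry, and halt before the next wave if errors spike.
# deploy_to() and error_rate() are stand-ins for whatever your platform provides.
import time

WAVES = [["canary"], ["us-west"], ["us-east", "eu-west"], ["apac", "latam"]]
MAX_ERROR_RATE = 0.01    # abort threshold (made up)
SOAK_SECONDS = 30 * 60   # bake time between waves (made up)


def deploy_to(region: str) -> None:
    print(f"deploying build to {region}")  # placeholder for the real deploy call


def error_rate(region: str) -> float:
    return 0.0  # placeholder for real post-deploy telemetry


def staged_rollout() -> None:
    for wave in WAVES:
        for region in wave:
            deploy_to(region)
        time.sleep(SOAK_SECONDS)  # give the wave time to surface problems
        if any(error_rate(r) > MAX_ERROR_RATE for r in wave):
            raise RuntimeError(f"rollout halted after wave {wave}; do not continue")


if __name__ == "__main__":
    staged_rollout()
```

The point is that a bad update only ever reaches one wave before someone has to decide to keep going.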

The bigger issue is the utterly deranged way in which they push definitions out. They've figured out a way to change kernel drivers without actually putting them through any kind of Microsoft testing process. Utterly absurd way of doing it. I understand why they're doing it that way, but the better approach would have been to come up with an actual proper solution with Microsoft, rather than this workaround that seems more like a hack.

This is the biggest issue. Devs will make mistakes while coding. It's the job of the tester to catch them. I'm sure some mid-level manager said "let's increase the deployment speed by self-signing our drivers" and forced a poor schmuck to do this. They skipped internal testing and bypassed Microsoft testing.

That mid-level manager also has the conflicting responsibility to ensure the necessary process is happening to reliably release, and that a release can’t happen unless that process happened. They goofed, as did their manager who prized cheapness and quickness over quality

Let them all from the CEO down suffer the consequences that a free market supposedly deals

We still don’t know exactly what happened, but we do know that some part of their process failed catastrophically and their customers should all be ready to dump them.

I'm quite happy to dump them right now. I still don't really understand why we need their product; there are other solutions that seem to work better and don't kill the entire OS if they have a problem.

Wild theory: could it have been malicious compliance? Maybe the dev got a written notice to do it that way from some incompetent manager.

While that’s always possible, it’s much more likely that pressures to release quickly and cheaply made someone take a shortcut. It likely happens all the time with no consequences so is “expected” in the name of efficiency, but this time the truck ran over grandma.

We have rules against that at my job... literally, if God came down and wrote something out of process, that'll be a no, big guy.

If I'm responsible for the outcome of the business, I want a fair share of the profits of the business.

I get that it's not the point of the article or really an argument being made but this annoys me:

We could blame United or Delta that decided to run EDR software on a machine that was supposed to display flight details at a check-in counter. Sure, it makes sense to run EDR on a mission-critical machine, but on a dumb display of information?

I mean yea that's like running EDR on your HVAC controllers. Oh no, what's a hacker going to do, turn off the AC? Try asking Target about that one.

You've got displays showing live data and I haven't seen an army of staff running USB drives to every TV when a flight gets delayed. Those displays have at least some connection into your network, and an unlocked door doesn't care who it lets in. Sure you can firewall off those machines to only what they need, unless your firewall has a 0-day that lets them bypass it, or the system they pull data from does. Or maybe they just hijack all the displays to show porn for a laugh, or falsified gate and time info to cause chaos for the staff.

Security works in layers because, as clearly shown in this incident, individual systems and people are fallible. "It's not like I need to secure this" is the attitude that leads to things like our joke of an IoT ecosystem. And to why things like CrowdStrike are even made in the first place.

As a counterpoint to this article's counterpoint: yes, engineers should still be held responsible, as well as management and the systems that support negligent engineering decisions.

When they bring up structural engineers and anesthesiologists getting the "blame" for a failure: when catastrophic failures occur, it's never about blaming a single person but about investigating the root cause. Software engineers should be held to standards, and the managers above them pressuring for unsafe and rapid changes should also be held responsible.

Education for engineers includes classes like ethics, and at least at my school, graduating engineers take oaths to uphold integrity, standards, and obligations to humanity. For a long time, software engineering has been used for integral human and societal tools and systems; if a fuck-up costs human lives, then the entire field needs to be reevaluated and held to that standard and responsibility.

This is why every JR engineer I've mentored is handed a copy of the Sysadmin Code of Ethics on day one, along with a copy of The Practice of System and Network Administration.

We really need a more formal process for having the title of engineer and we really need a guild. LOPSA/USENIX and CWA are from what I can tell the closest to having anything. Because eventually some congress person is going to get visited by the good idea fairy and try to come down on our profession. So it's up to us to get our house in order before they do.

CTOs that outsourced to software they couldn't and didn't audit are to blame first. Not having a testing pipeline for updates is to blame. Windows having a verification system loophole is to blame. CrowdStrike not testing this patch is to blame. Their building a system to circumvent inspection by MS is their fault.

Now, within each org there is probably some distribution of blame too, but the execs in charge are first and foremost to blame...

Honestly, there is probably enough serious damage in some cases that I expect every org will have to pay some liability for the harms their negligence caused. If our system is just, that is; and if it is not, then we have a duty to correct that as well.

I've said it before and I'll say it again.

Corporate culture is a malicious bad actor.

Corporate culture, from management books to magazine ads to magic quadrants is all about profits over people, short term over stability, and massaging statistics over building a trustworthy reputation.

All of it is fully orchestrated from the top down to make the richest folks richer right now at the expense of everything else. All of it. From open floor plans to unlimited PTO to perverting every decent plan whether it be agile or ITIL or whatever, every idea it lays its hands on turns into a shell of itself with only one goal.

Until we fix that problem, the enshittification, the golden parachutes, and the passing around of horrible execs who prove time and time again they should not be in charge of anything will continue as part of the game where we sacrifice human beings on the Altar of Record Quarterly Profits.

Blame the dev who pressed "Deploy" without verifying the config file wasn't full of 0's or testing it in a sandbox first.
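
Even a dumb pre-publish sanity check would help with that. A hypothetical sketch, with made-up file names and a made-up 50% size threshold:

```python
# Hypothetical pre-publish sanity check: refuse to ship a content/config file
# that is empty, all null bytes, or wildly different in size from the last
# known-good version. File names and the 50% threshold are made up.
from pathlib import Path


def looks_sane(candidate: Path, last_good: Path) -> bool:
    data = candidate.read_bytes()
    if not data:
        return False  # empty file
    if set(data) == {0}:
        return False  # nothing but null bytes
    baseline = last_good.stat().st_size
    if baseline and abs(len(data) - baseline) > 0.5 * baseline:
        return False  # size changed by more than 50% vs. last known-good
    return True


if __name__ == "__main__":
    if not looks_sane(Path("channel_update.bin"), Path("channel_last_good.bin")):
        raise SystemExit("Refusing to publish: update file failed basic sanity checks")
    print("Sanity checks passed; hand off to sandbox testing and a staged rollout.")
```

It's not a substitute for a real sandbox run, just the cheapest possible first tripwire.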

If the company makes it possible for an individual developer to do this, it's the company's fault.

Exactly. All of our code requires two reviews (one from a lead if it's to a shared environment), and deploying to production also requires approval from 3 people:

  • project manager
  • product owner
  • quality assurance

And it gets jointly verified immediately after deploy by QA and customer support/product owner. If we want an exception to our deploy rules (low QA pass rate, deploy within business hours, someone important is on leave, etc), we need the director to sign off.
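
For illustration, that sign-off step can be enforced mechanically rather than by convention. A sketch with made-up role names and a placeholder lookup where a real ticketing/release-tool query would go:

```python
# Sketch of a sign-off gate with made-up role names; approved_roles() is a
# placeholder for a real query against a ticketing or release-management tool.
REQUIRED_ROLES = {"project_manager", "product_owner", "qa"}


def approved_roles(release_id: str) -> set:
    # Placeholder: pretend only two of the three roles have signed off so far.
    return {"project_manager", "qa"}


def check_signoffs(release_id: str) -> None:
    missing = REQUIRED_ROLES - approved_roles(release_id)
    if missing:
        raise SystemExit(f"Release {release_id} blocked; missing sign-off from: {sorted(missing)}")
    print(f"All sign-offs present for {release_id}; deploy may proceed.")


if __name__ == "__main__":
    check_signoffs("2024.07-hotfix")  # placeholder release id
```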

We have <100 people total in the development org, probably closer to 50. We're a relatively large company, but a relatively small tech team within a non-tech company (we manufacture stuff, and the SW is to support customers w/ our stuff).

I can't imagine we're too far outside the norms as far as big org deployments work. So that means that several people saw this change and decided it was fine. Or at least that's what should happen with a multi-billion dollar company (much larger than ours).

Are "product" (PM, PO) and "engineering" (people who write the code) one and the same where you work? Or are they separate factions?

No, separate groups. We basically have four separate, less-technical groups that are all involved in some way with the process of releasing stuff, and they all have their own motivations and whatnot:

  • PM - evaluated on consistency of releases, and keeping costs in line with expectations
  • PO - evaluated on delivering features customers want, and engagement with those features
  • QA - evaluated on bugs in production vs caught before release
  • support - evaluated on time to resolve customer complaints
  • devs - evaluated on reliability of estimates and consistency of work

PM, PO, and QA are involved in feature releases, PM, QA, and support are involved in hotfixes. Each tests in a staging environment before signing off, and tests again just after deploy.

It seems to work pretty well, and as a lead dev, I only need to interact with those groups at release and planning time. If I do my job properly, they're all happy and releases are smooth (and they usually are). Each group has caught important issues, so I don't think the redundancy is waste. The only overlap we have is our support lead has started contributing code changes (they cross-trained to FE dev), so they have another support member fill in when there's a conflict of interest.

My industry has a pretty high cost for bad releases, since a high severity bug could cost customers millions per day, kind of like CrowdStrike, so I must assume they have a similar process for releases.

That's not how any of this worked. Also not how working in a large team that develops for thousands of clients works. It wasn't just one dev that fucked up here.

CrowdStrike Falcon uses a signed boot driver. They don't want to wait for MS to get around to signing a new driver if there's a zero-day they're trying to patch, so they have an essentially empty driver with pointers out to the meat of the real logic in separate content files. If you fat-finger that content (the file in question contained little more than the 9C character), it points to another null pointer in a different file and you end up with a non-bootable system, as the whole driver is now effectively empty.
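
As a deliberately oversimplified Python analogy (nothing like the real kernel-mode C code): a loader that trusts its content file and calls whatever the header points at falls over the moment that file is zeroed or malformed. The opcode table here is invented purely for illustration.

```python
# Deliberately oversimplified analogy (not the real kernel-mode C code):
# a loader that trusts its content file and calls whatever the header points at.
import struct

# One made-up opcode mapped to a handler; everything else is "unknown".
HANDLERS = {0x01: lambda payload: f"apply detection rule to {len(payload)} bytes"}


def load_channel_file(raw: bytes) -> str:
    (opcode,) = struct.unpack_from("<B", raw, 0)  # assumes the header is valid
    handler = HANDLERS.get(opcode)  # an unexpected byte (0x00, 0x9C, ...) yields None
    return handler(raw[1:])         # calling None is the crash; in a kernel driver
                                    # that's a bugcheck (BSOD), not a catchable error


if __name__ == "__main__":
    print(load_channel_file(bytes([0x01]) + b"payload"))  # well-formed file: fine
    print(load_channel_file(bytes(64)))  # zeroed file: TypeError here, BSOD in ring 0
```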

If you don't understand what I just said, here are some folks who spent good time and effort explaining it.

https://www.youtube.com/watch?v=pCxvyIx922A&t=312s

https://www.youtube.com/watch?v=wAzEJxOo1ts

You can blame his leadership who did not authorise the additional time and cost for sandbox testing.