XZ Hack - "If this timeline is correct, it’s not the modus operandi of a hobbyist. [...] It wouldn’t be surprising if it was paid for by a state actor."

Jennykichu@lemmy.dbzer0.com to Open Source@lemmy.ml – 451 points –
Technologist vs spy: the xz backdoor debate
lcamtuf.substack.com

Thought this was a good read exploring some of the "how and why", including the several apparent sock-puppet accounts that convinced the original dev (Lasse Collin) to hand over the baton.


It's also pretty bad that it intersects with another problem: bus factor.

Having just one person as maintainer of a library is pretty bad. All it takes is one accident and no one knows how to maintain it.
So, you're encouraged to add more maintainers to your project.

But yeah, who do you add, if it's a security-critical project? Unless you happen to have a friend that wants to get in on it, you're basically always picking a stranger.

honestly these people should be getting paid if a corporation wants to use a small one-man foss project in their own multibillion-dollar software. the lawyer types in foss could put that in GPLv5 or something whenever we feel like doing it.

also hire more devs to help out!

If you think people are going to be trustworthy just because they are getting paid you are naive.

not trustworthy per se, but maybe less overworked and less inclined to review code hastily, or less tired and less prone to the lapses in judgement that make such a project vulnerable to stuff like this.

these people maintain the basis of our entire software infrastructure thanklessly for us in between the full time jobs they need to survive, this has to change.

as for trust in foss projects, the community will often notice bad faith code just like they just did (and very quickly this time, i might add!)

I guess you are using trust in a different way here. Trust in competency can vary with both volunteer and paid workers, and everyone makes mistakes. Trust that someone won't do something deliberately malicious is a different matter.


i can't see how paying someone would have changed anything in this scenario.

this seems to be a long-running campaign to get someone into a position where they could introduce malicious code. the only thing different would have been that the bad actor was also being paid by someone.

this is not to say that people working on foss should not be paid. if anything, we need more people actively reviewing code and release artifacts, even if they are not a contributor or maintainer of a piece of software.

i can't see how paying someone would have changed anything in this scenario.

we need more people actively reviewing code and release artifacts

I think you’ve answered your own question there

no, the solution is not to pay someone to have someone to blame if shit happens.

there is a busload of people involved on the way from a git repo to actual stuff running on a machine, and everyone in that chain is responsible for keeping an eye on what they're building/packaging/installing/running. if something seems off, it's their responsibility to investigate and communicate with each other.

attacks like this will not be solved by paying someone to read source code, because the code in the repo might not be what actually runs on a machine, might look absolutely fine in a vacuum, or might be altered by some other part of the chain. and even if you have dedicated code readers, you can't be sure that they are not compromised or that their findings will reach the people running/packaging/depending on the software.
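a minimal sketch of one concrete check anyone in that chain could run, assuming an unpacked release tarball and a clean checkout of the matching git tag (paths here are hypothetical): the xz backdoor's build-time stage shipped in the release tarballs but was absent from the git tree, so a recursive diff of the two trees is exactly the kind of thing that would have surfaced it.

```shell
# Compare an unpacked release tarball against a checkout of the matching
# git tag and report whether anything differs.
compare_trees() {
  # $1 = unpacked release tarball, $2 = checkout of the matching tag
  if diff -rq "$1" "$2" >/dev/null 2>&1; then
    echo "trees match"
  else
    echo "MISMATCH: release artifact differs from tagged source"
  fi
}
```

in practice generated files (configure scripts, man pages) will legitimately differ, so a real tool would need an allowlist of expected differences.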

Of course you can’t be sure anyone involved, paid or not, isn’t compromised. But if you want more human effort put into a project, people need a reason to do so. Complaining that volunteer contributors don’t spend enough of their time and effort with no compensation isn’t going to solve anything. Maybe AI tools will make that work more available in the near future.

If my job didn't pay me, I would have certainly burned out years ago. For one, I'd need another job.


I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and update the potentially thousands of packages that rely upon the project as a dependency.

Maybe, before a library or any software gets accepted into a distro, that distro does more due diligence to ensure it's a sustainable project and meets requirements like a solid ownership?

The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.

Nothing I've seen would completely avoid risk. Blackmail upon an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and they got personally compromised later? (I don't seriously think that is the case here though - this feels very much state sponsored and very well planned)

It's good we're asking these questions. None of them are new, but the importance is ever increasing.

Maybe, before a library or any software gets accepted into a distro, that distro does more due diligence to ensure it’s a sustainable project and meets requirements like a solid ownership?

And who is supposed to do that work? How do you know you can trust them?

Fair point.

If the distro team is compromised, then that leaves all their users open too. I'd hope that didn't happen, but you're right, it's possible.

  • Careful choice of program to infect the whole Linux ecosystem
  • Time it took to gain trust
  • Level of sophistication in introducing backdoor in open source product

All of these are signs of a persistent threat actor, aka a state-sponsored hacker. Though the real motive we may never know, as it's now a failed project.

imagine how pissed they are. or maybe they silently alerted the microsoft guy themselves, since they only did it for cash and they'd already been paid.

I am sure most superpowers in the world can easily sink 2 years into maintaining an obscure project in order to break a system as important as OpenSSH.

I doubt they will be pissed over one failure, and we can only hope there aren't more vulnerable projects out there (spoiler alert: probably many).

Hopefully shows why you should never trust closed source software

If the world didn’t have source access then we would have never found it

And if they do find it, it'll all be kept hush hush, they'll force an update on everyone with no explanation, some people will do everything in their power to refuse because they need to keep their legacy software running, and the exploit stays alive in the wild.

open source software getting backdoored by nefarious committers is not an indictment on closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and its introduction of faults in valgrind, neither of which required the source to discover.

the only thing this proves is that you should never fully trust any external dependencies.
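a rough sketch of the kind of measurement that surfaced it, wall-clock mean over repeated runs of a command in milliseconds (hedged: the reported symptom was ssh logins burning roughly half a second of extra cpu; the ssh target in the usage comment is a placeholder):

```shell
# Average wall-clock time of a command over N runs, in milliseconds.
mean_ms() {
  # $1 = number of runs, remaining args = command to time
  local n=$1; shift
  local start end total=0
  for _ in $(seq "$n"); do
    start=$(date +%s%N)   # nanoseconds since epoch (GNU date)
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    total=$(( total + (end - start) / 1000000 ))
  done
  echo $(( total / n ))
}

# e.g. mean_ms 10 ssh -o BatchMode=yes user@host true
```

an unexplained jump in a number like this, on a box you know, is exactly the anomaly that kicked off the whole investigation.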

The difference here is that if a state actor wants a backdoor in closed source software they just ask/pay for it, while they have to con their way in for half a decade to touch open source software.

How many state assets might be working for Microsoft right now, and we don't get to vet their code?

"Paid for by a state actor" Yes, who knows.

  • Could be a lone "black hat" or a group of "black hats". Who knows.

  • Could be the result of a lot of public criticism in the news regarding Pegasus spyware. Who knows.

  • Could be paid by companies without any state actors involved. Who knows.

  • Could be a lone programmer who wants power or is seeking revenge for some heated mailing list discussion. Who knows.

The question of trust has been mentioned in this case of a sole maintainer with health problems. What I asked myself is: how did this trust develop years ago? People trusted Linus Torvalds and used the Linux kernel to build distributions with, to the point that the kernel grew from a tiny hobby thing into a giant project. At some point compiling from source code became less fashionable and most people downloaded and installed binaries. New projects started, and instead of tar and gzip, things like xz and zstd were embraced. When do you trust a person or a project, and who else gets on board a project? Nowadays something like:

curl -sSL https://yadayada-flintstones-revival.com | bash

is considered perfectly normal as the default installation of some software. Open source software is cool and has kind of produced a sort of revolution in technology but there is still a lot of work to do.
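A slightly less trusting pattern, sketched under the assumption that the project publishes a checksum through a separate channel (the function name and the checksum placeholder are mine, not any real tool): fetch the script to a file, verify it, and only then execute it.

```shell
# Run a downloaded script only if it matches a checksum obtained
# through a trusted, separate channel (release notes, signed email...).
verify_then_run() {
  # $1 = downloaded script, $2 = expected sha256
  echo "$2  $1" | sha256sum -c --quiet - || { echo "checksum mismatch"; return 1; }
  bash "$1"
}

# curl -sSL https://yadayada-flintstones-revival.com -o install.sh
# verify_then_run install.sh "<sha256 from a trusted channel>"
```

It doesn't help if the attacker also controls the channel publishing the checksum, but it at least defeats a compromised mirror or a truncated download.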

Bootstrapping a full distribution from a 357-byte seed file is possible in GUIX:

https://lemmy.ml/post/8046326

If that seed is compromised, then the whole software stack just won't build.

It's an answer to the "Trusting Trust" problem outlined by Ken Thompson in 1984.

Some of the trust comes from eyes on the project thanks to it being open source. This thing got discovered, after all. Not right away, sure, but before it spread everywhere. Same question of trust applies to commercial software too.

Ideally, PR reviews help with this, but smaller projects, especially those with few contributors, may not do much of that. I doubt anyone has spent time understanding the software supply chain (SSC) attack surface of their product, but that seems like a good next step. Someone needs to write a tool that scans the SSC repos and flags certain measures, like the number of maintainers.
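A sketch of what that flagging tool might look like, assuming a local git checkout; the two-year window and the threshold are arbitrary illustration values, not established heuristics:

```shell
# Flag a repository whose recent history has fewer distinct commit
# authors than a given threshold.
flag_low_maintainers() {
  # $1 = path to a git checkout, $2 = minimum distinct recent authors
  local repo=$1 min=$2
  local n
  n=$(git -C "$repo" log --since="2 years ago" --format='%ae' | sort -u | wc -l)
  n=$((n))  # normalize any whitespace from wc
  if [ "$n" -lt "$min" ]; then
    echo "FLAG: only $n recent author(s)"
  else
    echo "ok: $n recent authors"
  fi
}
```

Commit-author counts are easy to game with sock puppets (as this very attack shows), so a real tool would want to weigh account age and cross-project activity too.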

PS: I have the worst allergies I've had in ages today and my brain is in a histamine fog so maybe I shouldn't be trying to think about this stuff right now lol cough uuugh blows nose

Any speculation on the target(s) of the attack? With Stuxnet, the US and Israel were willing to infect the whole world to target a few nuclear centrifuges in Iran.

Definitely a state-sponsored attack. It could be any nation, from the US to North Korea and any other nation in between.

There is some indication based on commit times and the VPN used that it's somewhere in Asia. Really interesting detail in this write up.

The timezone bit is near the end iirc.

Good writeup.

The use of ephemeral third party accounts to "vouch" for the maintainer seems like one of those things that isn't easy to catch in the moment (when an account is new, it's hard to distinguish between a new account that will be used going forward versus an alt account created for just one purpose), but leaves a paper trail for an audit at any given time.

I would think that Western state sponsored hackers would be a little more careful about leaving that trail of crumbs that becomes obvious in an after-the-fact investigation. So that would seem to weigh against Western governments being behind this.

Also, the last bit about all three names seeming like three different systems of Romanization of three different dialects of Chinese is curious. If it is a mistake (and I don't know enough about Chinese to know whether having three different dialects in the same name is completely implausible), that would seem to suggest that the sponsors behind the attack aren't that familiar with Chinese names (which weighs against the Chinese government being behind it).

Interesting stuff, lots of unanswered questions still.

What is the trail of crumbs? Just some random email accounts?

This was in a big part a social engineering attack, so you can't really avoid contact.

Stuxnet was an extremely focused attack, targeting specific software on specific PLCs in a specific way to prevent them mixing up nuclear batter into a boom boom cake. Even if it managed to affect the whole world, it was a laser compared to this wide net.

Given how low level it is and the timespan involved, there probably wasn't a specific use in mind. Just adding capability for a future attack to be determined later.

I'd be super surprised if this was Western intelligence. Stuxnet escaping Natanz was an accident, and there is no way that an operation like this would get approved by the NSA's Vulnerabilities Equities Process.

My money would be on the MSS or GRU. Outside chance this is North Korean, but it doesn't really feel like their MO.


I had assumed it was probably a state sponsored attack. This looks like it was planned from the beginning, and any cyber attack that had years of planning and waiting strikes me as state-sponsored.

Historically there have been several instances of anarcho-communist organizations and social movements flourishing.

Most of them were sabotaged by plutocrat agents provoking violence or mischief. Often just by giving angry militants in the region some materiel support and bad intel.

What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?

I wonder how many OSS projects include backdoors that don't show up in performance checks.

What if the unexpected SSH latency won’t be introduced, this backdoor would live?

I'm confused by this sentence. It uses future tense in the first clause and then conditional in the second. Are you trying to express something that could've taken place in the past? Then you should be using "had been". See conditional sentences.

What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?

Or are you trying to express something else?


Thanks, what you wrote is what I meant:

What if the unexpected SSH latency hadn’t been introduced, this backdoor would live?

Unix since 1979: upon booting, the kernel shall run a single "init" process with unlimited permissions. Said process should be as small and simple as humanly possible and its only duty will be to spawn other, more restricted processes.

Linux since 2010: let's write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.

Linux since 2015: we should patch unrelated packages so they send notifications to our humongous system manager about whether they're still running properly. It's totally fine to make a bridge between a process that accepts data from the outside before anyone has even logged in and our absolutely secure system manager.

Excuse the cheap systemd trolling; yes, it does actually split into several less-privileged processes, but I still consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever more difficult to replace or create alternatives to (similar to web standards if only a single browser implementation existed).

And so the microkernel vs monolithic kernel debate continues...

its only duty will be to spawn other, more restricted processes.

Perhaps I'm misremembering things, but I'm pretty sure SysVinit didn't run any "more restricted processes". It ran a bunch of bash scripts as root, and said bash scripts were often absolutely terrible.

I'm curious to know about the distro maintainers that were running bleeding edge with this exploit present. How do we know the bad actors didn't compromise their systems in the interim?

The potential of this would have been catastrophic had it made its way into the stable versions: they could have, for example, accessed the build servers for Tor or Tails or Signal and targeted the build processes. Not to mention banks and governments and who knows what else... Scary.

I'm hoping things change and we start looking at improving processes in the whole chain. I'd be interested to see discussions in this area.

I think the fact they targeted this package means that other similar packages will be attacked. A good first step would be identifying packages used by many projects and maintained by one or very few devs, even more so if they run with root access. More devs means more chance of scrutiny, so attackers will likely go for packages with one or few devs to improve the odds of success.

I also think there needs to be an audit of every package shipped in the distros. A huge undertaking; perhaps it can be crowdsourced, and the big companies (FAAGMN etc.) should heavily step up here and set up a fund for audits.

What do you think could be done to mitigate or prevent this in future ?

Interesting to hear and it wouldn't surprise me either tbh. At least none of my systems were vulnerable apparently, which is good because I am running the latest Ubuntu LTS and latest Proxmox - if those were affected then wow this would have affected so many more people.

At least none of my systems were vulnerable apparently

none that you know of

I ran the detection script, that's why I claim that apparently my systems were not vulnerable.
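For reference, the simplest family of those detection scripts just classifies the installed version string; the backdoored releases were xz/liblzma 5.6.0 and 5.6.1 (CVE-2024-3094). A hedged sketch of only that check (real scripts also look at whether sshd links liblzma and scan for the payload's byte signature):

```shell
# Classify an xz version string against the known-backdoored releases.
classify_xz_version() {
  case "$1" in
    5.6.0|5.6.1) echo "possibly vulnerable" ;;
    *)           echo "ok" ;;
  esac
}

# First line of `xz --version` looks like "xz (XZ Utils) 5.4.5":
installed=$(xz --version 2>/dev/null | awk 'NR==1 {print $4}')
classify_xz_version "${installed:-unknown}"
```

A clean version check only means you weren't running the known-bad releases; it says nothing about other, undiscovered compromises.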

Do you think Jia Tan is alive now to talk about his famous bug?

Jia Tan is most definitely not a person, just the publicly facing account of a group of people.