nusl 13 days ago

I feel that Jia Tan would also have inspired other, perhaps less well-funded, bad actors to try similar things.

Often you'll see one attack succeed or semi-succeed, and because the underlying plan was a good one, it gets copied by less sophisticated attackers who can't come up with their own plans but can imitate attacks that have already had some success.

While technical knowledge is required to execute this, the social engineering aspect is more accessible. Backdoors aren't too hard to find and add even if you're not super technical. This means the volume of such attacks is likely to increase, and they will be harder to defend against.

I'm really hoping that the XZ attack makes folks more wary, though given enough time the feeling of urgency will pass and folks may become lax again.

Major vendors like GitHub can have a good impact here by adding tools to detect and prevent bad actors operating in this manner, though a lot of open source doesn't revolve around centralised platforms like this and still requires people to stay vigilant.

I kinda feel that it's inevitable for such an attack to take place in the future and have similarly devastating consequences. Things are just too decentralised and independent (which is a good thing) to cover all bases effectively.

We can hope that maintainers of very commonly-used packages/tools are vigilant but people are people.

  • Piskvorrr 13 days ago

    That's one side of the issue. The other: where have such takeovers succeeded, and continue undetected, because the attacker did not (yet?) make a 500000-millisecond-blunder?

    • gpjt 13 days ago

      Absolutely. The framing around this is mostly about "when is the next one", but given the unlikely combination of errors and circumstances that led to this one being discovered, why would we think that it's the first time? It seems likely that this has been going on for some time, against various projects, and this is just the first to be noticed.

    • sadjad 13 days ago

      500000-microsecond-blunder*

Lockal 13 days ago

> It’s not surprising that the OpenJS foundation and the JavaScript ecosystem, which is used by over 95% of all websites, would be the next target of threat actors

I'm certainly glad that the Executive Directors of the "OpenJS Foundation" and "Open Source Security Foundation" feel so important, but something tells me that this is a bit of... an exaggeration... Maybe they should join forces with an "Open Website Foundation" and an "Open Hardware Foundation" to cover 100% of the bullshit-bingo misleading naming of organisations that are unnecessary in the first place.

  • delfinom 13 days ago

    Grifters gotta grift and earn their pay somehow.

dist-epoch 13 days ago

A vulnerability like this one sells for $1 mil.

At this point it is cheaper to hire 5 full-time programmers for a year just to introduce a vulnerability into an open-source project. Do it the old Google way: you spend 80% of your time on legit features and bug fixes to build credibility, and the other 20% on the backdoor.

  • realusername 13 days ago

    Especially since a normal vulnerability is exploitable by anybody who finds it.

    A state actor that finds one then has two choices: keep it secret at the cost of leaving a lot of their own computers vulnerable, or disclose it and lose the ability to use it.

    Backdoors, on the other hand, can be locked down so that only their authors can use them, like the one introduced in xz. They don't carry this dilemma.
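
    For a sense of what "locked" means here: the xz backdoor reportedly only acted on payloads signed with the attacker's Ed448 key, so nobody else could trigger it even once the code was public. A toy sketch of that gating idea in Python, using Ed25519 from the cryptography package purely for illustration (the names and structure are hypothetical; the real thing was heavily obfuscated):

        # Minimal sketch of key-gated activation, assuming the "cryptography" package.
        # Only the holder of the private key can produce an accepted payload.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Hypothetical attacker key pair; in a real backdoor only the public half
        # would ship inside the compromised component.
        attacker_key = Ed25519PrivateKey.generate()
        EMBEDDED_PUBLIC_KEY = attacker_key.public_key()

        def gate(payload: bytes, signature: bytes) -> bool:
            """React only to payloads signed with the embedded key."""
            try:
                EMBEDDED_PUBLIC_KEY.verify(signature, payload)
            except InvalidSignature:
                return False  # everyone else sees normal behaviour
            return True       # only the key holder reaches this branch

        assert gate(b"cmd", attacker_key.sign(b"cmd"))
        assert not gate(b"cmd", b"\x00" * 64)

    That is exactly why the operator's own fleet stays safe: a third party who finds the hook still can't use it without the private key.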

    • voakbasda 13 days ago

      That’s a false dichotomy. Once a vulnerability is found (or manufactured), the attacker can patch their own systems while continuing to exploit other vulnerable systems.

      • dist-epoch 13 days ago

        Their own systems might be an entire country. You can't hide patching on that scale.

      • realusername 13 days ago

        How many government departments can you realistically reach before it leaks out somewhere, though?

  • austin-cheney 13 days ago

    Sounds like it might actually be cheaper to hire people who can really program, as opposed to the more common pretenders who rely on NPM and React, Angular, or jQuery to do 98% of the job.

    • TheNewsIsHere 13 days ago

      I don’t necessarily disagree with your point, but frameworks and libraries are very useful. Their value could be said to depend on whether the risks of relying on them end up being worth it.

      A very well vetted library to do something tricky may be more reliable than reinventing the wheel.

      But to your point — a former employer of mine is the company behind a major SaaS application used everywhere from “you and I” to companies like Apple, Workday, IBM, and Disney.

      I once overheard an executive remark that “we need to do something” about the (figurative) “million NPM packages and JS libraries” that got pulled down when you compiled the server.

      • austin-cheney 13 days ago

        The reductio ad nauseam of it is always fear versus measures. People who fear (fill in the blank reason) require outside parties to solve basic problems for them. This is why people use frameworks: because it's this big thing somebody else wrote that does things you don't want to have to consider.

        If, on the other hand, your primary consideration is to achieve some manner of measurement, you won't care what tool you use, if any, because it's all arbitrary. The deciding factor is not your self-interested level of comfort but one numeric quality compared to some other numeric quality.

        This is why I abandoned my 15-year JavaScript career. It's full of cowardly people who cannot measure anything because, even when they are capable of it, measuring things may reveal uncomfortable and highly contrary conclusions.

vintagedave 13 days ago

The report, which is phrased as a press release, buries the lede. Some good quotes:

> I'm wondering if perhaps the Linux ecosystem is actually a bunch of aging infrastructure. If so, that fact might be obscured by seemingly healthy contributions to the Linux kernel. There's a ton of other "basic infrastructure" that gets shipped alongside the kernel, though: coreutils, binutils, gcc, make, autotools, cmake, zlib, xz, etc etc etc. -- Camdon Cady

> Our team at Socket catches 100+ software supply chain attacks in npm, PyPI, and Go every week.

> Software engineer Andrew Nesbitt has been playing around with the concept of a "blast radius" for open source security advisories on Ecosyste.ms. He uses the CVSS score of a security advisory multiplied by the number of repositories that depend upon that package to determine the “blast radius.”
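
A minimal sketch of that metric, with made-up numbers (the package names, scores, and dependent-repo counts below are purely illustrative, not Ecosyste.ms data):

    # Hypothetical sketch of the "blast radius" idea described above:
    # CVSS score of an advisory multiplied by the number of repositories
    # that depend on the affected package.
    advisories = [
        {"package": "widely-used-util", "cvss": 7.5, "dependent_repos": 120_000},
        {"package": "niche-internal",   "cvss": 9.8, "dependent_repos": 40},
    ]

    def blast_radius(advisory: dict) -> float:
        return advisory["cvss"] * advisory["dependent_repos"]

    for a in sorted(advisories, key=blast_radius, reverse=True):
        print(f'{a["package"]}: blast radius {blast_radius(a):,.0f}')

By that measure the moderately severe but ubiquitous package dwarfs the critical but obscure one, which is the point of the metric, and also the source of the objections below.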

  • redserk 13 days ago

    ...Wow, gauging a "blast radius" based primarily on CVSS score sounds incredibly hamfisted and naive.

    • TheNewsIsHere 13 days ago

      I agree.

      I could see some value in a measurement like that, but I agree that that particular method seems heavy-handed.

      This is partly what SBOMs are meant to solve for.

      I’m also not sure calling Linux “aging infrastructure” that is obscured by activity in the kernel is intellectually honest. Perhaps it’s better framed as a naive take. If the point is that shipping the rest of the OS (“all the dependencies”, if you will) alongside the kernel is a supply chain issue, the point would be better made that way.

    • PeterisP 12 days ago

      It's probably close to the best thing that can be done with the available data, even if the available data is bad. The main flaw of such a metric is that while you could argue the total impact is proportional to the number of machines affected, you can't reasonably argue that the impact is proportional to the CVSS score; the relationship is very non-linear.

      A vulnerability with CVSS 2.5 wouldn't have a quarter of the impact of a CVSS 10 vulnerability; its impact would be insignificant.

      • redserk 12 days ago

        The problem is that it really depends on your stack/application. A score pretends to be an absolute frame of reference to work from, and that is hardly reliable for assessing real risk and impact.

        Depending on your application and the vulnerability, a CVSS 10 in several internal services with all user input filtered may be substantially less risky than a public-facing component using a library with a CVSS 2.5.

        The only correct way to assess risk is to analyze how you use each component and determine whether your application would run into the issue in the first place. This work cannot be hand-waved away with a simple number, despite that practice being the hot fad in cybersecurity.

        • PeterisP 12 days ago

          While individual circumstances matter a lot for the impact on a single organization, for estimating the global "blast radius" of a vulnerability we'd expect all of that to roughly average out across the thousands or millions of different systems and organizations running that tool, and a library with 100 times more users can be expected (all else being equal) to have roughly 100 times more situations where it's used in a publicly reachable way. But the difference in average expected impact between a million systems with a 2.5 CVSS vulnerability (likely causing zero incidents) and a million systems with a 10.0 CVSS vulnerability (likely a wormable RCE) isn't 4 times; it isn't even close to that.
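
          To make the non-linearity concrete, here is a toy comparison in Python; the convex weight 2^(cvss - 10) is an arbitrary modelling choice for illustration, not an established formula:

              # Compare a linear CVSS weighting with an arbitrary convex one
              # for a million affected systems, as in the scenario above.
              systems = 1_000_000

              def linear_weight(cvss: float) -> float:
                  return cvss / 10.0

              def convex_weight(cvss: float) -> float:
                  return 2.0 ** (cvss - 10.0)

              for cvss in (2.5, 10.0):
                  print(f"CVSS {cvss}: linear {systems * linear_weight(cvss):,.0f}, "
                        f"convex {systems * convex_weight(cvss):,.0f}")

          Linear weighting says the CVSS 2.5 case is a quarter of the CVSS 10 case; the convex weighting puts it at roughly half a percent of it, which is closer to the intuition that low-severity issues cause almost no incidents.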

woleium 13 days ago

Shouldn’t performance testing be a part of any standard change process for a distribution? You can hide in the code, but you can’t hide from the laws of physics.
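
A minimal sketch of the kind of gate this implies: time a critical path against a stored baseline and fail the change if it regresses. The operation, baseline, and threshold below are hypothetical placeholders:

    # Toy performance gate for a change process: fail if the critical path
    # gets markedly slower than a recorded baseline.
    import statistics
    import time

    BASELINE_SECONDS = 0.010     # recorded from a known-good build (hypothetical)
    ALLOWED_REGRESSION = 1.5     # fail if more than 50% slower than baseline

    def critical_operation() -> None:
        # Stand-in for the code path under test (e.g. a decompression call).
        sum(i * i for i in range(50_000))

    def measure(runs: int = 20) -> float:
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            critical_operation()
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    if __name__ == "__main__":
        elapsed = measure()
        if elapsed > BASELINE_SECONDS * ALLOWED_REGRESSION:
            raise SystemExit(f"regression: {elapsed:.4f}s vs baseline {BASELINE_SECONDS:.4f}s")
        print(f"ok: {elapsed:.4f}s")

The xz backdoor added something on the order of half a second to sshd logins; a crude check like this on that path could plausibly have flagged it before a human happened to notice.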

  • bennyhill 13 days ago

    Please no..

    You can find some computation to optimize just as you can find memory to compact, so getting caught here was largely down to not believing performance was going to be part of the test.

    But if you aren't in it for ulterior motives, there is branch prediction and so on, so an apparent performance loss may exist only in the imagination of a CPU designer whose idea of a typical workload happens to coincide with a uniquely written benchmark.

  • art049 13 days ago

    I thought about this quite a lot and feel like it's super important as well. Often, having the proper infrastructure/tools/resources to automate performance testing is hard. I created codspeed.io to try and fix that.

    • Atotalnoob 13 days ago

      You have a typo: “elligible” should be “eligible”.

      Also, on mobile why do I have to scroll to the bottom of the page to find your pricing page? Why do you not have a menu, like most websites?

      • art049 13 days ago

        Thanks for reporting the typo. Menu will be there soon.