lionkor 2 years ago

> To make clear to everyone that I'm absolutely not joking:

> $ base64 private.key

> RBjU5k0Dfdqtyzx4ox6PfQoqrdCft/aFJieD2DQvloY=

> I'm publically leaking the key myself now. Don't trust it.

What?

https://github.com/lawl/NoiseTorch/releases/tag/0.11.6

  • exikyut 2 years ago

    Looking at the author's response to this through the lens of my own occasional anxiety, I wonder if they're having a small-to-moderate breakdown over the situation.

    I find it easy to get tangled up in the pedantry of correctness sometimes, and to see things like security as effectively binary (99.99999% might as well be 0%). Simplifying things down that way makes for a mentally straightforward (black-and-white) way of looking at things, but it simply doesn't gel with human imperfection in practice.

    I accidentally left Xvnc running "for a quick thing" on my VPS once, and when I returned later and wondered why it was a bit slow, I found it mining Monero. That taught me a few good lessons - like the security ramifications of the "once you've started you're on a roll" mental inertia, which can intuitively turn something I anticipate taking maybe 30min into something much longer once I'm over the hurdle of starting it; but it also taught me that I'm a bit OCD about intrusions, and (despite the fact that whoever found the VPS was likely more interested in XMR than the boring files on it) I still completely nuked it and reinstalled from scratch... because of course I had NOPASSWD in sudoers :/.

    So I empathize with total local device compromise, which would hurt. But by the same token, it may be singularly effective to grab a favorite/preferred device, reinstall just that, turn everything else off (and maybe even notifications as well) for a week or two, and just decompress.

    I've never actually heard of this project before (anything involving video with my hardware/internet connection = not going to work lol) but I can easily see that it's up near the top in terms of competence and execution at what it does and the niche it fills. The resiliency that comes with that will probably take more than a week or two to put a dent in, FWIW.

    • exikyut 2 years ago

      Was writing the above while a tiny bit distracted, and nicely managed to forget the half of the sentence that actually made my point lol. I'd specifically left Xvnc running with `-securitytypes=none` (no password :D), an option I always use locally (WPA2-PSK et al) whenever I spin up Xvnc instances (which I do frequently). So I copied that (comfortable, friction-reducing default) behavior to the pUbLiC InTeRnEt because "quick thing that I'll be tearing down in 5 minutes". And then of course the thing stayed up for a while because I mis-estimated. Yeah. Lesson learned.

      And this article has actually got me wondering about the wisdom of just segregating everything, BeyondCorp style, even in home LAN situations. The challenge there would be to achieve a comfort zone that makes it hard for lazy-me to make mistakes and which doesn't introduce friction that makes me want to throw up my hands and just mush everything onto the same network switch. I suspect that would probably wind up being an emergent, site-specific DevOps problem and a case of modeling the humans, not the computers (which in a home/lab setting are just prototyping tools).

      • castillar76 2 years ago

        It's for this reason I've been moving a great deal of my stuff over to a private ZeroTier network (Tailscale is also getting good reviews and on my list to look at). The stuff I trust is part of the ZT network and can access things on it from anywhere; I then don't have to expose stuff to the Internet at all.

        I still have to do some figuring about how to expose services I want to expose; I may create a second ZT network as a "DMZ" and then use an intermediary host as a choke point to filter and screen traffic.

      • aaaaaaaaata 2 years ago

        Still kinda goofy to bring up, given that automated attacks don't need you to "get lost in time" to happen: they happen as soon as you're open to the public.

jka 2 years ago

I think one of the largest risks that project-owner compromise poses to everyday users and businesses comes from widely used software with automatic updates.

That leads to an argument for updates being performed manually after inspection of the changes involved.

Counter-arguments could include:

- Users will not care to see what has changed in an update

- Security updates are important to roll out immediately

Responses to those could include:

- Automated update rollout to the majority of users could be conditional on a smaller, inspective subset community of users manually examining and approving the update first (not too dissimilar to a Quality Assurance process). In the context of project owner compromise like the example in the article, this should catch the issue and prevent rollout to users. If an update is approved "with concerns", then the review community is likely to share those concerns with a wider audience, leading to awareness and hopefully resolution.

- Security updates could be rolled out more quickly -- but with a requirement for sign-off by multiple security-focused engineers and product specialists. That could help to reduce exploit exposure time for users while providing for adequate review of changes (security fixes can, in themselves, be challenging to review and confirm).

Also potentially relevant to this topic: how would a community that uses proprietary software develop confidence in an update before choosing to apply it locally?

  • ryukafalz 2 years ago

    > Automated update rollout to the majority of users could be conditional on a smaller, inspective subset community of users manually examining and approving the update first (not too dissimilar to a Quality Assurance process). In the context of project owner compromise like the example in the article, this should catch the issue and prevent rollout to users. If an update is approved "with concerns", then the review community is likely to share those concerns with a wider audience, leading to awareness and hopefully resolution.

    This is essentially the Linux distribution model: the users paying more attention are the contributors or package maintainers. The trouble is that this doesn't scale with the sheer volume of software we use today. You don't see many distros trying to individually package everything they use from NPM, for example, and for good reason.

    I’m convinced that the only way around this in the long term is for our software to actually run with least privilege, including the libraries within applications.

    https://medium.com/agoric/pola-would-have-prevented-the-even...

acatton 2 years ago

> a key-infra open source project

then proceeds to mention a project that isn't officially packaged or distributed by any of the major distributions.

bayesian_horse 2 years ago

In my opinion, there isn't much difference, in terms of their "life cycle", between a vulnerability that is introduced intentionally and one that is introduced unintentionally.

Trust is always relative. Just as in commercial software, trust in the original authors is never total and can only grow with continuous verification and non-exploitation.

WesolyKubeczek 2 years ago

Dear Author/Maintainer,

Why do you even accept giant patches which you can’t review?!

  • jka 2 years ago

    It's a good question to ask, although in the case of this project, there don't seem to be any merged code reviews with large line diffs?

    • WesolyKubeczek 2 years ago

      Makes the story even murkier.

      I don’t know what happened there, but the issue linked sets off so many alarm bells that all I want is to run away.

      • bogwog 2 years ago

        I think the issue in the repo explains it pretty well: the maintainer's machines were compromised, meaning that any commit under his name could have come from an attacker. It doesn't matter how big or small the diffs are when the person in control of the repo is malicious. Even signed commits would not be safe from this unless he was using a physical signing key (e.g. a yubikey)

        • jka 2 years ago

          Perhaps I'm sorta nitpicking or being overly annoying by pointing this out, but use of a Yubikey wouldn't necessarily help if a malicious process modified the code to be submitted. They'd be signed-but-corrupted commits.

          The best safety net I'm aware of is that source code and/or diffs in plaintext (not binaries) are distributed, and that recipients inspect them.

          It's reasonable to distribute binaries in addition, although recipients take on risk by using those. That risk can be mitigated in the presence of source code and reproducible builds.
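
That mitigation boils down to a checksum comparison. A sketch, assuming the project ships reproducible builds (the file names and the Go build command are illustrative, not from the project):

```shell
# Build the binary yourself from the audited source tree...
# go build -trimpath -o noisetorch-local ./cmd/noisetorch

# ...then compare digests against the downloaded release binary.
sha256sum noisetorch-local noisetorch-release

# With a truly reproducible build, the digests must match exactly.
if [ "$(sha256sum < noisetorch-local)" = "$(sha256sum < noisetorch-release)" ]; then
  echo "binaries match"
else
  echo "MISMATCH: do not trust the release binary"
fi
```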

      • jka 2 years ago

        Despite my unbounded sense of optimism about everything in life, something makes me think there might be more situations like this ahead (it's a sign that software is being written and distributed more widely, which is good - and a sign that running tons of arbitrary code on all our machines and internetworking them all might lead to difficult-to-answer questions around compromise).

        So to harness that optimism again: it's a challenge and an opportunity to build the correct collaboration and analytical tools to audit codebases quickly and effectively in a distributed environment.

  • jesboat 2 years ago

    My interpretation was that the author meant "even if we tried to review all changes between current HEAD and the last known-safe commit, it would be difficult if there were large commits (e.g. a giant refactor.)"

    • WesolyKubeczek 2 years ago

      Then just fuck it, reset to last known good commit and work from there. If a contributor gives a shit about their PR, they will resubmit it again. If not, good riddance. If something has to be reimplemented again, it's usually way better the second time around anyway.

  • WesolyKubeczek 2 years ago

    Okay, if the account has been compromised: why don’t you keep a known-good offsite backup?

  • allcomx 2 years ago
    • lionkor 2 years ago

      Or just don't accept large patches, tell authors to split them into nice commits, and have others review it with you.

      The people who'll complain about "gatekeeping" (if any) are likely not contributors you'd want, anyways.

_wldu 2 years ago

Everyone should PGP-sign their git commits with a secret key stored on a YubiKey. Make small changes to your code, read the diff, then commit and sign before pushing to the repo. IMO, that's really the only way to protect the integrity of source code.

If you are adding large changes without carefully reading the diffs, and you do not sign the commits, it's just a matter of time.
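
For anyone who hasn't set this up, the git side is only a couple of config lines. A minimal sketch, assuming GnuPG is configured (the key ID below is a placeholder; with a YubiKey the private key lives on the token and gpg just delegates to it):

```shell
# Tell git which GPG key to sign with -- 0xDEADBEEFDEADBEEF is a placeholder.
git config --global user.signingkey 0xDEADBEEFDEADBEEF
# Sign every commit by default.
git config --global commit.gpgsign true

# Make a small change, then actually read the diff before committing.
git diff

# Commit; -S is already implied by commit.gpgsign, shown here for clarity.
git commit -S -m "Fix buffer size check"

# Anyone with your public key can later verify the signature.
git verify-commit HEAD
git log --show-signature -1
```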

  • andrewstuart2 2 years ago

    You don't even need that for historical integrity checks. As long as the commit history exists in two places, it's trivial to detect when one has changed, thanks to the Merkle-tree structure of the commit log.

    Signed commits are a nice way of also sealing that history and validating authorship or approval of some sort.
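
As a sketch of that detection (the remote names `origin` and `backup` are illustrative): because each commit hash covers its entire ancestry, comparing a single hash per branch between two independent copies is enough.

```shell
# Compare the tip of the same branch across two independent copies of the repo.
a=$(git rev-parse origin/master)
b=$(git rev-parse backup/master)

# Each commit hash commits to its whole ancestry (a Merkle DAG), so equal
# tips imply identical history; rewritten history cannot keep the same hash.
if [ "$a" = "$b" ]; then
  echo "histories identical"
else
  echo "history diverged or rewritten"
fi
```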

  • jsiepkes 2 years ago

    Would be nice if you could also use a timestamping service when signing your commit. That way even if your key is compromised you can still spot things that were signed after the compromise.

0xbadcafebee 2 years ago

Oof, not a great situation. I hope the devs can do an audit and confirm their code looks good. The C code and the models are the only things that need scrutiny.

However, if someone wanted to use this code immediately they could run it in a qemu VM and forward a port or something.
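
A minimal sketch of that kind of sandboxing with QEMU's user-mode networking (the disk image name, memory size, and port numbers are placeholders, not anything from the project):

```shell
# Boot a disposable guest; the only path from host to guest is one
# forwarded port, and the guest can be thrown away afterwards.
qemu-system-x86_64 \
  -m 2G \
  -enable-kvm \
  -drive file=throwaway.qcow2,format=qcow2 \
  -netdev user,id=net0,hostfwd=tcp:127.0.0.1:8080-:8080 \
  -device virtio-net-pci,netdev=net0
# hostfwd exposes guest port 8080 on 127.0.0.1:8080 of the host;
# nothing else on the host network is reachable from outside.
```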

Gordonjcp 2 years ago

It's a bit unclear what's going on there.

Is the codebase itself compromised? Did the developer's computer get compromised?

Did one of the external libraries that it pulls in from git get compromised?

  • lionkor 2 years ago

    Possibly both the codebase and the binaries. The author doesn't seem to know, either. If I were the author, I'd buy a CO detector. His comments are a bit off.

  • RektBoy 2 years ago

    Also, what is the reason behind packing an open-source project with UPX? https://github.com/lawl/NoiseTorch/issues/254#issuecomment-1...

    Also, this is a very good example of why everyone should use a VM for browsing the internet, or better yet a different machine. If you're popular enough, there will always be people willing to burn that $1mil RCE on you.

    • easrng 2 years ago

      Smaller binaries, I assume.

  • ale42 2 years ago

    Looks like the dev's computer got compromised, maybe by someone looking for bitcoin wallets. See here: https://github.com/lawl/NoiseTorch/issues/254

    • eole666 2 years ago

      "Found a very suspicious process in htop. Paniced. Later straced it and it was looking for wallet.dat. The OS itself was fairly fresh (q3 ish?). Sorry i dont think i have any logs or anything that isnt deleted. As you may see from my history, i paniced fairly hard. "

      Well, it seems the maintainer panicked hard really quickly, assumed NoiseTorch was compromised, and deleted everything on his computer that could have helped determine whether that was actually the case...

ushakov 2 years ago

from license:

> This program comes with ABSOLUTELY NO WARRANTY

when it says no warranty, they mean it