mwmanning 5 years ago

Hey since this is blown up I just want to address it directly.

I take responsibility for what happened here. My RubyGems.org account was using an insecure, reused password that had leaked to the internet in other breaches.

I made that account probably over 10 years ago, so it predates my use of password managers, and I haven't used it much lately, which is why I didn't catch it in a 1Password audit or anything.

Sometimes we miss things despite our best efforts.

Rotate your passwords, kids.

  • donkeyd 5 years ago

    Wow, that's a pretty well-executed and possibly targeted attack then. It blows my mind how easy it can be to perform a high-impact attack by abusing popular libraries. Hopefully this was caught before it got into production in high-profile implementations.

    • mwmanning 5 years ago

      Yeah I'm assuming the methodology is:

      1) Find high-value target libraries

      2) Grab the usernames of accounts with push access

      3) Check those against password dumps

      I feel really stupid about this, but like I said it was an oversight. I apologize and will try to do better.

      • 0x0 5 years ago

        Sounds like rubygems and other registries like npm should try to get hold of those password dumps and check them against their own account databases somewhat frequently!
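
        For what it's worth, the standard way to do this without handling raw dumps yourself is the Pwned Passwords range API. A minimal sketch in Ruby, assuming the public api.pwnedpasswords.com endpoint; note it only works at a moment when the plaintext is available, such as login or password change:

            require "net/http"
            require "digest"

            # k-anonymity lookup: only the first 5 hex chars of the SHA-1 ever
            # leave your server; the response lists matching suffixes and counts.
            def pwned_count(password)
              sha1 = Digest::SHA1.hexdigest(password).upcase
              prefix, suffix = sha1[0, 5], sha1[5..-1]
              body = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))
              line = body.each_line.find { |l| l.start_with?(suffix) }
              line ? line.split(":").last.to_i : 0
            end

            # Non-zero means the password appears in known dumps: force a reset.
            puts pwned_count("password123")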

        • dspillett 5 years ago

          If you find a reused password, how do you let the user know, though? If I got a "your account is vulnerable" message I'd ignore it as junk, like all the other ones I get pretty much daily. You could force a change the next time the user logs in to your interactive interface, but many users won't do that for some time.

          The best approach is probably to disable the account completely until an interactive login is made and a password reset can be forced, but some would be up in arms about the inconvenience caused: you can't just allow a simple reset, as the login could be coming from an attacker rather than the original user, so an extra channel would be needed to verify their identity. You might just have to leave the account locked forever and expect the user to create a new one - but now the old account and its content may be a dependency of many projects, which now break, unnecessarily so if no one nefarious ever actually logged in.

          • yebyen 5 years ago

            You could send that notification, invalidate any client tokens, and also disable the compromised password, forcing the user to re-authenticate through their email address, a la password reset, and I guess also verify they aren't reusing the same password.

            You wouldn't lock the account forever; the point is to make sure the person whose password was compromised knows about it, that the password is not the only factor used to regain access to the account, and that your service (rubygems) and its downstream users are not compromised as a result of the breach.

            Any groaning about the inconvenience of disabling account access until the password is changed can simply be shrugged away in favor of security concerns, with a link to this story about rest-client.

            By the time you have learned the user's plaintext password, their account may already have been compromised. There's a case to be made for disabling all downloads of any gems from the account until you've verified they aren't compromised. That might be over the top, especially for popular projects, as now we're talking serious inconvenience affecting potentially thousands of downstreams or more.

            It's a sticky situation, since you don't really know how long that password has been in the open for hackers to use and abuse once you've discovered it in a password dump.

          • josephwegner 5 years ago

            Heroku did this about a year ago. They have a list of known pwned passwords (probably haveibeenpwned, but honestly I'm not sure) and disallow accounts from using those passwords. When that change was implemented, any account using a pwned password had that password expired.

            https://status.heroku.com/incidents/1625

            (source: I work for Heroku Support)

          • marcus_holmes 5 years ago

            If a gem maintainer is re-using a known-compromised password they have absolutely zero right to be annoyed at the "inconvenience" of having to reset their password to something that isn't compromised.

            RubyGems has a responsibility to its users and community here. It (like npm) needs to take this stuff seriously.

          • ecnahc515 5 years ago

            Other sites I know of that actually implement this simply lock your account / force a reset so you can't log in with the existing credentials.

          • eli 5 years ago

            Presumably you'd use whatever procedure you use for a lost password?

            But simply forcing a password change at the next login after detecting an insecure password would not unduly burden anyone and would be better than doing nothing.

          • tinus_hn 5 years ago

            > but some would be up in arms about the inconvenience caused

            Sometimes you have to have your priorities straight. If you found the password, someone else can find it.

          • llamataboot 5 years ago

            Glassdoor sent me such an email this week: we found that your password was leaked, we have disabled your account and signed you out of all devices, and you need to create a new password to log in.

        • kevin_thibedeau 5 years ago

          That's not very practical if salted hashes are being stored.

          • marcus_holmes 5 years ago

            But the salt is in the database. You can hash all the known-compromised passwords with the salt and see if any match.

            • stevenwliao 5 years ago

              The salts should be different for each user, specifically to deter brute forcing of this nature.

              The only time you would have access to the plaintext is when the user logs in, so for rarely-logged-in users you would have to proactively reset their password or cross your fingers.

              • lamontcg 5 years ago

                Hopefully you don't transmit the password and are doing challenge/response so that you don't even have it when the user logs in.

                But even with 12-round bcrypt hashing, you should be able to fairly cheaply attack a list of 2,000 bcrypted passwords with a million-entry database of leaked e-mail/password combos in a GPU-month.

                Probably easier to force a password reset on everyone and then do the checking on password change, although you need to be careful there not to be sending the password.

                EDIT: uhm, wait - if you've got the e-mail address in the dump then there's only one user for it, so just grab their salt, hash the password, and check it. So that million-entry database should be checkable in a bit over half an hour...
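
                To make that concrete, here's a minimal sketch with the bcrypt gem, assuming a stored bcrypt digest per user (the salt is embedded in the digest, so one comparison per dump entry suffices):

                    require "bcrypt"

                    # One leaked (email, password) pair maps to at most one user row,
                    # so a single bcrypt comparison settles whether it's reused here.
                    def leaked_credential_matches?(stored_digest, leaked_password)
                      BCrypt::Password.new(stored_digest) == leaked_password
                    end

                    stored = BCrypt::Password.create("correct horse battery staple").to_s
                    puts leaked_credential_matches?(stored, "correct horse battery staple") # => true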

        • febeling 5 years ago

          Make 2FA mandatory. Apple did just that recently with their app stores. Someone authoring libraries should be able to handle that.

      • OskarS 5 years ago

        It happens. You've taken reasonable precautions to safeguard your online identity, which is all one can really ask. Sometimes things slip through the cracks. The hacker is to blame, not you.

        The larger question is whether gem/npm/cargo-style package managers are such a terrific idea in the long run. The security implications are pretty serious.

    • dspillett 5 years ago

      I doubt the initial attack was targeted. It was more likely a brute-force run testing known passwords against similarly-named accounts. Once a useful account was found, it could well have been sold on the appropriate black market rather than the finder using it themselves.

  • vorpalhex 5 years ago

    It can happen to us all.

  • tempguy9999 5 years ago

    > I take responsibility for what happened here

    that's.... rare. Well done.

ageitgey 5 years ago

It looks pretty bad if you had deployed this :(

Here is a summary of the exploit re-pasted from a great comment [1] written by @JanDintel on the github thread:

- It sent the URL of the infected host to the attacker.

- It sent the environment variables of the infected host to the attacker. Depending on your set-up, this can include credentials for services that you use, e.g. your database or payment service provider.

- It allowed the attacker to eval Ruby code on the infected host. The attacker needed to send a signed (using the attacker's own key) cookie containing the Ruby code to run.

- It overloaded the #authenticate method on the Identity class. Every time the method gets called, it sends the email/password to the attacker. However, I'm unsure which libraries use the Identity class; maybe someone else knows?

So... it potentially compromised your users' passwords AND (if you were on Heroku or similar, like many Rails apps are) system-level access to all your attached data stores. About as bad as it gets.

[1] https://github.com/rest-client/rest-client/issues/713#issuec...
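
For a sense of how that #authenticate overload works, here's a schematic of the interception technique in plain Ruby. It's purely illustrative: the class and credentials are made up, the real payload was obfuscated and its mechanics may have differed, and the exfiltration is replaced by a warning:

    # A stand-in for an app's auth class (hypothetical).
    class Identity
      def authenticate(email, password)
        email == "user@example.com" && password == "hunter2"
      end
    end

    # A prepended module sees every argument before the real method runs.
    module CredentialTap
      def authenticate(email, password)
        warn "intercepted credentials for #{email}" # malware would send these out
        super # then defer to the real method so nothing looks broken
      end
    end

    Identity.prepend(CredentialTap)
    Identity.new.authenticate("user@example.com", "hunter2")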

  • flixic 5 years ago

    Not only could the data have been accessed, it's entirely possible it was modified. Unless you have good logging of all data changes (at the DB level), it can be very difficult to detect such changes.

    The first hijacked version was released on August 13th.

sersi 5 years ago

I think that rubygems should consider automatically enforcing multifactor authentication for popular gems.

So any gem with more than 50,000 downloads should force the gem maintainer to have MFA set up before they can publish a new version or do anything with that gem.

Having MFA is not about protecting gem maintainers; it's about protecting users. So gem maintainers should not be allowed to be careless with security by not using MFA. It's not their choice to make.

  • bpicolo 5 years ago

    Or just any gem, period, regardless of download count? Seems like a fair minimum baseline for publishing to a public repository. MFA in modern times is low-effort.

    • krferriter 5 years ago

      Yeah, it should be for all gems, and the same goes for the npm and pip registries. If you're publishing code to an official public registry for pulling and execution by other users in the ecosystem, multifactor authentication should be required. Agreed that the effort required to add this nowadays is fairly small, such that it should be more widespread.

  • raesene9 5 years ago

    It's not just rubygems that has this issue; it's all the other repos too, most of which (AFAIK) don't enforce 2FA.

    Also worth noting that whilst MFA helps, it's not a panacea: MFA isn't generally compatible with automated CI/CD processes, so API keys will still be required, and those can be leaked/lost/stolen.

    • andrewaylett 5 years ago

      How about a split, so releases can be uploaded by API key, but only published by someone using MFA?

    • cooljacob204 5 years ago

      Why would a CI/CD pipeline need permissions to modify and commit code to a repo?

      • raesene9 5 years ago

        Say your CI pipeline runs automated tests, builds the gem and pushes to Rubygems, it needs permissions to push to Rubygems.

        So if an attacker compromises the API key used by that pipeline, they get the rights to push to Rubygems.

        • enneff 5 years ago

          Actually publishing new versions seems like something that happens infrequently enough that it would be fine to require a manual auth to complete it. Given the potential risks it seems prudent.

          • raesene9 5 years ago

            I'd expect that very much depends on the software in question. As one example that I'm aware of, gvisor from Google delivers nightly builds. So they're building and pushing every single day.

            It'd depend on the individual software library, and of course as a consumer of many libraries you generally have limited or no visibility into the practices of all your dependencies.

            • rubber_duck 5 years ago

              I don't think nightly builds are the same thing as releases - you can have CI publish a build, but creating a versioned public release should require manual auth.

              • raesene9 5 years ago

                That's a view, of course (although in the case of gVisor they don't actually do versioned builds, just nightlies), but here's a question.

                As a consumer of software libraries, have you ever looked into the security practices of the library author before choosing whether to use it or not?

                • ryanbrunner 5 years ago

                  This is becoming increasingly impractical for certain ecosystems as packages depend on packages, which depend on packages, etc. It's less of a problem in the bundler ecosystem, where larger packages with relatively few dependencies are the norm, but in the JS world installing a package means you're likely installing tens or hundreds of sub-dependencies.

                • rubber_duck 5 years ago

                  I did review two small libs I was pulling that had very few users (and froze the version). But that's just the thing - I will never be in a position to do that for more than a few libs; that's why the best I can do is rely on the source enforcing good practices and on community audits.

            • adrianN 5 years ago

              Every single day doesn't sound too often to have someone press a physical button.

          • zbentley 5 years ago

            Absolutely agree. One-click deploys are nice, but in some circumstances not worth the cost. The manual component can also be polished to the point where it's not that arduous (think github login with 2FA and clicking the "merge" button).

          • fahrradflucht 5 years ago

            The argument for automated releases is always that it is quite easy to make silly mistakes, like forgetting a build step, when doing manual releases. This has also happened to high-profile npm packages in the past, and people complained that there was no automation.

            • psychrometer 5 years ago

              I think they are suggesting a mostly automated build process with manual entry of a second-factor OTP at the deploy stage. Entering the 2FA code isn't something that is easy to mess up, and you still get the benefits of automated builds for the most part.

        • C4stor 5 years ago

          It doesn't seem hard to imagine a CI pipeline that sends an email/SMS for validation and waits for an affirmative answer before continuing. I have no idea how to receive texts on a server, but sending and receiving emails is super easy.
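
          A rough sketch of such a gate in Ruby - the approval endpoint and release id are hypothetical, and the email-sending step is omitted:

              require "net/http"

              # Hypothetical endpoint that returns "approved" once a human clicks
              # the link that was emailed out with the release summary.
              approval = URI("https://ci.example.com/approvals/release-123")

              # Block the pipeline until the affirmative answer arrives.
              sleep 30 until Net::HTTP.get(approval).strip == "approved"
              puts "release approved - continuing the publish step"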

    • sersi 5 years ago

      Yes, I agree - it's not only rubygems; npm had the same problem not so long ago, and it's a general issue across all repos.

      One thing that would be inconvenient but would protect against that would be to have the API work as usual, but require MFA and a login to the website to approve a new release (with information there listing the IP and time of upload). That would only make sense for heavily used gems like this one, but it seems it would stop most issues?

      • raesene9 5 years ago

        It would help for sure; I'd guess the problem would be whether the maintainers of all these libraries would be happy with the overhead this would introduce.

        A less invasive control might be to notify all owners when a new version is pushed, so they would be aware of the risk if they weren't expecting a new release. Not perfect, but something.

  • vesinisa 5 years ago

    It's only a matter of time before this happens. Once any one of the big players (rubygems, npm, Maven Central, PyPI) enforces 2FA, all the other repos will soon have to follow suit or risk giving the appearance of a haphazard attitude towards user security.

    2FA is generally trivial for maintainers to adopt. There is simply no excuse not to require it at this point for all new uploads. The status quo of hoping maintainers never re-use passwords / use weak passwords / have their machines hacked is clearly not working, since security incidents like this keep happening every other week with Rubygems/npm/etc.

    • acegopher 5 years ago

      PyPI has 2FA for user logins to the website, which:

      "safeguards against malicious changes to project ownership, deletion of old releases, and account takeovers. Package uploads will continue to work without users providing 2FA codes."

      They are also working to enforce 2FA on uploads:

      "But that's just for now. We are working on implementing per-user API keys as an alternative form of multifactor authentication in the setuptools/twine/PyPI auth flows. These will be application-specific tokens scoped to individual users/projects, so that users will be able to use token-based logins to better secure uploads. And we'll move on to working on an advanced audit trail of sensitive user actions, plus improvements to accessibility and localization for PyPI. More details are in our progress reports."

      From: http://pyfound.blogspot.com/2019/06/pypi-now-supports-two-fa...

  • holtalanm 5 years ago

    I turned on 2FA for my npm account, and my published libraries aren't even at 1K downloads _total_ yet.

    These guys with libraries that have a massive user base really have no excuse besides laziness for not having 2FA on their accounts.

elaus 5 years ago

It's mind-boggling to think how fragile and potentially dangerous those dependency ecosystems are – whether it's Ruby, JS, PHP, or another language widely used for web apps.

We all just hope that nothing bad will happen or that it will be noticed fast enough. Accounts get compromised, maintainers quit and transfer their project, bad actors might even pay the dev of some lesser-known dependency…

I have no easy solution for this problem, and of course I too use external dependencies in my projects - but it feels like it's only a matter of time till disaster happens, and most of us will just ignore the problem till then.

  • hombre_fatal 5 years ago

    The first thing that needs to become standard is a source code / diff viewer right there inline on the repository.

    One of the most ridiculous things about most package repositories (npm, rubygems) is how opaque they are. It's a mere courtesy that a package links to its GitHub repo, and a gentlemen's agreement that the repo actually represents the code that will get run with 'npm/gem install'. There are various ways to go about this, like the package repo requiring linkage to an actual git repo that it builds from.

    Commonly pitched solutions like 2FA are useful but don't do anything to stop the case of a malicious actor actually having publish rights, like the trivial attack where you simply offer to take a project off someone's hands. But diffing releases and reading source code should be absolutely trivial at the very least.
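
    In the meantime you can approximate a release diff locally with standard gem tooling (version numbers here are just for illustration):

        gem fetch rest-client --version 1.6.9
        gem fetch rest-client --version 1.6.13
        gem unpack rest-client-1.6.9.gem
        gem unpack rest-client-1.6.13.gem
        diff -ru rest-client-1.6.9/ rest-client-1.6.13/   # inspect what actually changed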

  • raesene9 5 years ago

    Yeah this problem has been a known issue for a long time, but there really is no easy fix, and incentives are stacked against it getting resolved in any meaningful way.

    I did a talk for AppsecEU back in 2015 on this topic and found good material talking about it as a risk going back years before that...

    • 19ylram49 5 years ago

      I think I might start using fully isolated environments (i.e., via Vagrant/Docker/etc.) for all of my projects from now on (I already do for some).

      • jcoby 5 years ago

        That approach will mitigate your machine getting compromised (which is good) but it won't fix your production machines getting compromised if the gem or package gets deployed. That is usually a much worse outcome.

        And even in isolated environments I find myself running code outside of the container for testing - usually a quick script to test some package's functionality, opening a REPL to run something, or running a code generator (manage.py, artisan, etc.). That's all it takes for the malware to break out of the isolation and attack your machine.

    • jcoby 5 years ago

      > there really is no easy fix

      Is there any fix at all? Aside from something like multiple-account code signing/release verification, I can't think of anything that couldn't be compromised in some way.

      At the end of the day you have to trust someone and trust that they trust someone else. The problem is you have no way of vetting the entire dependency chain. You may have reviewed gem/package A but you aren't going to (realistically) review all of its dependencies and those dependencies' dependencies.

      At this point it's all a "many eyes" approach. And it seems to be working relatively effectively.

      • raesene9 5 years ago

        I'd debate the "relatively effective" piece in light of Webmin being backdoored for a year - and of course, remember we're only hearing about the ones that have been found, not the ones that haven't.

        There are a number of possible technical mitigations - maintaining internal package repositories, code review of key libraries, enforcing package signing and checking signatures, etc. - but all of them increase costs and decrease development speed, so they're not adopted that heavily.

        There are also possible mitigations at a legislative/policy level, but they would be so deeply unpopular that I'm sure they'd never pass muster in most countries.

        • deckard1 5 years ago

          Linus's Law (given enough eyeballs, all bugs are shallow) has never rung true to me. I've been using free software since '95.

          Taken at face value, yes: obviously more people looking at a specific piece of code will make it better. But this does not extend to the entire landscape of open source. Most developers, especially ones working as a hobby, would much rather write their own new code than look at someone else's old, boring code. We would rather reinvent the wheel a thousand times before touching a line of code written by someone else.

          This becomes even more dire when you look at code no one wants to touch, like TLS. The Heartbleed and goto-fail bugs existed for, IIRC, a few years before they were discovered. Not surprising, because TLS code is generally some of the worst code on the planet to stare at all day.

romaaeterna 5 years ago

Ruby gem hijacking also happened to strong_password a few weeks ago.

https://news.ycombinator.com/item?id=20377136

This is a major new line of attack, and web app infrastructure is critically weak against it. We rejected distro-controlled package management in favor of pip and gem and npm years ago (for good reasons), but as this sort of attack becomes much more common (which it will), we might find ourselves missing the days of strong central control.

Rubygems should have acted on the strong_password news, but missed the opportunity. I hope they can get their act together now that they are lucky enough to have a second chance before this style of attack really explodes.

  • raesene9 5 years ago

    With regards to Rubygems, the challenge is likely to be where they get the resources for additional security measures.

    All the major language repos are free at the point of use, and I don't get the impression the maintainers are exactly rolling in money, so it doesn't seem likely that they can easily ramp up on that front.

  • majewsky 5 years ago

    > We rejected distro-controlled package management in favor of pip and gem and npm years ago (for good reasons)

    While I'm generally a fan of distribution-provided packages, they would not have helped in this case. Distributions simply lack the manpower to audit all upstream releases for these kinds of issues.

    • hyperpape 5 years ago

      This gem was published six days before it was found, which means the effectiveness of the attack seems to have relied on it being picked up by people doing automatic upgrades. Wouldn't a distro help precisely because it is fundamentally less predictable about when it takes a new version?

robotfelix 5 years ago

It's worth noting that the hijacker pushed a malicious version of 1.6.x.

Version 1.7.0 was released to rubygems on 8th July 2014, and 2.0.0 on 2nd July 2016, so anyone who has started using rest-client or run a `bundle update` recently is unlikely to be affected.

The impact could have been significantly greater had the hijacker pushed new versions of 1.8.x or 2.x as well, so it's very fortunate the breach was spotted now.
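
To illustrate the mechanism: a project pinned to the old series with a pessimistic constraint like the one below (shown purely as an example) would have pulled in the hijacked release on its next `bundle update`:

    # Gemfile
    source "https://rubygems.org"

    # "~> 1.6.0" resolves to the newest 1.6.x - which, while the hijacked
    # releases were live, meant picking up the malicious version.
    gem "rest-client", "~> 1.6.0"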

  • rcfox 5 years ago

    That's a good point. Could indicate a targeted attack?

    • Implicated 5 years ago

      That was my first thought - releasing on a version that old seems deliberate, and why else would you do that if you weren't targeting something specific?

  • smudgymcscmudge 5 years ago

    Don’t most ruby projects vendor gems? So projects using 1.6 that didn’t bundle update wouldn’t be affected either. It sounds like only projects that were pinned to 1.6 that ran bundle update would be affected.

    I only have a passing familiarity with ruby gems, so I may be completely wrong.

  • beilabs 5 years ago

    Yeah; I just grepped all of my own repos - thankfully the version I use is quite out of date.

    This situation definitely lends weight to the push for 2FA by default on all of rubygems.

shioyama 5 years ago

I'm an author of gems with a total of more than 4 million downloads. I just set up 2FA after seeing this.

  • shioyama 5 years ago

    I actually ended up going further and removed myself as an author from gems I don't actually maintain anymore, which brings down my exposure considerably. It's not as easy as it could be to remove yourself as an author from a gem (you have to do it via the command line).
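
    For anyone looking for it, the command is along these lines (gem name and email are placeholders):

        gem owner some_gem --remove maintainer@example.com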

    • rcfox 5 years ago

      What happens to orphaned gems? It seems like someone could make the case for taking them over much more easily than if you had kept ownership and added a loud deprecation warning.

      • shioyama 5 years ago

        I wasn't the only author on those gems, the others are (mostly) still active. Not sure how orphaned gems are handled, though.

abarringer 5 years ago

Whitelist outbound access from your network. It's not perfect, and it can be painful to deploy (particularly with gems), but it stops several categories of attacks.

raesene9 5 years ago

Following closely on from Webmin's compromised CI/CD pipeline, this is another instance of the growing problem of supply chain attacks.

With the software supply chain being as complex as it is, and with the large number of moving parts, we're only going to see more of these...

raesene9 5 years ago

For all the people who are rightly concerned about these attacks, here are a couple of questions:

- Would you pay money for access to a package repository that had good security practices? How much would you pay? Would you accept delays in library updates to allow for security checks? If so, how long a delay would be acceptable?

- Have you ever looked into the security practices of open source applications or libraries that you wanted to use, and did the information you did or did not find affect your decision to use that software?

- How often do you use your inventory of all the libraries you depend on to periodically check their provenance and that they are maintaining good security practices?

Ultimately these problems (like most) are one of incentives. It's very easy to build software very quickly using the huge number of open source components that are freely available.

Whilst speed of development and price are the primary considerations, it's not surprising that security takes a lower rung on the ladder.

  • sundayedition 5 years ago

    I used to work for the government, and we had a private gem server that introduced much of what you're asking about: delays in releases as a trade-off for security practices. Ultimately, I don't think there was tooling available to automate the vulnerability scanning enough that a private server gave you much of a jump start on any CVE that bundle audit wouldn't already catch. The workaround of hitting GitHub directly to get a current version and bypass the private server is also always available.

    Additionally, GitHub provides a bundler-audit-like service for free. And identifying issues that aren't CVEs yet seems like something scriptable, but also obfuscatable (on the attacker's end).

    I don't think any team I've been on (federal or private) would pay extra for this kind of service, given the frequency with which we do updates and the amount of effort a careful developer spends when doing them. The most recent breaches were noted well in advance of any updates we would have done.

    I'd be glad to be wrong about it because I think a few more tools in this space would only help the community.

    It would be nice, though, if a service could summarize the differences between actual gem releases, making things like the changelog and the diff easily digestible for all the available updates (versus a scan/lint). That would let developers identify these kinds of breaches more easily than cloning and diffing.

  • whorleater 5 years ago

    Half of these are the Red Hat business model.

notyourday 5 years ago

This is actually a perfect illustration of why, in production, all of your systems should go through a whitelisting proxy rather than a NAT for outside access.

There should be a very limited number of known external URLs that your production system needs to hit. Whitelist them on a proxy. Block the rest. Dump the blocked requests into a log. Put alerts on that log. This will catch most data exfiltration attempts and attacks such as this one. Remember, the goal is not perfect security - the goal is better security than someone else's, so that someone else gets to be the chump and not you.
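
As a sketch of what that whitelist can look like with Squid (domains are illustrative):

    # squid.conf - allow only known egress destinations, deny and log the rest
    acl allowed_egress dstdomain .rubygems.org .github.com api.payments.example.com
    http_access allow allowed_egress
    http_access deny all   # denied requests show up in access.log - alert on those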

kpeekhn 5 years ago

Is there a way to check if a gem was released by an account using MFA?

If there were a "published with MFA" flag on every gem release, a Bundler setting could block installing gems pushed without 2FA.

Of course, this would also help attackers find targets. But maybe it's worth the trade-off?

  • hombre_fatal 5 years ago

    Seems like a pointless, false sense of security.

    What about all the attacks where the malicious actor is someone with publish rights, like friendly package takeover? Your proposal makes that even more effective since now the attacker gets a nice "published with mfa" badge.

    • kpeekhn 5 years ago

      It would have prevented this attack, so I'm not sure how it's pointless. Obviously it doesn't fix everything; MFA is MFA. I don't know why anyone would take it as a guarantee that some third party has audited all the code.

      • hedora 5 years ago

        I don’t see how it would have prevented this attack. It sounds like this was an old, semi-forgotten account (with an old password), so the attacker could have simply enabled 2FA, pushed the gems, and then disabled 2FA again.

      • hombre_fatal 5 years ago

        > It would have prevented this attack

        Your post was about a "published with MFA" vanity badge, which I was responding to, not the merits of MFA in general.

        • kpeekhn 5 years ago

          Sorry, that's not what I meant.

          I don't really care about a badge; I care about the information being available so that it can be used in Bundler. It's about developers being given the choice, in their Gemfile, to disallow installation of any gems uploaded without 2FA - see the hypothetical sketch below. But in order to do that, we need rubygems to publish that information.
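
          Something like the following - purely hypothetical syntax, since neither rubygems nor Bundler exposes this today:

              # Gemfile (hypothetical syntax, not a real Bundler option)
              source "https://rubygems.org"

              # refuse to install any release that was pushed without 2FA
              gem "rest-client", "~> 2.0", require_mfa: true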

          • hombre_fatal 5 years ago

            No, I understand you. My point is that whether a package uses 2FA or not should have zero impact on your security practices, yet your proposal suggests otherwise.

            It doesn't seem like it does you much good to know if a package uses 2FA except to potentially weaken your defenses. For example, any scrutiny you level at a non-2FA package should also be leveled at 2FA-enabled packages. Though I suppose there is a non-zero benefit, so I won't belabor this argument any further.

            Perhaps package repositories should be nagging publishers to enable 2FA. Though poorly implemented 2FA also introduces new attack vectors like the "lol lost my phone" social engineering attack.

htns 5 years ago

Seems rather similar to the strong_password case from a month back: https://news.ycombinator.com/item?id=20377136 . I wonder if anyone has checked basic things like scanning all of rubygems for "pastebin" or "eval( * http * )".
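
A crude version of that scan is easy to sketch in Ruby - assuming a directory of already-unpacked gem sources, with patterns that are illustrative rather than a real malware detector:

    require "find"

    PATTERNS = [/pastebin/i, /eval\s*\(.*http/i]

    # Walk every Ruby file under the unpacked gems and flag matching lines.
    Find.find("unpacked_gems") do |path|
      next unless path.end_with?(".rb")
      File.foreach(path).with_index(1) do |line, lineno|
        PATTERNS.each do |pat|
          puts "#{path}:#{lineno}: #{line.strip}" if line.match?(pat)
        end
      end
    end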

  • gotts 5 years ago

    It surprises me a bit.

    I'm wondering why RubyGems wouldn't implement some basic form of malware detection. This type of code shouldn't be too hard to classify.

    • derimagia 5 years ago

      Malicious users would just change their code slightly to get past it - use a different service than pastebin, or just obfuscate it more.

      • gotts 5 years ago

        After thinking about it... I think you must be right. Malware detection is not an easy task, especially given Ruby's dynamic nature.

        Even simple open(), sleep(), or eval() calls could be easily obfuscated.

u801e 5 years ago

Does rubygems allow for signing releases? For example, the maintainer could upload their public key and use their private key to sign a package release. Then the consumer could verify the signature via the public key. If the public key changes, the consumer could be alerted to that fact.

  • shioyama 5 years ago

    Yes it does, and I do this with my gems, but it's not widely used, and virtually none of the users of the gems I author take advantage of it.

    https://guides.rubygems.org/security/

    > However, this method of securing gems is not widely used. It requires a number of manual steps on the part of the developer, and there is no well-established chain of trust for gem signing keys. Discussion of new signing models such as X509 and OpenPGP is going on in the rubygems-trust wiki, the RubyGems-Developers list and in IRC. The goal is to improve (or replace) the signing system so that it is easy for authors and transparent for users.
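
    The basic flow from that guide looks roughly like this (names and paths illustrative):

        # one-time: generate a private key and a self-signed public cert
        gem cert --build you@example.com

        # in the gemspec, attach the cert chain and sign when building:
        #   s.cert_chain  = ["certs/you.pem"]
        #   s.signing_key = File.expand_path("~/.gem/gem-private_key.pem") if $0 =~ /gem\z/

        # consumers opt in to signature verification at install time
        gem install some_gem -P HighSecurity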

  • Ajedi32 5 years ago

    That's not really any better than 2FA against this specific threat.

    But to answer your question, yes there is a system for signing gems, though it's not widely used: https://guides.rubygems.org/security/

bdcravens 5 years ago

This is a fairly old version of the gem, but that's probably what the attacker was going for: users who infrequently move to new versions, while avoiding those who watch the most up-to-date version.

gmontard 5 years ago

This is sadly a good example of why relying on trusted and accountable API clients should be considered business-critical.

When we consume APIs without thinking about this, we are only building up technical debt and security issues for the future.

Today's example is really bad since it targets a widely used open-source API client library, but how many of these issues are already present in hundreds of other, more obscure open-source API clients?

westoque 5 years ago

Out of curiosity, is there a legal way to go after people who do these things? E.g., by filing a police report?

  • raesene9 5 years ago

    Unless the attacker had very poor OpSec, it would be hard to track them down, even assuming the relevant police force had the skills/manpower to do so.

    Then you get the delight of likely jurisdictional issues, if it turns out the attacker is not a resident of the same country as the victim that reported it.

    • jaclaz 5 years ago

      Only out of curiosity, I did a quick WHOIS search: it's a Ukrainian domain registered by a Polish company with its legal head office in Belize (at an address where there is seemingly a courier/transport company).

      And the same address is linked to a Malta based company that appears on the "Panama Papers".

    • ashleyn 5 years ago

      It's also likely it was done by an intelligence agency.

      • raesene9 5 years ago

        Whilst I'm not an intelligence analyst, this seems like a pretty basic/untargeted attack for a nation state.

        It could be one, but it could just as easily be one of the many criminal gangs who use credential spraying to get access to accounts and then figure out what they can do with them afterwards.

      • mschuster91 5 years ago

        Not really. The attacker seemed to target bitcoin once again... surprising this shit is still profitable, though.

        • dspillett 5 years ago

          > surprising this shit is still profitable

          If the cost of the attack is as near to zero as makes no odds, any income is profit - whether it comes from being able to compromise bitcoin-related accounts elsewhere, getting a miner to run on hundreds of servers and/or thousands of clients, or getting other details to use in a "send me bitcoin and I will/won't X" blackmail. And if there is no income from the attack, the cost of trying was near zero anyway.

t0mbstone 5 years ago

What's kind of interesting about this is the fact that it is able to just blindly dump environment variables.

For a long time, environment variables have been evangelized as the secure place to store credentials, but that just gives third-party scripts a known place to look.

You could argue that it might actually be more secure to store your secrets in a separate, custom config file that gets read into the Rails app via an initializer or something.
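
For example, a minimal Rails initializer along these lines (file name and constant are made up) would keep secrets out of ENV - though code running in-process can of course still read any file the app can:

    # config/initializers/app_secrets.rb (hypothetical)
    require "yaml"

    # Load secrets from a file instead of the usual ENV dumping ground.
    APP_SECRETS = YAML.safe_load(
      File.read(Rails.root.join("config", "app_secrets.yml"))
    ).freeze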

pqdbr 5 years ago

Many of these attacks seem to use pastebin. I will add a hosts entry pointing pastebin to localhost on my production servers.
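
That's just an /etc/hosts line per blocked host, e.g.:

    # /etc/hosts - blackhole the exfiltration endpoint
    127.0.0.1 pastebin.com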

gotts 5 years ago

Many popular gems have multiple authors (with push access) on RubyGems - 4 or 5 authors, sometimes even more. It may look impressive on their profiles, but from a security standpoint that's 4x or 5x the attack surface it could potentially be.

jbverschoor 5 years ago

Good thing your Gemfile.lock is checked in and gems hosted on rubygems.org are frozen/immutable.

willfiveash 5 years ago

Another case for universal adoption of 2FA.

hellothereyo 5 years ago

Why doesn't this repo mandate 2FA?