I think that's throwing the baby out with the bathwater; sane defaults are still an important thing to think about when developing a product. And for something as important as a database, which usually requires authentication or stores personal information, let your tutorials focus on those pain points instead of the promise of a database-driven app with only client-side code. Firebase is awesome, but I think it deserves the notoriety for letting you shoot yourself in the foot and landing on the front page of HN. The author also found a similar exploit via Firebase in the Arc Browser[0]
Any purported expert who uses software without considering its security is simply negligent. I'm not sure why people are trying to spin this to avoid placing the blame on the negligent programmer(s).
Weak programmers do this to defend the group making crap software. I agree that defaults should be secure, and maybe there should be a request limit on admin/full-access tokens - but then people will just create another token with full access and use it.
I don't think Firebase is really at fault here—the major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with firebase—it would have been just as impactful if the container building the code uploaded it to S3 from the buildbot.
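To make the least-privilege point concrete: the upload credentials handed to a build container can be scoped so they are only good for that one app's output path. A rough sketch of what that might look like as an S3 policy (bucket name and prefix are hypothetical):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "UploadOnlyThisAppsArtifacts",
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::example-build-artifacts/app-1234/*"
      }]
    }

With something like that in place, a compromised build for one app can't overwrite another customer's artifacts, which is exactly the failure mode being described here.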
Agreed. I recently stumbled upon the fact that even Hacker News is using Firebase for exposing an API for articles. Caution should be taken when writing server-side software in general.
The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.
I always find it unbelievable how we NEVER hold developers accountable.
Any "actual" engineer would be held accountable (at least the one signing off - but in software, developers never sign off on anything, and maybe that's the problem).
> update: cursor (one of the affected customers) is giving me 50k USD for my efforts.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
I'm a huge fan of the writing style. it's like hacking gonzo, but with literally 0 fluff. amazing work and an absolute delight to read from beginning to end
Capital letters aren't hard to use and help to make sentences stand out from each other properly. The overall style is good but the lowercase thing is obnoxious.
Obnoxious is a bit harsh - I liked the feeling it gave to the article, found it very readable and I had no trouble discerning sentences, especially with how they were broken up into paragraphs.
"i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload"
Just want to make sure I understand this. They made a hello-world app and submitted it to todesktop with a postinstall script that opened a reverse shell on the todesktop build machine? Maybe I missed it, but that shouldn't be possible. The build machine shouldn't have open outbound internet access, right?? Didn't see that explained clearly, but maybe I'm missing something or misunderstanding.
In what world do you have a machine which downloads source code to build it, but doesn't have outbound internet access so it can't download source code or build dependencies?
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
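For anyone unfamiliar with the mechanism: npm runs lifecycle scripts like postinstall automatically during install, so whatever project the build service pulls in gets arbitrary code execution on the build machine. A harmless sketch of the idea (the echo stands in for whatever an attacker would actually run):

    {
      "name": "hello-world-app",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "echo running as $(whoami) on $(hostname); env | cut -d= -f1"
      }
    }

That's all it takes; the reverse shell in the article is just a fancier command in that same slot.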
It's common, doesn't mean it's secure.
A lot of Linux distros separate the download step (outbound access allowed, to fetch dependencies) from the build step (no outside access) in their packaging.
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
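The partial mitigation, assuming an npm-based pipeline, is to disable lifecycle scripts entirely and opt back in only for the handful of packages that genuinely need them:

    # .npmrc on the build machine (or in the repo)
    ignore-scripts=true

    # or per invocation
    npm ci --ignore-scripts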
Even if your builders are downloading dependencies on the fly, you can and should force that through an artifact repository (e.g. artifactory) you control. They shouldn't need arbitrary outbound Internet access. The builder needs a token injected with read-only pull permissions for a write-through cache and push permissions to the path it is currently building for. The only thing it needs to talk to is the artifactory instance.
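In npm terms that's usually just a registry override on the builder; the hostname and token variable here are placeholders:

    # .npmrc injected into the build environment
    registry=https://artifacts.example.internal/api/npm/npm-virtual/
    //artifacts.example.internal/api/npm/npm-virtual/:_authToken=${NPM_TOKEN}

Combined with egress rules that only allow traffic to that host, the builder can still resolve dependencies without having arbitrary outbound access.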
If you don't network isolate your build tooling then how do you have any confidence that your inputs are what you believe them to be? I run my build tools in a network namespace with no connection to the outside world. The dependencies are whatever I explicitly checked into the repo or otherwise placed within the directory tree.
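On Linux this doesn't require anything exotic; util-linux's unshare is enough to run the build step in a fresh network namespace with no usable interfaces (a sketch, assuming dependencies were fetched or vendored beforehand):

    # fetch step: network allowed
    npm ci

    # build step: new user + network namespace, nothing routable
    unshare --map-root-user --net npm run build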
You don't have any confidence beyond what lockfiles give you (which is to say the npm postinstall scripts could be very impure, non-hermetic, and output random strings). But if you require users to vendor all their dependencies, fully isolate all network traffic during build, be perfectly pure and reproducible and hermetic, presumably use nix/bazel/etc... well, you won't have any users.
If you want a perfectly secure system with 0 users, it's pretty easy to build that.
Most banks and larger enterprises do exactly this. Devs don't get to go out and pick random libraries without a code review, and only then is a library placed in a local repository.
There is just far too much insecure and typosquatted malware out there to pull straight off the internet.
I'm not suggesting that a commercial service should require this. You asked "In what world do you have ..." and I'm pointing out that it's actually a fairly common practice. Particularly in any security conscious environment.
Anyone not doing it is cutting corners to save time, which to be clear isn't always a bad thing. There's nothing wrong if my small personal website doesn't have a network isolated fully reproducible build. On the other hand, any widely distributed binaries definitely should.
For example, I fully expect that my bank uses network isolated builds for their website. They are an absolutely massive target after all.
There are plenty of worlds that take security more seriously and practice defense in depth. Your response could use a little less hubris and a more genuinely inquisitive tone. Looks like others have already chimed in here, but to respond to your questions (which read as sarcasm):
- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine that is on an isolated network for code execution / build which then returns the built package and logs to the network facing machine for consumption.
Now, sure, if your build machine still exposes everything on it to the user-supplied code (instead of sandboxing the actual npm build/make/etc. command), malicious code could zip up the whole filesystem, environment variables, etc. and exfiltrate them through your built app - in this case, snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
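Even without redesigning the whole pipeline, running just the untrusted build step with networking disabled and no secrets mounted removes most of the exfiltration paths. A sketch (image and paths are placeholders; dependencies are assumed to have been fetched in an earlier step):

    docker run --rm --network=none \
      -v "$PWD":/src -w /src \
      node:20 npm run build

Signing and uploading then happen in a separate step that never executes user-controlled code.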
You have to meet your users where they are. Your users are not using nix and bazel, they're using npm and typescript.
If your users are using bazel, it's easy to separate "download" from "build", but if you're meeting your users over here where cows aren't spherical, you can't take security that seriously.
The simple solution would be to check your node_modules folder into source control. Then your build machine wouldn't need to download anything from anywhere except your repository.
Isn't it really common for build machines to have outbound internet access? Millions of developers use GitHub Actions for building artifacts and the public runners definitely have outbound internet access
Indeed, you can punch out from an Actions runner. Such a thing is probably against GitHub's ToS, but I've heard from my third cousin twice removed that his friend once ssh'ed out from an action to a bastion host, then used port forwarding to get herself a shell on the runner in order to debug a failing build.
So this friend escaped from the ephemeral container VM into the build host which happened to have a private SSH on it that allowed it to connect to a bastion host to... go back to the build host and debug a failed build that should be self-contained inside the container VM which they already had access in the first place by the means of, you know, running a build on it? Interesting.
A few decades ago, it was also really common to smoke. Common != good, github actions isn't a true build tool, it's an arbitrary code runtime platform with a few triggers tied to your github.
It is, and regardless of a few other commenters saying or hinting that it isn't... it is. An air-gapped build machine wouldn't work for most software built today.
Strange. How do things like Nix work then? The nix builders are network isolated. Most (all?) Gentoo packages can also be built without network access. That seems like it should cover a decent proportion of modern software.
Instances where an air gapped build machine doesn't work are examples of developer laziness, not bothering to properly document dependencies.
Yeah, too many people think it's a great idea to raw-dog their CI/CD on the open internet and later get newspaper articles written about the data leak.
The number of malicious packages is high enough, then you have typosquatted packages, and packages that get compromised at a later date. Being isolated from the net, with proper monitoring, gives a huge heads-up when your build system suddenly tries to contact some random site/IP.
> i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload
From ToDesktop incident report,
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
I'm curious to know what trial and error it took to get their machine to spit out the build, or if it was done in one shot.
Yeah, it is their fault. I don't download "todesktop" (to-exploit), I download Cursor. Don't give 3rd parties push access to all your clients, that's crazy. How can this crappy startup's build server sign a build for you? That's insane.
it blows me away that this is even a product. it's like a half day of dev time, and they don’t appear to have over-engineered it or even done basic things given the exploit here.
Software developers don't actually write software anymore, they glue together VC-funded security nightmares every 1-3 years, before moving on to the next thing. This goes on and on until society collapses under its own weight.
I’d like to see some thoughts on where we go from here. Is there a way we can keep end users protected even despite potential compromise of services like ToDesktop?
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
We have reviewed logs and inspected app bundles. No malicious usage was detected. There were no malicious builds or releases of applications from the ToDesktop platform.
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like?
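There's no published list of known-good hashes that I'm aware of (assumption), but on macOS you can at least confirm the bundle you're running is intact and signed by the expected developer:

    codesign --verify --deep --strict --verbose=2 /Applications/Cursor.app
    spctl --assess --type exec --verbose /Applications/Cursor.app
    shasum -a 256 /Applications/Cursor.app/Contents/MacOS/Cursor

The caveat is that this only proves the signature is valid; if the signing pipeline itself was compromised, as was possible here, a malicious build would still pass these checks.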
"the build container now has a privileged sidecar that does all of the signing, uploading and everything else instead of the main container with user code having that logic."
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
> [please don't] make it seem like it's their fault, it's not. it's todesktop's fault if anything
What?! This is not some kind of joke. This could _already_ literally kill people, steal money, and ruin lives.
For any app owner/author, it isn't even an option to avoid taking responsibility for the decisions that determine the security and safety of users.
It's as simple as this: no safety record for the 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt you from liability, and it should not.
Is this kindergarten, with the "it's not me, it's them" game? It does not matter who failed: money could already have been stolen from random people (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and perhaps already killed in some dictatorship or conflict zone.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate "oh, they are innocent" on behalf of those who just paid you some money - because they are not. Be a grown-up, please, and let's make this world better together.
The problem is that this entire sclerotic industry is so allergic to accountability that, if you want people to start taking it, you probably have to fire 90% of the workforce. If it were up to me, the developers responsible for this would never write software "professionally" again.
I can't post things like "what a bunch of clowns" due to hacker news guidelines so let me go by another more productive route.
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching the error on stackoverflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make it their business model or employability advantage to take free code off the internet, add pumpkin spice, and charge cash for it, they cross the line from pissing off passionate hackers by defiling our craft to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is as is and we hold harmless and this and that. Buddy if you download an ai tool to write an ai tool to write an ai tool and you decided to slap a password in there, you are playing with big guns, if it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. Because we need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program, and acted on it.
That's worse than a vulnerability, there is no shared responsibility, at least with a vuln, you can argue that it wasn't all your fault, someone else actively caused harm. Now are we to believe the greater risk of installing 19k dependencies and programming ai with ai is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantage; it shouldn't be legal that they are competing with us. A vscode fork got how much in VC funding?
My brothers, let's take up arms and defend. And defend quality software, I say. Fear not writing code, fear not writing raw HTML, fear not, for they don't feel fear, so why should you?
Ironically, it actually helped me stay focused on the article. Kind of like a fidget toy. When part of my brain would get bored, I could just move the cat and satisfy that part of my brain while I keep reading.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
The JavaScript world has a culture of lots of small dependencies that end up becoming a huge tree no one could reasonably vendor or audit changes for. Worse, these small dependencies churn much faster than in other languages.
With that culture, supply-chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
With the number of dependencies and dependency trees going multiple levels deep? Third-party risk is the largely unaddressed elephant in the room that companies don't care about.
- Paid operating system (RHEL) with a team of paid developers and maintainers verifying builds and dependencies.
- No third-party dependencies. Only what the core language provides.
It's not that great of a sacrifice. Like $20/mo for the OS, and like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
I'm shocked at how insecure most software is these days. Probably 90% of software built by startups has a critical vulnerability. It seems to keep getting worse year on year. Before, you used to have to have deep systems knowledge to trigger buffer overflows. It was more difficult to find exploits. Nowadays, you just need basic understanding of some common tools, protocols and languages like Firebase, GraphQL, HTTP, JavaScript. Modern software is needlessly complicated and this opens up a lot of opportunities.
> security incidents happen all the time, its natural. what matters is the company's response, and todesktop's response has been awesome, they were very nice to work with.
My goodness. So much third-party risk upon risk and lots of external services opening up this massive attack surface and introducing this RCE vulnerability.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one; the other day Claude CLI got a full source-code leak via the same method, its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
> But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness which is global to computing right now: we have so many ways you can deploy credentials, basically 0 common APIs which aren't globals (files or API keys), and even fewer security tools which acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user - where all the really valuable data already is).
Basically I'm not convinced our security model has ever truly evolved beyond the 1970s, where the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
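For what it's worth, the sourcemap half of this is optional: most bundlers can keep maps for internal debugging without shipping them. A webpack sketch (assuming webpack; other bundlers have equivalent switches):

    // webpack.config.js
    module.exports = {
      mode: "production",
      // "hidden-source-map" still emits .map files for your own
      // symbolication, but omits the sourceMappingURL comment;
      // use devtool: false to emit no maps at all
      devtool: "hidden-source-map",
    };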
Blaming Js/Ts is ridiculous. All those same problems exist in all environments. Js/Ts is the biggest so it gets the most attention but if you think it's different in any other environment you're fooling yourself.
No, the absolute worst developers I've ever met are JS/TS developers. The entire ecosystem is a superfund site, courtesy of get-rich-quick bootcamps and the rent-seeking SaaS economy. Some tech bro spent three months teaching your entire company nothing but React; how good did you think your software was going to be?
it’s a blog. people regularly use their personal sites to write in a tone and format that they are fond of. i only normally feel like i see this style from people who were on the internet in the 90s. i’d imagine we would see it even more if phones and auto correct didn’t enforce a specific style. imagine being a slave to the shift key. it can’t even fight back! i’m more upset the urls aren’t actually clickable links.
Finding an RCE for every computer running cursor is cool, and typing in all lowercase isn’t that cool. Finding an RCE on millions of computers has much much higher thermal mass than typing quirks, so the blog post makes typing in all lowercase cool.
As an Electron maintainer, I'll re-iterate a warning I've told many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-sign certificate is close to the top of my personal nightmares.
Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often argue against too much abstraction and long dependency chains in those processes.
If you're an Electron developer (like the apps mentioned), I recommend:
* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (A minimal config sketch follows after this list.)
* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.
* You probably want to rotate your certificates if you ever gave anyone else access.
* Lastly, you should probably be the only one with the keys to your update server.
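A minimal sketch of what that looks like in a Forge config - the environment variable names are placeholders, and the windowsSign options are simply forwarded to @electron/windows-sign, so check its docs for the exact shape:

    // forge.config.js
    module.exports = {
      packagerConfig: {
        // macOS signing + notarization via @electron/osx-sign and @electron/notarize
        osxSign: {},
        osxNotarize: {
          appleId: process.env.APPLE_ID,
          appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
          teamId: process.env.APPLE_TEAM_ID,
        },
        // Windows signing; options are passed through to @electron/windows-sign
        windowsSign: {
          signToolPath: process.env.SIGNTOOL_PATH,
        },
      },
    };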
How about we don't build an auto-updater? Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible. Touching files on a user's system should be treated as a rare special occurrence. If a server is involved with the app, build a stable interface and think long and hard about every change. Meticulously version and maintain everything. If a server is involved, it is completely unacceptable for a server-side change to break an existing user's local application unless it is impossible to avoid - it should be seen as an absolute last resort with an apology to affected customers (agree with OP on this one).
It is your duty to make sure _all_ of your users are able to continue using the same software they installed in exactly the same way for the reasonable lifetime of their contract, the package, or underlying system (and that lifetime is measured in years/decades, with the goal of forever where possible. Not months).
You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.
Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.
We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).
Sounds like you come from the B2B, consultancyware, or 6+ figure/year license world.
For the vast realm of <$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.
> Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible.
That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.
Have been there, done that.
The answer is a support window. If they are in bounds and have active maintenance contracts, support them.
If not, give them an option to get on support, or wish them luck.
Then the other answer is to really think releases through.
None of it is cheap. But it can be managed.
You're allowed to have a support matrix. You can refuse to support versions that are too old, but you can also just... let people keep using programs on their own computers.
Yep.
And anyone who does will find a percentage of users figure it out and then just get back to work.
I do agree with you, but I think that unfortunately you are wrong about whose job updates are. You have an idealistic vision that I share, but well, it remains idealistic.
Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn't a proprietary store with shitty rules.
For sure the rules are broken on desktop OSs, but in the meantime, you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I'd say in the end it depends on whether you think it's important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.
This is actually precisely how package management works in Linux today... you release new versions, package maintainers package and release them, while ensuring they actually work. This is a solved problem; it's just that nobody writing JavaScript is old enough to realize it's an option.
Question.
I've noticed a lot of websites import scripts from other sites, instead of hosting them locally.
<script src="scriptscdn.com/libv1.3">
I almost never see a hash in there. Is this as dangerous as it looks? Why don't people just use a hash?
1. Yes
2. Because that requires you to know how to find the hash and add it.
Truthfully, the burden should be on the third party that's serving the script (where did you copy that HTML in the first place?), but they aren't incentivized to have other sites use a hash.
Well, to be honest, the browsers could super easily solve that. In dev mode, just issue a warning "loaded script that has hash X but isn't statically defined. This is a huge security risk. Read more here" and that's it. Then you can just add the script, run the site, check the logs and add the hash, done.
You can define a CSP header to only exec 3rd Party scripts with known hashes
But that doesn't make it easy to integrate a new script from an author who doesn't provide the hash already.
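You can generate the hash yourself in one line, though - the author doesn't have to publish it (URL reused from the question above):

    curl -s https://scriptscdn.com/libv1.3 \
      | openssl dgst -sha384 -binary | openssl base64 -A

    <script src="https://scriptscdn.com/libv1.3"
            integrity="sha384-PASTE_THE_OUTPUT_HERE"
            crossorigin="anonymous"></script>

The catch is that this only works if the CDN serves an immutable, versioned file; if the third party rotates the content under the same URL, the hash check will (correctly) break the page.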
Yes it is. Hashes must absolutely be used in that case.
It should just not be done at all. But the main browser vendor loves tracking so they won't forbid this.
Maybe, but just from a security point of view it's totally fine.
Getting tracked is less secure than not getting tracked.
Getting hacked is less secure than getting tracked.
Hi. I'm an Electron app developer. I use electron-builder paired with AWS S3 for auto-update.
I have always put Windows signing on hold due to the cost of a commercial certificate.
Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?
Azure Trusted Signing is one of the best things Microsoft has done for app developers in the last year; I'm really happy with it. It's $9.99/month and open both to companies and individuals who can verify their identity (it used to be companies only). You really just call signtool.exe with a custom dll.
I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign
Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
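To answer the CI question concretely: yes - once the Trusted Signing account exists, signing is a plain signtool call using Microsoft's dlib, roughly like the sketch below. Account names, endpoint, and paths are placeholders; check the current Trusted Signing docs for the exact client package and dlib location.

    signtool sign /v /fd SHA256 ^
      /tr http://timestamp.acs.microsoft.com /td SHA256 ^
      /dlib "Azure.CodeSigning.Dlib.dll" /dmdf "metadata.json" ^
      MyApp.exe

where metadata.json points at your account and certificate profile, and authentication comes from the usual Azure credential environment variables:

    {
      "Endpoint": "https://eus.codesigning.azure.net",
      "CodeSigningAccountName": "my-signing-account",
      "CertificateProfileName": "my-profile"
    }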
The big limitation with Azure Trusted Signing is that your organization needs to be at least 3 years old. Seems to be a weird case where developers that could benefit from this solution are pushed towards doing something else, with no big reason to switch back later.
That limitation should go away when Trusted Signing graduates from preview to GA. The current limitation is because the CA rules say you must perform identity validation of the requester for orgs younger than 3 years old, which Microsoft isn't set up for yet.
Hi. This is very helpful. Thanks for sharing!
> For Windows signing, use Azure Trusted Signing
I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.
I've understood it as: an EV code signing cert on Windows is required for drivers, but it somehow also gives you better SmartScreen reputation, making it useful even for user-space apps in enterprisey settings?
Not sure if this is FUD spread by the EV CAs or not, though?
You are correct that an EV (Extended Validation) code signing certificate is required for signing Windows kernel-mode drivers. (learn.microsoft.com) For regular user-space applications, an EV certificate helps improve reputation with Microsoft SmartScreen, reducing security warnings when users download and install the software. (ssl.com)
However, some developers have reported that using an EV certificate does not always immediately eliminate SmartScreen warnings. (stackoverflow.com) So, while an EV certificate can accelerate reputation building, it does not guarantee the instant removal of SmartScreen warnings.
A question that I hope you can help me with. I'm working on an Electron app that works offline. I plan to sell it cheap, like a $5 one-time payment.
It won't have licenses or anything, so if somebody wants to distribute it outside my website, they will be able to do it.
If I just want to point to an exe file link in S3 without auto-updates, should just compiling and uploading be enough?
And yet, tons of developers install github apps that ask for full permissions to control all repos and can therefore do the same things to every dev using those services.
github should be ashamed this possibility even exists, and doubly ashamed that their permission system and UX is so poorly conceived that it leads apps to ask for all the permissions.
IMO, github should spend significant effort so that the default is to present the user with a list of repos they want some github integration to have permissions for, and then, for each repo, the specific permissions needed. It should be designed so that minimal permissions are encouraged.
As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"
Why spend that effort when any code you run on your machine (such as dependency post-install scripts, or the dependencies themselves!) can just run `gh auth token` and grab a token for all the code you push up.
By design, the gh cli wants write access to everything on github you can access.
I personally haven't worked with many of the github apps that you seem to refer to, but the few that I've used are limited to accessing only the specific repositories that I grant, and within those repositories their access is scoped as well. I figured this is all stuff that can be controlled on GitHub's side. Am I mistaken?
I will note that at least for our GitHub enterprise setup permissions are all granular, tokens are managed by the org and require an approval process.
I’m not sure how much of this is “standard” for an org though.
Yeah, turns out "modern" software development has more holes than Swiss cheese. What else is new?
You know, there's this nice little thing called AppStore on the mac, and it can auto update
All apps on the Mac App Store have to be sandboxed, which is great for the end user, but a pain in the neck for the run-of-the-mill Electron app dev.
Dave here, founder of ToDesktop. I've shared a write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.
> cannot happen again.
Hubris. Does not inspire confidence.
> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.
After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?
Fair point. Perhaps better phrased as "to ensure this scenario can't recur". I'll edit my post.
Yes, we re-architected our build container as part of remediation efforts, it was quite significant.
That was solid. Nice way to handle a direct personal judgement!
Not your first rodeo.
Another way is to avoid absolutes and ultimatums as aggressively as one should avoid personal judgements.
Better phrased as: "we did our best to prevent this scenario from happening again."
Fact is, it just could happen! Nobody likes that reality, and overall, when we think about all this stuff, networked computing is a sad state of affairs...
Best to just be 100 percent real about it all, if you ask me.
At the very least people won't nail you on little things, which leaves you something you may trade on when a big thing happens.
And yeah, this is unsolicited and worth exactly what you paid. Was just sharing where I ended up on these things in case it helps
Based on the claims on the blog, it feels reasonable to say that this "cannot" occur again.
Based on which claim? That 12 months from now they might accidentally discover a new bug just as serious?
This is the wrong response, because it means that the learning would be lost. The security community didn't want that to happen when one of the CAs got a vulnerability, and we do not want it to happen to other companies. We want companies to succeed and get better; shaming doesn't help towards that. Learning the right lessons does, and resigning means that you are learning the wrong ones.
I don't think the lesson is lost. The opposite.
If you get a slap on the wrist, do you learn? No, you play it down.
However, if a dev who gets caught doing something bad is forced to resign, then all the rest of the devs doing the same thing will shape up.
> If you get a slap on the wrist, do you learn? No, you play it down.
Except Dave didn't play it down. He's literally taking responsibility for a situation that could have resulted in significantly worse consequences.
Instead of saying, "nothing bad happened, let's move on," he, and by extension his company, have worked to remedy the issue, done a write-up on it, disclosed the issue and its impact to users, and publicly apologized and held themselves accountable. That right there is textbook engineering ethics 101 being followed.
I suggest reading one or two of Sidney Dekker's books, which are a pretty comprehensive takedown of this idea. If an organization punishes mistakes, mistakes get hidden and covered up, and become no less frequent.
Is it Dekker?
https://www.goodreads.com/book/show/578243.Field_Guide_to_Hu...
Sure is, autocorrect got me.
> However if a dev who gets caught doing a bad is forced to resign.
then nearly everyone involved has an incentive to cover up the problem or to shift blame
Can you back up your theory with examples of all the mistakes you have committed and the forced resignations you have taken?
Under what theory of psychology are you operating? This is along the same lines as the theory that punishment is an effective deterrent of crime, which we know isn’t true from experience.
While I think that resigning is stupid here, asserting that "punishment doesn't deter crime" is just absurd. It does!
The overwhelming majority of evidence suggests otherwise.
https://www.psychologytoday.com/us/blog/crime-and-punishment...
https://www.unsw.edu.au/newsroom/news/2020/07/do-harsher-pun...
https://www.ojp.gov/pdffiles1/nij/247350.pdf
https://www.helsinki.fi/en/news/economics/do-harsh-punishmen...
> While I think that resigning is stupid here, asserting that "punishment doesn't deter crime" is just absurd. It does!
Punishment does not deter crime. The threat of punishment does to a degree.
IOW, most people will be unaware of a person being sent to prison for years until and unless they have committed a similar offense. But everyone is aware of repercussions possible should they violate known criminal laws.
this is probably one of the worst takes i've ever read on here
Honestly I don't get why people are hating this response so much.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen", maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection, they aren't saying "vulnerabilities won't happen", but "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd take from this is to use less "enterprise" language in security topics (or people will eat you alive in the comments).
Thank you.
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
> We have reviewed logs and inspected app bundles.
Were the logs independent of firebase? (Could someone exploiting this vulnerability have cleaned up after themselves in the logs?)
Annual pen tests are great, but what are you doing to actually improve the engineering design process that failed to identify this gap? How can you possibly claim to be confident this won't happen again, unless you myopically focus on this single bug, which is itself a symptom of a larger design problem?
These kinds of "never happen again" statements never age well, and make no sense to even put forward.
A more pragmatic response might look like: something similar can and probably will happen again, just like any other bugs. Here are the engineering standards we use ..., here is how they compare to our peers our size ..., here are our goals with it ..., here is how we know when to improve it...
How can -let's say- Cursor users be sure they were not compromised?
> No malicious usage was detected
Curious to hear about methods used if OK to share, something like STRIDE maybe?
from todesktop's report:
> Completed a review of the logs. Confirming all identified activity was from the researcher (verified by IP Address and user agent).
With privileged access, the attackers can tamper with the evidence for repudiation, so although I'd say "nothing in the logs" is acceptable, not everyone may. These two attack vectors are part of the STRIDE threat modeling approach.
They don't elaborate on the logging details, but certainly most good systems don't allow log tampering, even for admins.
How confident are you that their log system is resilient, given the state of the rest of their software?
What horrible form not contacting affected customers right away after performing the patch.
Who knows what else was vulnerable in your infrastructure when you leaked .encrypted like that.
It should have been on your customers to decide if they still wanted to use your services.
This should be considered criminal negligence.
how much of a bounty was paid to Eva for this finding?
> they were nice enough to compensate me for my efforts and were very nice in general.
They were compensated, but the post doesn't elaborate on the amount.
Sounds like it was handled better than the author's last article, where the Arc browser company initially didn't offer any bounty for a similar RCE, then awarded a paltry $2k after getting roasted, and finally bumped it up to $20k after getting roasted even more.
They later updated their post, at the bottom:
> for those wondering, in total i got 5k for this vuln, which i dont blame todesktop for because theyre a really small company
$50,000 on top of the first $5,000 :)
Woooowwww!
See latest line: "update: cursor (one of the affected customers) is giving me 50k USD for my efforts."
> for those wondering, in total i got 5k for this vuln
no offense man but this is totally inexcusable and there is zero chance i am ever touching anything made by y'all, ever
Good call. I'd seriously considering firing the developers responsible, too.
That's what a bad manager would do.
The employee made a mistake and you just paid for them to learn about it. Why would you fire someone you just educated?
How's that working out for the industry?
Don't worry man, it's way more embarrassing for the people that downloaded your dep or any upstream tool.
If they didn't pay you a cent, you have no liability here.
This is not how the law works anywhere, thankfully.
Well, for one, it was a gift, so there is no valid contract, right? There are no direct damages because nothing was paid and there is nothing to refund. With regard to indirect damages, there's bound to be a disclaimer or two, at least at the app layer.
IANAL, not legal advice
I’d suppose there is an ALL CAPS NO WARRANTY clause as well, as is customary with freeware (and FOSS). ToDesktop is a paid product, though.
This is the second big attack found by this individual in what... 6 months? The previous exploit (which was in Arc browser), also leveraged a poorly configured firebase db: https://kibty.town/blog/arc/
So this is to say: at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with Firebase, but to me this just screams that something about the configuration process is being improperly communicated or is simply too convoluted as a whole.
Firebase lets anyone get started in 30 seconds.
Details like proper usage and security are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.
I use firebase essentially for hobbyist projects for me and my friends.
If I had to guess, these issues come about because developers are rushing to market. Not Google's fault ... What works for a prototype isn't production ready.
> Google isn't to blame if you ship a paid product without running a security audit.
Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.
What if I need to hack together a POC for 3 people to look at?
It's my responsibility to make sure when we scale from 3 users to 30k users we take security seriously.
As my old auto shop teacher used to say, if you try to idiot proof something they'll build a better idiot.
Even if Google warns you in big bold print "YOU ARE DOING SOMETHING INSECURE", someone out there is going to click deploy anyway. You're arguing Google disable the deploy button, which I simply disagree with.
I think that's throwing the baby out with the bathwater; sane defaults are still an important thing to think about when developing a product. And for something as important as a database, which usually requires authentication or stores personal information, let your tutorials focus on those pain points instead of the promise of a database-driven app with only client-side code. It's awesome, but I think it deserves the notoriety for letting you shoot yourself in the foot and land on the front page of HN. The author also found a similar exploit via Firebase in the Arc browser [0]
I have a similar qualm with GraphQL.
[0] https://kibty.town/blog/arc/
Any purported expert who uses software without considering its security is simply negligent. I'm not sure why people are trying to spin this to avoid placing the blame on the negligent programmer(s).
Weak programmers do this to defend the group making crap software. I agree that defaults should be secure, and maybe there should be a request limit on admin or full-access tokens, but then people will just create another token with full access and use it.
Then you should have to click a big red button labelled "Enable insecure mode".
Defaults should be secure. Kind of blows my mind people still don't get this.
Oh they are. Just like mongo and others. It’s a deliberate decision to remove basic security features in order to get traction.
Remove as many hurdles as possible to increase adoption.
Should we outlaw C because it lets you dereference null pointers, too?
Erm yes! Even the White House has said that.
The only reason we didn't for so long was because we didn't have a viable alternative. Now we do, we should absolutely stop writing C.
I don't think Firebase is really at fault here—the major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with firebase—it would have been just as impactful if the container building the code uploaded it to S3 from the buildbot.
Agreed. I recently stumbled upon the fact that even Hacker News is using Firebase for exposing an API for articles. Caution should be taken when writing server-side software in general.
Turns out, "knowing things" is a prerequisite for "doing things properly".
The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.
I'm sorry, but when will we hold the writers of crappy code responsible for their own bad decisions? Let's start there.
I always find it unbelievable how we NEVER hold developers accountable. Any "actual" engineer would be (at least the one signing off, but in software, developers never sign off on anything, and maybe that's the problem).
I don't know but we're in a thread about Cursor... I don't think anyone is writing significantly better code using Cursor.
> update: cursor (one of the affected customers) is giving me 50k USD for my efforts.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
I'm a huge fan of the writing style. it's like hacking gonzo, but with literally 0 fluff. amazing work and an absolute delight to read from beginning to end
Capital letters aren't hard to use and help to make sentences stand out from each other properly. The overall style is good but the lowercase thing is obnoxious.
Obnoxious is a bit harsh - I liked the feeling it gave to the article, found it very readable and I had no trouble discerning sentences, especially with how they were broken up into paragraphs.
"i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload"
Just want to make sure I understand this. They made a hello world app and submitted it to todesktop with a post install script that opened a reverse shell on the todesktop build machine? Maybe I missed it but that shouldn't be possible. Build machine shouldn't have outbound open internet access right?? Didn't see that explained clearly but maybe I'm missing something or misunderstanding.
In what world do you have a machine which downloads source code to build it, but doesn't have outbound internet access so it can't download source code or build dependencies?
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
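For anyone unfamiliar with the mechanism being discussed: npm runs a package's postinstall script with the same user, filesystem, and environment as the build itself, so any secret sitting in the builder's environment is readable by untrusted package code. A harmless sketch of why that combination is the real problem (the file name and wiring are hypothetical):

```typescript
// postinstall.js (hypothetical), wired up via `"scripts": { "postinstall": "node postinstall.js" }`.
// It runs with the same privileges and environment as the build process itself,
// so anything the builder can read, untrusted package code can read too.
const suspicious = Object.keys(process.env).filter((name) =>
  /TOKEN|KEY|SECRET|CREDENTIAL/i.test(name),
);

for (const name of suspicious) {
  // A hostile script would POST these somewhere; printing is enough to make the point.
  console.log(`${name} is visible to untrusted package code`);
}
```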
It's common, but that doesn't mean it's secure. A lot of Linux distros, in their packaging, separate download (which allows outbound access to fetch dependencies) from build (no outside access).
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
Even if your builders are downloading dependencies on the fly, you can and should force that through an artifact repository (e.g. artifactory) you control. They shouldn't need arbitrary outbound Internet access. The builder needs a token injected with read-only pull permissions for a write-through cache and push permissions to the path it is currently building for. The only thing it needs to talk to is the artifactory instance.
In a world with an internal proxy/mirror for dependencies and no internet access allowed by build systems.
Which is not the world we live in.
Speak for yourself.
s/we/I/
If you don't network isolate your build tooling then how do you have any confidence that your inputs are what you believe them to be? I run my build tools in a network namespace with no connection to the outside world. The dependencies are whatever I explicitly checked into the repo or otherwise placed within the directory tree.
You don't have any confidence beyond what lockfiles give you (which is to say the npm postinstall scripts could be very impure, non-hermetic, and output random strings). But if you require users to vendor all their dependencies, fully isolate all network traffic during build, be perfectly pure and reproducible and hermetic, presumably use nix/bazel/etc... well, you won't have any users.
If you want a perfectly secure system with 0 users, it's pretty easy to build that.
Most banks and larger enterprises do exactly this. Devs don't get to go out and pick random libraries without a code review, after which the library is placed in a local repository.
There are just far too many insecure packages and typosquatted malware packages to pull things off the internet raw.
Hell, even just an unrestricted internal proxy at least gives you visibility after the fact.
> But if you require users
I'm not suggesting that a commercial service should require this. You asked "In what world do you have ..." and I'm pointing out that it's actually a fairly common practice. Particularly in any security conscious environment.
Anyone not doing it is cutting corners to save time, which to be clear isn't always a bad thing. There's nothing wrong if my small personal website doesn't have a network isolated fully reproducible build. On the other hand, any widely distributed binaries definitely should.
For example, I fully expect that my bank uses network isolated builds for their website. They are an absolutely massive target after all.
There are plenty of worlds that take security more seriously and practice defense in depth. Your response could use a little less hubris and a more genuinely inquisitive tone. Looks like others have already chimed in here, but to respond to your questions (which read as sarcasm):
- You can have a submission process that accepts a package or downloads dependencies, then passes it to another machine on an isolated network for code execution and build, which returns the built package and logs to the network-facing machine for consumption.
Now, sure, if your build machine still exposes everything on it to the user-supplied code (instead of sandboxing the actual npm build/make/etc. command), that code could zip up the whole filesystem, env vars, etc. and exfiltrate them through your built app, in this case snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
You have to meet your users where they are. Your users are not using nix and bazel, they're using npm and typescript.
If your users are using bazel, it's easy to separate "download" from "build", but if you're meeting your users over here where cows aren't spherical, you can't take security that seriously.
Security doesn't help if all your users leave.
The simple solution would be to check your node_modules folder into source control. Then your build machine wouldn't need to download anything from anywhere except your repository.
It's called air-gapping, and lots of adults do it.
you use a language where you have all your deps local to the repo? ie go vendor?
you can always limit said network access to npm.
You can't since a large number of npm post-install scripts also make random arbitrary network calls.
This includes things like downloading pre-compiled binaries for the native architecture from random servers, or compiling native code on the fly.
npm is really cool.
npm is a nightmare.
Note that without a reverse shell you could still leak the secrets in the built artifact itself.
Isn't it really common for build machines to have outbound internet access? Millions of developers use GitHub Actions for building artifacts and the public runners definitely have outbound internet access
Indeed, you can punch out from an Actions runner. Such a thing is probably against GitHub's ToS, but I've heard from my third cousin twice removed that his friend once ssh'ed out from an action to a bastion host, then used port forwarding to get herself a shell on the runner in order to debug a failing build.
> probably against GitHub's ToS, but
Why would running code on a github action runner that's built to run code be against ToS?
If it was, I'm sure they'd ban the marketplace extensions that make it absolutely trivial to do this: https://github.com/marketplace/actions/debugging-with-ssh
So this friend escaped from the ephemeral container VM onto the build host, which happened to have a private SSH key on it that allowed connecting to a bastion host, in order to... go back to the build host and debug a failed build that should be self-contained inside the container VM, which they already had access to in the first place by means of, you know, running a build on it? Interesting.
A few decades ago, it was also really common to smoke. Common != good, github actions isn't a true build tool, it's an arbitrary code runtime platform with a few triggers tied to your github.
It is, and regardless of a few other commenters saying or hinting that it isn't... it is. An air-gapped build machine wouldn't work for most software built today.
Strange. How do things like Nix work then? The nix builders are network isolated. Most (all?) Gentoo packages can also be built without network access. That seems like it should cover a decent proportion of modern software.
Instances where an air gapped build machine doesn't work are examples of developer laziness, not bothering to properly document dependencies.
Sounds like a problem with modern software build practices to me.
Yeah, too many people think it's a great idea to raw-dog their CI/CD on the net, and later get newspaper articles written about the data leak.
The number of malicious packages is high enough, and then you have typosquatted packages and packages that get compromised at a later date. Being isolated from the net, with proper monitoring, gives a huge heads-up when your build system suddenly tries to contact some random site/IP.
> i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload
From ToDesktop incident report,
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
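To make the "broader permissions than necessary" point concrete, here is a generic sketch of the scoped-token idea in Node terms (my own illustration, not Firebase's or ToDesktop's actual mechanism): a trusted sidecar holds the signing secret and mints short-lived tokens that authorize writes only to the path the current build is allowed to touch, so a token leaked from the untrusted container bounds the damage to that one app for a few minutes.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic capability-token sketch. The trusted sidecar holds SIGNING_SECRET;
// the untrusted build container only ever receives a token scoped to its own
// upload path, expiring shortly after the build.
const SIGNING_SECRET = process.env.SIGNING_SECRET ?? "dev-only-secret";

export function mintUploadToken(appId: string, ttlMs = 15 * 60 * 1000): string {
  const payload = JSON.stringify({ path: `builds/${appId}/`, exp: Date.now() + ttlMs });
  const sig = createHmac("sha256", SIGNING_SECRET).update(payload).digest("hex");
  return Buffer.from(payload).toString("base64url") + "." + sig;
}

export function checkUploadToken(token: string, requestedPath: string): boolean {
  const [body, sig] = token.split(".");
  if (!body || !sig) return false;
  const payload = Buffer.from(body, "base64url").toString("utf8");
  const expected = createHmac("sha256", SIGNING_SECRET).update(payload).digest("hex");
  if (sig.length !== expected.length) return false;
  if (!timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return false;
  const { path, exp } = JSON.parse(payload);
  // Reject expired tokens and any write outside the path this build was scoped to.
  return Date.now() < exp && requestedPath.startsWith(path);
}
```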
I'm curious to know what the trial and error here was to get their machine to spit out the build, or whether it worked in one shot.
" please do not harass these companies or make it seem like it's their fault, it's not. it's todesktop's fault if anything) "
I don't get it. Why would it be "todesktop's fault", when all the mentioned companies allowed it to push updates?
I had these kinds of discussions with naive developers giving _full access_ to GitHub orgs to various 3rd party apps -- that's never right!
Yeah, it is their fault. I don't download "todesktop" (to-exploit), I download Cursor. Don't give 3rd parties push access to all your clients, that's crazy. How can this crappy startup's build server sign a build for you? That's insane.
it blows me away that this is even a product. it's like a half day of dev time, and they don’t appear to have over-engineered it or even done basic things given the exploit here.
Software developers don't actually write software anymore, they glue together VC-funded security nightmares every 1-3 years, before moving on to the next thing. This goes on and on until society collapses under its own weight.
I’d like to see some thoughts on where we go from here. Is there a way we can keep end users protected even despite potential compromise of services like ToDesktop?
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
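One concrete direction: keep the release-signing key out of the build service entirely and have the installed app verify a detached signature against a public key pinned at build time, so a compromised bundler can still produce artifacts, but not artifacts the updater will accept. A hedged sketch in Node terms (the names and file layout are assumptions, not how ToDesktop or electron-updater actually work):

```typescript
import { createPublicKey, verify } from "node:crypto";
import { readFile } from "node:fs/promises";

// Sketch: the app ships with the vendor's Ed25519 public key, pinned at build
// time, and refuses any update whose detached signature does not verify.
export async function isUpdateTrusted(
  artifactPath: string,
  signaturePath: string,
  pinnedPublicKeyPem: string,
): Promise<boolean> {
  const artifact = await readFile(artifactPath);
  const signature = await readFile(signaturePath);
  // For Ed25519, Node's crypto.verify takes `null` as the digest algorithm.
  return verify(null, artifact, createPublicKey(pinnedPublicKeyPem), signature);
}
```

The hard part, of course, is key management: the private key has to stay with the vendor, ideally in an HSM, and never with whoever runs the build pipeline.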
Love the blog aesthetic, and the same goes to all your friends (linked at the bottom).
The lack of capitalization made it difficult for me to quickly read sentences. I had to be much more intentful when scanning the text.
> please do not harass these companies or make it seem like it's their fault, it's not
It also is; they are responsible for which tech pieces they pick in constructing their own puzzle.
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like?
From the ToDesktop write-up:
> "the build container now has a privileged sidecar that does all of the signing, uploading and everything else instead of the main container with user code having that logic."
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
> [please don't] make it seem like it's their fault, it's not. it's todesktop's fault if anything
What?! It's not some kind of joke. This could _already_ literally kill people, steal money, and ruin lives.
For any app owner or author, it isn't even an option to avoid taking responsibility for the decisions that affect the security and safety of users.
It's as simple as this: no safety record for the 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt you from liability, and it should not.
Is this a kindergarten, with the "it's not me, it's them" play? It does not matter who failed: the money could already have been stolen from random people (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and probably already killed in some dictatorship or conflict.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate (for those who just paid you some money) that "oh, they are innocent", because they are not. Be a grown-up, please, and let's make this world better together.
The problem is that this entire sclerotic industry is so allergic to accountability that, if you want people to start accepting it, you probably have to fire 90% of the workforce. If it were up to me, the developers responsible for this would never write software "professionally" again.
Bit breathless. How could this kill people?
I can't post things like "what a bunch of clowns" due to hacker news guidelines so let me go by another more productive route.
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching the error on stackoverflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make it their business model or employability advantage to take free code from the internet, add pumpkin spice, and charge cash for it, they cross the line from pissing off passionate hackers by defiling our craft to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is "as is" and we hold harmless and this and that. Buddy, if you download an AI tool to write an AI tool to write an AI tool and you decide to slap a password in there, you are playing with big guns. If it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. We need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program, and acted on it.
That's worse than a vulnerability: there is no shared responsibility. At least with a vuln you can argue that it wasn't all your fault, that someone else actively caused harm. Now, are we to believe the greater risk of installing 19k dependencies and programming AI with AI is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantage; it shouldn't be legal that they are competing with us. A VS Code fork got how much in VC funding?
My brothers, let's take up arms and defend. And defend quality software, I say. Fear not writing code, fear not writing raw HTML, fear not, for they don't feel fear, so why should you?
I will: what a bunch of clowns.
https://civboot.org
Join me my brother or sister
tbh if i had one wish, i would love to see how Five Eyes gets root-level access to every device. it seems like an insane amount of data.
The cat is cute but I'd rather not have it running in front of the text while I'm trying to read and use my cursor.
I had to go back and enable JavaScript. Wow, is the goal to direct my attention away from reading the text?
Ironically, it actually helped me stay focused on the article. Kind of like a fidget toy. When part of my brain would get bored, I could just move the cat and satisfy that part of my brain while I keep reading.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
I can't see the cat! I went back and it just isn't working for me. I'm sad, I like cats.
Here it is:
https://en.m.wikipedia.org/wiki/Neko_(software)
Ah, whimsical memories of running that on the beige boxen of my youth.
Also remember a similar thing with some Lemmings randomly falling and walking around on windows.
Played way too long having them pile up and yank the window from under them.
Then just… put the cursor in the corner? The blog isn’t interactive or anything. I think the cat is cute.
Cats tend to do that.
There are plenty of other websites that don't do that. Perhaps one of those would work better for you?
The JavaScript world has a culture of lots of small dependencies that end up becoming a huge tree no one could reasonably vendor or audit changes for. Worse, these small dependencies churn much faster than those of other languages.
With that culture supply chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
With the number of dependencies and dependency trees going multiple levels deep? Third-party risk is the largely unaddressed elephant in the room that companies don't care about.
I started to use:
- a paid operating system (RHEL), with a team of paid developers and maintainers verifying builds and dependencies
- zero dependencies: only what the core language provides
It's not that great of a sacrifice. Like $20/mo for the OS, and like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
As someone who already has trouble reading due to eye issues, the lack of capital letters made this infuriatingly difficult to read.
1. Build a rootkit into your product.
2. Release your product.
"range of hundreds of millions of people in tech environments, other hackers, programmers, executives, etc. making this exploit deadly if used."
Bit too hyperbolic or whatever... Otherwise thrilling read!
Can I have a rootkit into your machine then? Since we're being hyperbolic.
I guess what I'm surprised at here is that a popular(?) IDE would be delivered over a delivery platform like this (immature or not).
I would've expected IDE developers to "roll their own"
I'm shocked at how insecure most software is these days. Probably 90% of software built by startups has a critical vulnerability. It seems to keep getting worse year on year. Before, you used to have to have deep systems knowledge to trigger buffer overflows. It was more difficult to find exploits. Nowadays, you just need basic understanding of some common tools, protocols and languages like Firebase, GraphQL, HTTP, JavaScript. Modern software is needlessly complicated and this opens up a lot of opportunities.
This website loads extremely fast wow
Serving HTML is actually really fast if you don't bolt 17 layers of JavaScript on top of it first.
> security incidents happen all the time
Do they have to?
Isn't this notion making developers sloppy?
Yes.
Automatic update without some manual step by a user means that the devs have RCE on your machine.
I made Signal fix this, but most apps consider it working as intended. We learned nothing from Solarwinds.
> security incidents happen all the time, its natural. what matters is the company's response, and todesktop's response has been awesome, they were very nice to work with.
This was an excellent conclusion for the article.
My goodness. So much third-party risk upon risk and lots of external services opening up this massive attack surface and introducing this RCE vulnerability.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one, the other day Claude CLI got a full source code leak via the same method from its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
The issue here is not sourcemaps being available. The issue is admin credentials being shipped to clients for no reason.
> But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness that is global to computing right now: we have so many ways to deploy credentials, basically zero common APIs that aren't globals (files or API keys), and even fewer security tools that acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user, where all the really valuable data already is).
Basically, I'm not convinced our security model has ever truly evolved beyond the 1970s, when the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
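One cheap mitigation, whatever the ecosystem: scan the artifact you're about to ship for credential-shaped strings before it goes out. A crude sketch (not a substitute for real scanners like gitleaks or trufflehog; the patterns below are illustrative only):

```typescript
import { readFileSync } from "node:fs";

// Crude pre-publish guard: fail the build if the files about to ship contain
// anything credential-shaped. Patterns are illustrative, not exhaustive.
const SUSPICIOUS = [
  /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/, // PEM private keys
  /"private_key"\s*:/,                        // service-account JSON embedded in a bundle
  /AIza[0-9A-Za-z_-]{35}/,                    // Google API key shape
];

export function assertNoSecrets(files: string[]): void {
  for (const file of files) {
    const text = readFileSync(file, "utf8");
    for (const pattern of SUSPICIOUS) {
      if (pattern.test(text)) {
        throw new Error(`possible credential in ${file} (matched ${pattern})`);
      }
    }
  }
}
```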
Blaming Js/Ts is ridiculous. All those same problems exist in all environments. Js/Ts is the biggest so it gets the most attention but if you think it's different in any other environment you're fooling yourself.
No, the absolute worst developers I've ever met are JS/TS developers. The entire ecosystem is a superfund site, courtesy of get-rich-quick bootcamps and the rent-seeking SaaS economy. Some tech bro spent three months teaching your entire company nothing but React; how good did you think your software was going to be?
Ecosystem, not the lang itself.
It truly is a community issue, it's not a matter of the lang.
You will never live down fucking left-pad
[flagged]
Why does it use Neko the cursor chasing cat? Why the goth color scheme? These are stylistic choices, there is no explaining them.
Thankfully there is reader mode. That dumb cat is so obnoxious on mobile.
woah, the cat chases your taps on mobile!
it’s a blog. people regularly use their personal sites to write in a tone and format that they are fond of. i only normally feel like i see this style from people who were on the internet in the 90s. i’d imagine we would see it even more if phones and auto correct didn’t enforce a specific style. imagine being a slave to the shift key. it can’t even fight back! i’m more upset the urls aren’t actually clickable links.
Finding an RCE for every computer running cursor is cool, and typing in all lowercase isn’t that cool. Finding an RCE on millions of computers has much much higher thermal mass than typing quirks, so the blog post makes typing in all lowercase cool.
why do the stars shine? why does rain fall from the sky? using upper case is just a social convention - throw off your chains.
the cat chase cursor thing is great
its cool. not everything has to be typed in a "normal" way.
just the style of their blog
ToDesktop vulnerability: not surprised. Trust broken.
Question/idea: can't GitHub use LLMs to periodically scan the code for vulnerabilities like this and inform the repo owner?
They can even charge for it ;)
Problem: a tool built with LLMs for building LLMs with LLMs has a vuln
Solution: more LLMs
Snap out of it
So you're saying one more LLM?
Where is the paper wherein you've managed to solve the Halting Problem? I'd love to read it.