My favorite part of this was:
That kind of notation, called SCCS/RCS, is the equivalent of finding a rotary phone in a modern office. Nobody uses it in 2005 Windows kernel code unless their programming background goes back decades, to government and military computing environments
—
The astrophysics lab I worked at in 2006 was still using svn and had a bunch of Fortran with references to systems from the 70s and 80s. The code ran perfectly well thanks to modern optimizing compilers, and having moved from VAX to Linux in the 90s was a surprisingly seamless transition.
It reminds me of a conference talk I’ve referenced before, “do over or make do”, basically arguing that rewriting large amounts of mostly functioning code was not worth the effort if it could be taped together with modern tools.
Yeah, I used to be skeptical of the government provenance of things like Stuxnet (I am not any more, I'm fully sold, like everyone else), and notes like this were why. People used RCS well into the 2000s! RCS as a tool had virtues over SVN and CVS.
I do wonder if these breadcrumbs were also left intentionally. “Oh look, we are using old stuff, don’t be afraid!” Or for some other reason. It is a little surprising to pull off such a sophisticated attack and miss details you could find running ‘strings’ unless I’m missing something and this part was encrypted.
I think that in the time period we're talking about, RCS wasn't really even all that old. Like, RCS is old, sure, but it was also in common use especially by Unix systems people; it's what you might have reached for by default to version your dotfiles, for instance.
Yes, but even back then I was aware of the sections in executables (wasn’t this where it was found?), and any neckbeard from the 70s and 80s might be even more aware. That said, sure, it’s a very possible and understandable oversight, but I’m wary because of all the text in viruses and such serving as indicators. A pass over ‘strings’ seems like it would be obvious. Though, TIL: strings doesn’t necessarily scan the entire executable.
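For anyone curious about that last point: historically, GNU strings only scanned the initialized, loaded sections of recognized object files unless given `-a`/`--all` (newer binutils versions default to `-a`). A minimal Python sketch of the `-a` behavior, scanning the whole file for printable runs; the sample bytes, including the SCCS-style `@(#)` marker, are made up for illustration:

```python
# Minimal re-implementation of `strings -a`: scan the ENTIRE file for
# printable ASCII runs, rather than only the loaded data sections.
import re

def all_strings(data: bytes, min_len: int = 4):
    # Printable ASCII plus tab, at least `min_len` characters long.
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e\t]{%d,}" % min_len, data)]

# Hypothetical sample: header bytes, padding, and an SCCS what-string.
sample = b"\x00\x01MZ\x90\x00padding\x00@(#)lib/fast16.c 1.4\x00\xff"
print(all_strings(sample))  # ['padding', '@(#)lib/fast16.c 1.4']
```

Incidentally, the `@(#)` prefix in the made-up sample is the marker the SCCS `what` command searches binaries for, which is exactly the kind of embedded string the article is talking about.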
My favorite part of the paper is that the “attack” isn’t just exploiting a bug — it’s exploiting how different components interpret the same input. Modifying an executable as it’s loaded into memory is one example, but the deeper pattern is the mismatch.
What’s interesting about the malware in this post is that it goes one step further: instead of exploiting mismatches, it corrupts the computation itself — so every infected system agrees on the same wrong answer!
More broadly: any interpretive mismatch between components creates a failure surface. Sometimes it shows up as a bug, sometimes as an exploit primitive, sometimes as a testing blind spot. You see it everywhere — this paper, IDS vs OS, proxies vs backends, test vs prod, and now LLMs vs “guardrails.”
Fun HN moment for me: as I was about to post this, I noticed a reply from @tptacek himself. His 1998 paper with Newsham (IDS vs OS mismatches) was my first exposure to this idea — and in hindsight it nudged me toward infosec, the Atlanta scene, spam filtering (PG's bayesian stuff) and eventually YC.
https://users.ece.cmu.edu/~adrian/731-sp04/readings/Ptacek-N...
The paper starts with this Einstein quote "Not everything that is counted counts and not everything that counts can be counted", which seems quite apt for the malware analyzed here :)
>in 2006 was still using svn
Perhaps you meant cvs? Subversion was released in 2004 and git appeared in 2005.
We used cvs but switched to svn before or around 2006, though I could be mixing that up. We still had not switched to git by 2012, when I left.
The reference to the 70s and 80s code didn’t mean it was version controlled before svn/cvs, if that’s what you meant; by that time it was under version control, and the text files still had the old timestamps commented in them.
Subversion 1.0 was released in 2004, but it was already widely used before then.
Does that mean that three-letter agencies were/are able to recruit from the relevant fields for each type of malware? For example, fast16 might actually have been written by someone who used to write scientific calculation software, while Stuxnet was written by someone who used to work for Siemens?
Download link for anyone who is curious enough:
https://bazaar.abuse.ch/sample/9a10e1faa86a5d39417cae44da5ad...
I'll probably build a Windows XP VM first.
This is an amazing find. I'm very curious regarding the specific targets of these rules, and in the exact changes to the results. Wonder if they will only make a difference in simulated conditions super specific to nuclear reactors?
heh the key move is the worm. you can't catch it by checking on a second box because there is no clean box.
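A toy illustration of that point: if the sabotage is deterministic and identical everywhere, redundant machines agree on the same wrong number, so a cross-check never fires. The `tampered_exp` routine below is entirely hypothetical, just a sketch of the idea:

```python
import math
import struct

def tampered_exp(x: float) -> float:
    # Hypothetical sabotaged math routine: compute the right answer,
    # then deterministically flip the lowest mantissa bit of the
    # IEEE-754 representation. Every machine running the same patched
    # code returns the *same* wrong value.
    good = math.exp(x)
    (bits,) = struct.unpack("<Q", struct.pack("<d", good))
    (bad,) = struct.unpack("<d", struct.pack("<Q", bits ^ 1))
    return bad

# Two "independent" checks on two infected machines agree exactly,
# so comparing them reveals nothing, even though both are wrong.
a = tampered_exp(1.0)
b = tampered_exp(1.0)
print(a == b, a == math.exp(1.0))  # True False
```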
Haha, it’s a fun finding though. The source control comment feels a little off; I’m sure there was still SCCS (hmm, or did cvs use similar markers?) around at that time.
I believe that comment was specific to it being unusual in Windows software, suggesting the developers were also working on UNIX stuff (where SCCS/RCS usage was common).
Thank you for sharing this. I was recently pushing the limits of precision computing, and this illuminated a part of my research. It built on top of largely government-funded research, where I found a surprising dearth of available precision frameworks with verification. Perhaps national security interests, as elucidated by the original poster, discourage transparency of methods for arbitrary precision calculations.
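For what it’s worth, one low-tech verification pattern that needs no special framework is re-running a float computation in arbitrary precision and bounding the difference. A sketch using Python’s standard `decimal` module; the function name and tolerance are arbitrary choices for illustration:

```python
from decimal import Decimal, getcontext

def verified_sum(xs, tol=Decimal("1e-12")):
    # Fast path: native binary64 arithmetic.
    fast = sum(xs)
    # Check path: redo the sum at 50 significant digits. Converting a
    # float to Decimal is exact, so `exact` carries no rounding from
    # the original inputs.
    getcontext().prec = 50
    exact = sum(Decimal(x) for x in xs)
    if abs(Decimal(fast) - exact) > tol * max(abs(exact), 1):
        raise ArithmeticError("float result outside tolerance")
    return fast

print(verified_sum([0.1] * 10))  # 0.9999999999999999
```

A corrupted FP pipeline (or a plain accumulation bug) that pushed the fast result outside the tolerance would trip the check, as long as the high-precision path itself is trusted.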
sabotaging science must be the most morally corrupt thing you can do as a civilisation
Nah; it's to prevent a country from developing a superweapon and possibly triggering WW3 / worldwide nuclear annihilation.
This comment is very exaggerated; I can think of a few more "morally corrupt" things to do.
Spying on and sabotaging weapons development of foreign adversaries is a completely normal government function
I wonder how many results got nerfed via https://en.wikipedia.org/wiki/Pentium_FDIV_bug before it was known about.
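For scale, the canonical failing case Thomas Nicely reported can be checked against the erroneous value widely quoted for the flawed chips: the error showed up around the fifth significant digit, roughly 61 parts per million. A quick sketch (the `flawed` constant is taken from contemporaneous accounts of the bug, not computed here):

```python
from fractions import Fraction

# The canonical FDIV test case: on a flawed Pentium, 4195835/3145727
# came back wrong in the fifth significant digit.
correct = Fraction(4195835, 3145727)   # exact rational value
flawed = 1.333739068902037589          # value reported for buggy chips

rel_err = abs(float(correct) - flawed) / float(correct)
print(f"correct  ~ {float(correct):.12f}")  # ~ 1.333820449136
print(f"rel. err ~ {rel_err:.2e}")          # ~ 6.10e-05
```

An error that size is huge by FP standards (a correctly rounded double carries ~16 digits), but it only appeared for rare operand patterns, which is presumably why so little published work ever needed revisiting.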
I’d be surprised if it were a lot. At that time (open to corrections), not a lot of scientific research was done on consumer Intel platforms.
Obviously it was found by a mathematician, but I still suspect it wasn’t obvious in published research, or that it ended up not causing significant enough deviations for researchers to revisit the calculations.
My team ran into some interesting but very small deviations when we moved our iterative solar wind model from 32 bit to 64 bit, but the changes weren’t significant enough to revisit or re-do prior research wholesale.
Like my team in the 2000s, I suspect anyone who had data crunched by this bug also revisited it and either concluded it wasn’t significant enough or redid the work and found it didn’t change the conclusions.
I’m curious now whether this bug was cited in any papers at the time, to give a rough idea of how aware or affected academics were.
None of the science being sabotaged was being published in peer reviewed journals was it? (besides the Portuguese hydrodynamic modeling stuff, but it could have been accidental or had other uses)
And yes, to be clear, I don’t consider it contributing to “science” if it’s not published, reviewed, and reproducible.
How about killing scientists and engineers? [1]
[1] https://en.wikipedia.org/wiki/Assassinations_of_Iranian_nucl...
Scientists and engineers also invented Zyklon-B gas and built the crematoriums in the concentration camps. Don’t underestimate what scientists and engineers can do to Jews.
The first thing I thought of was The 3-Body Problem series. If you've read the books (or watched the shows), you'll know what I mean.
The submitted article appears to be an LLM summary of https://www.sentinelone.com/labs/fast16-mystery-shadowbroker...
No clue if the link that I posted is an AI summary. I also just found it somewhere.
But indeed many more details in the link you shared. Thanks for posting this!
I think LLMs would do a better job.
I was about to respond saying what a terrible article it was, as it reads as if the author has no idea what he's talking about. That it's an attempt to paraphrase the original article would explain it.
I don't see how it can be an LLM summary of that page given that it mentions many things that your link doesn't.
Edit: Old link for those wondering, since it got changed: https://hackingpassion.com/fast16-pre-stuxnet-cyber-sabotage...
Such as?
Have you read both of them? There's a ton of stuff. "Advances in Civil Engineering", "TMSR-LF1", "Black Hat Asia"...
It appears to be a summary of both the official SentinelOne article, and this one:
https://www.theregister.com/2026/04/24/fast16_sabotage_malwa...
No, these aren't all mentioned there either: https://news.ycombinator.com/item?id=47914748
> This one did not destroy machines or blow things up. It corrupted the math.
This LLM style of writing has had its day.
Thank you for finding this - the original is a really interesting article.
(@dang - consider re-pointing to this?)
Or to https://www.theregister.com/2026/04/24/fast16_sabotage_malwa...
The current article is hard to read
https://www.theregister.com/2026/04/24/fast16_sabotage_malwa...
This one has some additional details, based on a talk given by one of the authors.
Changed now from https://hackingpassion.com/fast16-pre-stuxnet-cyber-sabotage.... Thanks!
So that's why China still can't make ballpoint pens? /s