tux3 a month ago

See the public phab ticket: https://phabricator.wikimedia.org/T419143

In short, a Wikimedia Foundation account was doing some sort of test which involved loading a large number of user scripts. They decided to just start loading random user scripts, instead of creating some just for this test.

The user who ran this test is a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account, which has permissions to edit the global CSS and JS that runs on every page.

One of those random scripts was a two-year-old malicious script from ruwiki. This script injects itself into the global JavaScript on every page, and then into the user scripts of any user who runs into it, so it started spreading and doing damage really fast. This triggered tons of alerts, until the decision was made to turn the wiki read-only.

  • Ferret7446 a month ago

    This is a pretty egregious failure for a staff security engineer

    • ljm a month ago

      It's a pretty egregious failure for the org because it controlled the conditions for it to happen.

      The security guy is just the patsy because he actioned it.

      They have obviously done this a million times before and now they got burned.

      • sonofhans a month ago

        Yes, this. That same engineer shouldn’t have a pocket nuclear trigger shaped just like their key fob, either. Humans are predictable.

        • throwaway894345 a month ago

          Aren’t staff part of engineering leadership?

          • wmichelin a month ago

            At my job, I would just say they are in the ear of engineering leadership, but are not part of it.

            • throwaway894345 a month ago

              That makes sense. I guess I usually think of developing policies for this kind of thing to be pretty much what staff would do. I don’t usually expect the CTO to make decisions about how to do testing. To the extent the engineering leadership are to blame, it’s that they were the ones who hired/retained this guy. The buck ultimately stops with them to be sure, but making these kinds of policies seems within the remit of a staff eng.

    • greatgib a month ago

      As a staff engineer, you can't even imagine what his salary is, for him to screw up like that.

      That being said, interesting to see how salaries skyrocketed over the years: https://meta.wikimedia.org/wiki/Wikimedia_Foundation_salarie... but not that much for engineering.

      • sehansen 25 days ago

        The highest non-severance number is $512,179 for the CEO in 2022. That's not particularly extreme. It's ~1/10 of what the Mozilla Foundation CEO makes.

      • BorisMelnik a month ago

        that's insane...I am not donating anymore (not that I gave that much.)

    • type0 a month ago

      With all their donation begging, nothing will change: they will still spend money on useless seminars and continue to underfund security by hiring low-paid web amateurs to do the important work

    • mcmcmc a month ago

      Pretty much the definition of a “career limiting event”

      • modderation a month ago

        It's either a Career Limiting Event or a Career Learning Event.

        In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.

        In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.

        Hopefully the Wikimedia foundation is the former.

        • gorgoiler a month ago

          Realistically, there’s a third option which it would be glib to not consider: you lose your job, get hired somewhere else, and screw up in some novel and highly avoidable way because deep down you aren’t as diligent or detail-oriented as you think you are.

          • mock-possum a month ago

            This is the most likely outcome

        • the_af a month ago

          In the average real world, the staff engineer learns nothing, regardless of whether they get to lose or keep their job. Some time down the line, they make other careless mistakes. Eventually they retire, having learned nothing.

          This is more common than you'd think.

          • cjbgkagh a month ago

            I was able to run some stats at scale on this, and people who make mistakes are more likely to make more mistakes, not less. Essentially you're sampling from a distribution of propensity for mistakes, and that dominated any sign of learning from mistakes. Someone who repeatedly makes mistakes is not repeatedly learning; they are accident-prone.

            • jstanley a month ago

              My impression of mistakes was that they were an indicator of someone who was doing a lot of work. They're not necessarily making mistakes at a higher rate per unit of work, they just do more of both per unit of time.

              From that perspective, it makes sense that the people who made the most mistakes in the past will also make the most mistakes in the future, but it's only because the people who did the most work in the past will do the most work in the future.

              If you fire everyone who makes mistakes you'll be left only with the people who never make anything at all.

              • cjbgkagh a month ago

                In this case it was trivial to normalize for work done.

                It’s very human to want to be forgiving of mistakes (after all, who hasn't made any?), but there are different classes of mistakes made by all different types of people. If you make a mistake, you're still the same type of person; but if you pull from the distribution by sampling those who have made mistakes, you are biasing your sample in favor of those prone to making such mistakes. In my experience, any effect of learning is much smaller than this initial bias.
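
                The sampling argument is easy to see in a toy simulation (the propensity distribution and the size of the learning effect below are made-up assumptions, purely for illustration): even when every mistake triggers some learning, the people who erred before still err more afterwards, because selecting on past mistakes selects for high underlying propensity.

```python
import random

random.seed(42)

# Toy model: each worker has a fixed propensity to err, drawn from a
# skewed distribution, plus a small assumed "learning" effect after a mistake.
N_WORKERS = 10_000
TASKS_PER_PERIOD = 100
LEARNING_FACTOR = 0.9  # assumption: a mistake cuts future error rate by 10%

# Skewed propensities: most people rarely err, a few err often.
propensity = [random.betavariate(0.5, 20) for _ in range(N_WORKERS)]

def run_period(rates):
    """Return each worker's mistake count over TASKS_PER_PERIOD tasks."""
    return [sum(random.random() < r for _ in range(TASKS_PER_PERIOD))
            for r in rates]

period1 = run_period(propensity)

# Apply learning: anyone who erred in period 1 gets a reduced rate.
adjusted = [r * LEARNING_FACTOR if m > 0 else r
            for r, m in zip(propensity, period1)]
period2 = run_period(adjusted)

made_mistake = [i for i in range(N_WORKERS) if period1[i] > 0]
clean = [i for i in range(N_WORKERS) if period1[i] == 0]

avg = lambda idx: sum(period2[i] for i in idx) / len(idx)
print(f"period-2 mistakes, prior offenders: {avg(made_mistake):.2f}")
print(f"period-2 mistakes, prior clean:     {avg(clean):.2f}")
```

                Despite the learning discount, the prior-offender group still averages more period-2 mistakes, which is the selection bias dominating the learning effect.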

            • Dibby053 a month ago

              Can you elaborate? What scale? What kind of mistakes? This sounds quite interesting.

              • cjbgkagh a month ago

                A decade of data from many hundreds of people, in a help-desk-type role where all communication was kept, mostly chat logs and emails. Machine learning with manual validation. The goal was to put a dollar figure on mistakes made, since customers were much more likely to quit and never come back if it was our fault; but also many customers are nothing but a constant pain in the ass, so it was important to distinguish who was right whenever there was a conflict.

                Mistakes made per call, like many things, were on a Pareto distribution, so 90% of the mistakes are made by 10% of the people. Identifying and firing those 10% made a huge difference. Some of the ‘mistakes’ were actually a result of corruption and they had management backing as management was enriching themselves at the cost of the company (a pretty common problem) so the initiative was killed after the first round.

                • schuyler2d a month ago

                  This sounds really interesting but possibly qualitatively different than programming/engineering where automated improvements/iterations are part of the job (and what's rewarded)

            • lolive a month ago

              What if you define a hard rule from these statistics that « you must fire anyone on their first error »? Won't your company be empty in a rather short timeframe? [Or be composed only of do-nothing people?]

              • cjbgkagh a month ago

                Why would you do that? You're sampling from a distribution; a single sample only carries a small amount of information, though repeat samples compound.

            • Angostura a month ago

              Or they are working in a very badly designed system which consistently encourages them to make mistakes

      • xvector a month ago

        They'll be fine, recruiters don't look this stuff up and generally background checks only care about illegal shit.

      • radicaldreamer a month ago

        Nobody is going to know who did this, so probably not career limiting in any major way.

        • xeromal a month ago

          They named him in the support ticket linked here somewhere.

          > sbassett

    • pocksuppet a month ago

      [flagged]

      • adxl a month ago

        Is ok, the AI was going to replace them in a few weeks anyway.

  • londons_explore a month ago

    Didn't realise this was some historic evil script and not some active attacker who could change tack at any moment.

    That makes the fix pretty easy. Write a regex to detect the evil script, and revert every affected page to a historic version without it.
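
    A minimal sketch of that cleanup loop, assuming the revision texts are already in hand (the signature regex below is invented for illustration; the real indicator strings would come from the forensic analysis, and a real cleanup would page through the MediaWiki revisions API rather than use an in-memory list):

```python
import re

# Hypothetical worm signature, purely for illustration; real indicator
# strings would come from the incident analysis.
WORM_SIGNATURE = re.compile(
    r"basemetrika\.ru|mw\.loader\.load\([^)]*test\.js", re.IGNORECASE
)

def latest_clean_revision(revisions):
    """revisions: list of (rev_id, wikitext) pairs, newest first.
    Return the id of the newest revision that does not match the
    signature, or None if every stored revision is infected."""
    for rev_id, text in revisions:
        if not WORM_SIGNATURE.search(text):
            return rev_id
    return None

# Toy revision history for one page, newest first.
history = [
    (105, 'mw.loader.load("//ru.wikipedia.org/...test.js");'),
    (104, '<img src=x onerror=basemetrika.ru payload>'),
    (103, "Perfectly normal article text."),
]
print(latest_clean_revision(history))  # → 103
```

    Scanning newest-to-oldest and stopping at the first clean revision keeps the revert minimal; a stricter cleanup would also diff that revision against the live page to confirm nothing else changed.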

    • jl6 a month ago

      Letting ancient evil code run? Have we learned nothing from A Fire Upon the Deep?!

      • varenc a month ago

        Link to the Prologue of Fire Upon the Deep: https://www.baen.com/Chapters/-0812515285/A_Fire_Upon_the_De...

        It's very short and from one of my favorite books. Increasingly relevant.

        • iugtmkbdfil834 a month ago

          I swear, I respect Vinge more and more based on how well he seems to understand human tendencies to plot some plausible trajectories for our civilization.

          • Nition a month ago

            There's a little throwaway thing in the book (or maybe it was in the prequel) that I always liked, re understanding human tendencies. They're still using Unix time, starting in Jan 1st 1970, but given that their culture is so space-travel-focused they assume the early humans set it to coincide with man's first trip to the moon.

            • duskwuff a month ago

              That's from the prequel, A Deepness in the Sky. (Which is also excellent.)

              • hinkley a month ago

                A Deepness in the Sky probably has the first sci-fi aliens I've read that didn't feel like humans wearing alien suits.

                Fantasy sometimes does this better but usually with specific tropes.

                • aerique a month ago

                  If you liked that and you haven't read it yet, give "Dragon's Egg" by Robert L. Forward a read.

          • HoldOnAMinute a month ago

            I wish he could have seen the current state of GenAI. Several times in the book he talks about how the ship understands context clues and sarcasm, and that effective natural language translation requires near-sentience.

      • HoldOnAMinute a month ago

        "It was really just humans playing with an old library. It should be safe, using their own automation, clean and benign.

        This library wasn't a living creature, or even possessed of automation (which here might mean something more, far more, than human)."

      • 12_throw_away a month ago

        \(^O^)/ zones of thought mentioned \(^O^)/

        • HoldOnAMinute a month ago

          Do you remember the part where they built a machine in the Transcend that had to work at the Bottom of the Beyond?

          The other day I was using Claude for a task, and it occurred to me: what if Claude were unreachable?

          So, I told it to "encode your wisdom into this script in case you are not available"

          That was my own version of that

      • NBJack a month ago

        Legitimately listening to this book for the first time after a coworker recommended it. It's rapidly becoming one of my favorite books that balances the truly alien with the familiar just right.

        Not so ironically, it came up when we were discussing "software archeology".

      • monista a month ago

        Learning from fiction? Let's learn from Dune then and start the Butlerian Jihad already.

      • edoceo a month ago

        I've only just heard of it. But, I already knew to not run random scripts under a privileged account. And thank you for the book suggestion - I'm into those kinds of tales.

      • xeromal a month ago

        I love that book

      • hinkley a month ago

        Army of Darkness?

        The Mummy?

    • observationist a month ago

      Are you sure? Are you $150 million ARR sure? Are you $150 million ARR, you'd really like to keep your job, you're not going to accidentally leave a hole or blow up something else, sure?

      I agree, mostly, but I'm also really glad I don't have to put out this fire. Cheering them on from the sidelines, though!

      • hinkley a month ago

        Honestly, since I'm never really in a position to see much of that money, at this point I'd be more concerned about my coworkers. And while that typically correlates with the amount of money you either have or receive, they're often out of balance one way or the other.

    • jacquesm a month ago

      True but it does say something that such a script was able to lie dormant for so long.

      • outofpaper a month ago

        Why would anyone test in production???!!!

        • ninth_ant a month ago

          Selecting the wrong environment in your test setup by mistake?

          I refuse to believe that someone on the security team intentionally tested random user scripts in production on purpose.

          • withinboredom a month ago

            Once you get big enough… there comes a point where you need to run some code and learn what 100 million people hitting it at once looks like. At that scale, "1 in a million" class bugs/race conditions literally happen every day. You can't do that on every PR, so you ship it and prepare to roll back if anything even starts to look fishy. Maybe even just roll it out gradually.

            At least, that’s how it worked at literally every big company I worked at so far. The only reason to hold it back is during testing/review. Once enough humans look at it, you release and watch metrics like a hawk.

            And yeah, many features were released this way, often gated behind feature flags to control rollout. When I refactored our email system that sent over a billion notifications a month, it was nerve-wracking. You can't unsend an email, and it would likely be hundreds of millions sent before we noticed a problem at scale.
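
            The gating itself can be very simple. A sketch of the deterministic-bucketing pattern commonly used for gradual rollouts (flag names and percentages here are hypothetical):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Deterministically bucket a user into a percentage rollout.
    Hashing (flag, user_id) keeps each user's assignment stable
    across requests and independent across different flags."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") / 0xFFFFFFFF
    return bucket < percent / 100.0

# Ramping a hypothetical flag: the same users stay enabled as the
# percentage grows, so a ramp only ever adds users.
users = [f"user{i}" for i in range(100_000)]
at_5 = {u for u in users if in_rollout(u, "new-email-pipeline", 5)}
at_20 = {u for u in users if in_rollout(u, "new-email-pipeline", 20)}
print(len(at_5) / len(users))   # close to 0.05
print(at_5 <= at_20)            # True: ramping is monotonic
```

            Because the bucket depends only on the hash, there is no shared state to coordinate: every server makes the same decision for the same user, and rolling back is just lowering the percentage.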

            • ninth_ant a month ago

              Yes this is a common release practice.

              However this is a different situation as we’re talking about running arbitrarily found third-party scripts. I can’t imagine that was ever intended to be done in production.

              Fun story, when I worked at Facebook in the earlier days someone accidentally made a change that effectively set the release flags for every single feature to be live on production. That was a day… we had to completely wipe out memcached to stop the broken features and then the database was hammered to all hell.

            • ghxst a month ago

              I would say you can get to this point far below 100 million people, especially on web. Some people are truly special and have some kind of setup you just can't easily reproduce. But I agree, you do really have to be confident in your ability to control rollout / blast radius, monitor and revert if needed.

          • irishcoffee a month ago

            > I refuse to believe that someone on the security team intentionally tested random user scripts in production on purpose.

            Do I have a bridge to sell you, oh boy

        • fifilura a month ago

          I have never heard of this kind of insane behaviour before.

        • HoldOnAMinute a month ago

          There are plenty of ways to safely test in production. For one thing you need to limit the scope of your changes.

        • kQq9oHeAz6wLLS a month ago

          "Everyone has a test environment. Some are lucky enough to have a separate production environment."

    • Melatonic a month ago

      Or just restore from backup across the board. Assuming they do their backups well, this shouldn't be too hard (especially since it's currently in read-only mode, which means no new updates).

  • cesarb a month ago

    > One of those random scripts was a 2 year old malicious script from ruwiki. This script injects itself in the global Javascript on every page, and then in the userscripts of any user that runs into it, so it started spreading and doing damage really fast.

    So, like the Samy worm? (https://en.wikipedia.org/wiki/Samy_%28computer_worm%29)

  • Fokamul a month ago

    I'm guessing, "1> Hey Claude, your script ran this malicious script!"

    "Claude> Yes, you're absolutely right! I'm sorry!"

  • davidd_1004 a month ago

    300 million dollar organization btw

    • aiiane a month ago

      aka tiny, relatively speaking, compared to similar sites with the same user base

  • karel-3d a month ago

    wait as a wikipedia user you can just put random JS to some settings and it will just... run? privileged?

    this is both really cool and really really insane

    • hk__2 a month ago

      Yes, you can have your own JS/CSS that’s injected in every page. This is pretty useful for widgets, editing tools, or to customize the website’s appearance.

      • karel-3d a month ago

        It sounds very dangerous to me but who am I to judge.

        • Brian_K_White a month ago

          It's nothing.

          For the global ones that need admin permissions to edit, it's no different from all the other code of MediaWiki itself, like the PHP.

          For the user scripts, it's no worse than the fact that you can run Tampermonkey in your browser and have it modify every page from every site in whatever way you want.

          • karel-3d a month ago

            Well it has just been shown it's not nothing

        • corndoge a month ago

          That is how Mediawiki works. Everything is a page, including CSS and JS. It is not really different than including JS in a webpage anywhere else.

        • bawolff a month ago

          It is kind of risky - you now have an entire, mostly unreviewed ecosystem of JavaScript code that users can experiment with.

          However it's been really useful to allow power users to customize the interface to their needs. It's also sort of a pressure release for when the official devs are too slow in meeting needs. At this point Wikipedia has become very dependent on it.

        • hk__2 a month ago

          It only affects your user; it’s just like adding random extensions to your browser.

    • Ekaros a month ago

      Fundamentally I feel the whole "web", as in anything running in a browser, is insane and broken security-wise. When you allow mostly arbitrary code to run when you load a page... well, it can do mostly arbitrary things, and everyone else needs to protect against it.

      And when you have enough rights, you get to add arbitrary code to everywhere on your site.

  • AlienRobot a month ago

    On one hand, I was about to get irrationally angry someone was attacking Wikipedia, so I'm a bit relieved

    On the other hand,

    >a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account

    seriously?

    • streetfighter64 a month ago

      To paraphrase Bush,

      > our enemies are innovative and resourceful, and so are we. They never stop thinking about new ways to harm our site and our users, and neither do we.

  • amai a month ago

    Why am I not surprised that the malicious script was from ruwiki?

  • mos87 a month ago

    >they decided to do this test under their

    what language is this?

  • cc-d a month ago

    this was us. we pumpin hard. we literall run, as in literally run the organisms of, senior wikimedia staff as well as employees

    this is just us playing on the computer, we got b0mbz

nhubbard a month ago

Wow. This worm is fascinating. It seems to do the following:

- Inject itself into the MediaWiki:Common.js page to persist globally, and into the User:Common.js page to do the same as a fallback

- Uses jQuery to hide UI elements that would reveal the infection

- Vandalizes 20 random articles with a 5000px wide image and another XSS script from basemetrika.ru

- If an admin is infected, it will use the Special:Nuke page to delete 3 random articles from the global namespace, AND use the Special:Random with action=delete to delete another 20 random articles

EDIT! The Special:Nuke is really weird. It gets a default list of articles to nuke from the search field, which could be any group of articles, and rubber-stamps nuking them. It does this three times in a row.

  • divbzero a month ago

    There doesn’t seem to be an ulterior motive beyond “Muahaha, see the trouble I can cause!”

    • batiudrami a month ago

      A classical virus, from the good old days. None of this botnet/bitcoin mining in the background nonsense.

      • mghackerlady a month ago

        I've always wanted to make a virus like those of the olden days. I wouldn't do anything malicious with it, but maybe I would deploy it to a friend's computer if it wasn't very destructive. What resources are there to learn about viruses?

      • aerique a month ago

        On the Atari ST we had a boot sector virus that inverted the mouse Y-axis after some random time.

        So annoying.

    • creatonez a month ago

      No one actually knows what the payload from basemetrika.ru contains, though. So it's possible it was originally intended to be more damaging. But no matter what it would have caught attention super fast, so there's probably an upper limit to how sophisticated it could have been.

  • 256_ a month ago

    As someone on the Wikipediocracy forums pointed out, basemetrika.ru does not exist. I get an NXDomain response trying to resolve it. The plot thickens.

    • pKropotkin a month ago

      Yeah, basemetrika.ru is free now. Should we occupy it? ;)

      • acheong08 a month ago

        I registered it about 40 minutes ago, but it seems the DNS has been cached by everyone as a result of the Wikipedia hack & not even the NS is propagating. Can't get an SSL certificate.

        • Imustaskforhelp a month ago

          I had looked into its availability on a registrar too, just out of curiosity, before reading your comment. At least it's been taken by the Hacker News community and not a malicious actor.

          Do keep us updated on the whole situation if anything relevant happens from your POV.

          I'd suggest giving the domain to the Wikipedia team, as they might know the best use for it, if possible.

          • acheong08 a month ago

            Not quite sure which channels I should reach out via but I've put my email on the page so they can contact me.

            Based on timings, it seems that Wikipedia wasn't really at risk from the domain being bought as everything was resolved before NS records could propagate. I got 1 hit from the URL which would've loaded up the script and nothing since.

            • bawolff a month ago

              It's misinformation that the malicious script loaded that domain. The malicious script did have a URL with that domain in it, but it wouldn't load JavaScript from it (possibly due to a programming mistake/misunderstanding by the author; it's kind of unclear what the original intent was).

        • bjord a month ago

          nice work

      • Barbing a month ago

        Namecheap won’t sell it, which is great because it made me pause and wonder whether it's legal for an American to send Russians money for a TLD.

        • throw-the-towel a month ago

          Namecheap is Ukrainian, of course they won't sell you a .ru domain.

          • craftkiller a month ago

            Is it? Wikipedia says:

            > Namecheap is a U.S. based domain name registrar and web hosting service company headquartered in Phoenix, Arizona.

            and in 2025 they were purchased by:

            > CVC Capital Partners plc is a Jersey-based private equity and investment advisory firm

            • mkl a month ago

              https://news.ycombinator.com/item?id=30504812

              Top comment is from the CEO and explains: "We have people on the ground in Ukraine being bombarded now non stop."

              • craftkiller a month ago

                I'm not questioning whether or not they have Ukrainian employees, I'm questioning the statement "Namecheap is Ukrainian". That post+comment does not address that. McDonalds has employees in Vietnam but McDonalds is not Vietnamese.

            • throw-the-towel a month ago

              I remember that in 2022 a sizeable part of their workforce was located in Ukraine. Too lazy to search for proof, sorry!

            • justsomehnguy a month ago

              It is. Just punch its name into the search box down below.

      • 256_ a month ago

        I'm half-tempted to try and claim it myself for fun and profit, but I think I'll leave it for someone else.

        What should we put there, anyway?

      • amiga386 a month ago

        It means giving money to the Russian government, so no.

        If anyone from the Russian government is reading this, get the fuck out of Ukraine. Thank you.

        • dwedge a month ago

          Well done, it's finally over

          • amiga386 a month ago

            Thanks! For my next trick, I'll solve systemic racism by turning my logo black for a month.

            • dwedge a month ago

              Make sure you support LGBT rights by superimposing a rainbow over your rainbow, but only in the countries where LGBT people already have rights - it would be bad for business to do it in those other countries.

        • SanjayMehta a month ago

          "In 2023, the United States imported U3O8 and equivalents primarily from Canada, Australia, Russia, Kazakhstan, and Uzbekistan. The origin of U3O8 used in U.S. nuclear reactors could change in the coming years. In May 2024, the United States banned imports of uranium products from Russia beginning in August, although companies may apply for waivers through January 1, 2028."

          https://www.eia.gov/todayinenergy/detail.php?id=64444

        • cryptoegorophy a month ago

          [flagged]

          • Rendello a month ago

            If anyone is genuinely curious about this, they were indeed letting Russian gas through and stopped in 2025:

            > On 1 January 2025, Ukraine terminated all Russian gas transit through its territory, after the contract between Gazprom and Naftohaz signed in 2019 expired. [...] It is estimated that Russia will lose around €5bn a year as a result.

            https://en.wikipedia.org/wiki/Russia%E2%80%93Ukraine_gas_dis...

          • yenepho a month ago

            You must be fun at parties

            • bregma a month ago

              They're a ... gas.

            • DaSHacka a month ago

              More fun than GP lol

        • INR18650 a month ago

          [flagged]

          • avidruntime a month ago

            I don't think voting with your wallet constitutes virtue signaling, especially at a time when end user boycotting is one of the universally known methods of protest.

            • janalsncm a month ago

              I am a pragmatist so maybe I will never understand this line of thinking. But in my mind, there are no perfect options, including doing nothing.

              By doing nothing, you are allowing a malicious actor to buy the domain. In fact I am sure they would love for everyone else to be paralyzed by purity tests for a $1 domain.

              All things being equal, yeah don’t buy a .ru domain. But they are not equal.

  • bawolff a month ago

    > Vandalizes 20 random articles with a 5000px wide image and another XSS script from basemetrika.ru

    Note that while this looks like it's trying to trigger an XSS, what it's doing is ineffective, so basemetrika.ru would never get loaded (even ignoring that the domain doesn't exist).

  • dheera a month ago

    Wouldn't be surprised if elaborate worms like this are AI-designed

    • nhubbard a month ago

      I wouldn't be surprised either. But the original formatting of the worm makes me think it was human-written, or maybe AI-assisted, but not 100% AI. It has a lot of unusual stylistic choices that I don't believe an AI would intentionally output.

      • creatonez a month ago

        > It has a lot of unusual stylistic choices that I don't believe an AI would intentionally output.

        Indeed. One of those unusual choices is that it uses jQuery. Gotta have IE6 compatibility in your worm!

        I'm not sure what to make of `Number("20")` in the source code. I would think it's some way to get around some filter intended to discourage CPU-intensive looping, but I don't think user scripts have any form of automated moderation, and if that were the case it doesn't make sense that they would allow a `for` loop in the first place.

        • dheera a month ago

          jQuery is still sooo much easier to use than React and whatever other messes modern frameworks have created. As a bonus, you don't have to npm build your JS project, you just double click and it opens and works without any build step, which is how interpreted languages were intended to be.

    • integralid a month ago

      I would. AI designed software in general does not include novel ideas. And this is the kind of novel software AI is not great at, because there's not much training data.

      Of course it's very possible someone wrote it with AI help. But almost no chance it was designed by AI.

      • bawolff a month ago

        Almost certainly not AI, due to the age of when it was written. However it's a very simple script; I think it's certainly within the realm of AI to write a short script that makes a few API requests.

    • streetfighter64 a month ago

      Turns out it's a pretty rudimentary XSS worm from 2023. If all you have is a hammer, everything looks like a nail; if all you have is a LLM, everything looks like slop?

    • idiotsecant a month ago

      I mean....elaborate is a stretch.

wikiperson26 a month ago

A theory on phab: "Some investigation was done in the Russian Wikipedia Discord chat, maybe it will be useful.

1. In 2023, vandal attacks were made against two Russian-language alternative wiki projects, Wikireality and Cyclopedia. Here https://wikireality.ru/wiki/РАОрг is an article about the organizers of these attacks.

2. In 2024, ruwiki user Ololoshka562 created a page https://ru.wikipedia.org/wiki/user:Ololoshka562/test.js containing the script used in these attacks. It was inactive for the next 1.5 years.

3. Today, sbassett mass-loaded other users' scripts into his global.js on meta, maybe for testing global API limits: https://meta.wikimedia.org/wiki/Special:Contributions/SBasse... . In one edit, he loaded Ololoshka's script: https://meta.wikimedia.org/w/index.php?diff=prev&oldid=30167... and ran it."

  • orbital-decay a month ago

    I remember someone mass-defacing the ruwiki almost exactly a year ago (March 3 2025) with some immature insults towards certain ruwiki admins. If I'm not mistaken it was a similar method.

    • Lockal a month ago

      No, I think you are mixing something up.

      - There are constant defacement incidents caused by edits to unprotected / semi-protected templates

      - There were incidents of UI mistranslation (because MediaWiki translation is crowdsourced)

      - The attack that was applied is well known in the Russian community; it is pretty much the standard "admin-woodpecker". The standard woodpecker (some people call it neo-woodpecker) renamed all pages at high speed (I've known of this since 2007; the name woodpecker appeared many years later); then MediaWiki added throttling for renames; then neo-woodpecker reappeared in different years (usually associated with throttling-bypass CVEs). Early admin-woodpeckers were much more destructive (they destroyed dozens of MediaWiki websites due to lack of backups). The nuking admin-woodpecker is quite a boring one, but I think (I hope) there are some AbuseFilter guardrails configured to prevent complex woodpeckers.

      - The attack initiator is 100% a well-known user; there are not too many users who have applied the woodpecker in the first place, and not too many "upyachka" fans (which indicates the user was editing before 2010; back then active editors knew each other much better). But it is quite pointless to discuss who exactly the initiator is.

      - The Wikireality page is hijacked by a small group and does not represent reality.
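
      The rename throttling mentioned above is, at bottom, a rate limit (in MediaWiki this class of limit is configured via $wgRateLimits, if I recall correctly). A minimal token-bucket sketch, not MediaWiki's actual implementation:

```python
import time

class TokenBucket:
    """Allow at most `rate` actions per second, with burst capacity `cap`."""

    def __init__(self, rate, cap):
        self.rate, self.cap = rate, cap
        self.tokens, self.last = float(cap), time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at `cap`.
        now = time.monotonic()
        self.tokens = min(self.cap, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

      A woodpecker burst blows through the bucket immediately, after which every further rename is rejected until tokens refill.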

Kiboneu a month ago

> Cleaning this up is going to be an absolute forensic nightmare for the Wikimedia team since the database history itself is the active distribution vector.

Well, the worm didn't get root -- so if Wikimedia takes snapshots or made a recent backup, it's probably not so much of a nightmare? Then the diffs can tell a fairly detailed forensic story, including indicators of motive.

Snapshotting is a very low-overhead operation, so you can make them very frequently and then expire them after some time.

  • Extropy_ a month ago

    Even if they reset to several days ago and lose, say, thousands of edits, even tens of thousands of minor edits, they're still in a pretty good place. Losing a few days of edits is less than ideal but very tolerable for Wikipedia as a whole.

    • tetha a month ago

      At $work we're hosting business knowledge databases. Interestingly enough, if you need to revert a day or two of edits, you're better off doing it ASAP rather than postponing and mulling it over. Especially if you can keep a dump or an export around.

      People usually remember what they changed yesterday and have uploaded files and such still around. It's not great, but quite possible. Maybe you need to pull a few content articles out from the broken state if they ask. No huge deal.

      If you decide to roll back after a week or so, editors get really annoyed, because now they are usually forced to backtrack and reconcile the state of the knowledge base, maybe you need a current and a rolled-back system, it may have regulatory implications and it's a huge pain in the neck.

      • canpan a month ago

        I preach to everyone to fail as loudly and as fast as possible. Don't try to "fix" unknown errors in code. This often catches fresh graduates off guard. If you fail loudly and fast, most issues will be found ASAP and fixed.

        I had to help out a team in the cleanup of a bug that silently corrupted some data for a while before being found. It was too long out to roll back, and they needed all the help they could get to identify which data was real and which was wrong.

    • Kiboneu a month ago

      Nah, you can snapshot every 15 minutes. The snapshot interval depends on the frequency of changes and the available capacity, but it's up to them how to allocate that capacity... it's definitely doable and there are real reasons for doing so. You can collapse the deltas between snapshots after some time to make them last longer. I'd be surprised if they don't do that.

      As an aside, snapshotting would have prevented a good deal of horror stories shared by people who give AI access to the FS. Well, as long as you don't give it root.......

      • john_strinlai a month ago

        >Nah, you can snapshot every 15 minutes.

        obviously you can. but, what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot? that is what matters.

        in any case, the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesn't engage with their comment at all.

        • Kiboneu a month ago

          > the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesnt engage with their comment at all.

          I did engage, by pointing out that it was neither relevant nor a realistic scenario for a competent sysadmin. (Did you read the OP?) That's a /you/ problem if you rely on infrequent backups, especially for a service with so much flux.

          > what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot?

          ? Why would I know what their internal operations are?

          • john_strinlai a month ago

            >I did engage, by pointing out that it wasn't relevant nor a realistic scenario for a competent sysadmin.

            >Why would I know what their internal operations are?

            i mean... you must, right? you know that once-a-day snapshots is not relevant to this specific incident. you know that their sysadmins are apparently competent. i just assumed you must have some sort of insider information to be so confident.

            • Kiboneu a month ago

              I think you are misreading my comments and made a bad assumption. The reason I'm confident is because this has been my bread and butter for a decade.

              • john_strinlai a month ago

                >The reason I'm confident is because this has been my bread and butter for a decade.

                my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

                but im glad you have had a different experience

                • Kiboneu a month ago

                  > my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

                  Oh, I agree that the average bar is low. That's part of the reason I do it all myself.

                  The heuristic with Wikimedia is that they've been running a PHP service that accepts and stores (anonymous) input for 25 years. That longevity, given the risk exposure they have, is an indicator that they know what they are doing, and I'm sure they've learned from recovering from all sorts of failures over the years.

                  Look at how quickly it was brought back up in this instance!

                  So, yeah. I don't think the initial hypothetical counterpoint holds water, and that's what I have been pointing out.

                  • jibal a month ago

                    Kudos for very polite responses to trolling.

                    • john_strinlai a month ago

                      no one is trolling in this comment chain.

                      i found kiboneu's reply, treating a hypothetical musing as if it was some counterpoint in a debate instead of a simple expansion on their comment, to be off-putting. we had some comments back and forth and we both came out of it just fine. weird of you to add on this little insult to an otherwise pretty normal exchange.

                      • Kiboneu a month ago

                        FWIW I did not assume that you were trolling, and yes we did come out fine.

                    • Kiboneu a month ago

                      I have good faith, though I should get off hn now... :P

                      I still don't need to assume what the intent is. Troll or no troll, it works. My comments might inspire someone else to try a CoW fs. I'm also really impressed with wikimedia's technical team.

      • sobjornstad a month ago

        Nowadays I refuse to do any serious work that isn't in source control anywhere besides my NAS that takes copy-on-write snapshots every 15 minutes. It has saved my butt more times than I can count.

        • Kiboneu a month ago

          Yeah, same here. Earlier I had a sync error that somehow corrupted my .git. No problem; I went back 15 minutes and copied the working version.

          Feels good to pat oneself on the back. Mine is sore, though. My E&O/cyber insurance likes me.

      • gchamonlive a month ago

        The problem isn't the granularity of the backup. Since the worm silently nukes pages, it's virtually impossible to reconcile the pre-attack state with the current state, so you have to forfeit any changes made since then and ask the contributors to do the legwork of reapplying the correct changes.

        • Kiboneu a month ago

          Why would nuked pages matter? Snapshots capture everything and are not part of wikimedia software.

          • gchamonlive a month ago

            The nuke might be legitimate?

            • wizzwizz4 a month ago

              That's not a lot of state lost. Destructive operations are easier to replay than constructive ones.

              • gchamonlive a month ago

                Is Wikimedia overreacting then?

                • wizzwizz4 a month ago

                  No: from what I can tell, they're being conservative, which is appropriate here. Once you've pushed the "stop bad things happening" button, there's no need to rush.

  • bawolff a month ago

    Nothing was rolled back in the db sense, i think people just used normal wiki revert tools.

    It also never affected Wikipedia, just the smaller meta site (used for inter-project coordination).

  • hinkley a month ago

    I wonder if the bad traffic overwhelmed the good traffic enough that it's simpler to pick out some of the good traffic from the bad and replay it rather than spot all of the bad traffic.

varun_ch a month ago

Woah this looks like an old school XSS worm https://meta.wikimedia.org/wiki/Special:RecentChanges?hidebo...

I’ve always thought the fact that MediaWiki sometimes lets editors embed JavaScript could be dangerous.

  • varun_ch a month ago

    Also, I’m surprised an XSS attack like this hasn’t yet been used to harvest credentials like passwords through browser autofill[0].

    It seems like the worm code/the replicated code only really attacks stuff on site. But leaking credentials (and obviously people reuse passwords across sites) could be sooo much worse.

    [0] https://varun.ch/posts/autofill/

    • hrmtst93837 a month ago

      I think autofill-based credential harvesting is harder than it sounds because browsers and password managers treat saved credentials as a separate trust boundary, and every vendor implements different heuristics. The tricky part is getting autofill to fire without a real user gesture and then exfiltrating values, since many browsers require exact form attributes or a user activation and several managers ignore synthetic events.

      If an attacker wanted passwords en masse they could inject fake login forms and try to simulate focus and typing, but that chain is brittle across browsers, easy to detect and far lower yield than stealing session tokens or planting persistent XSS. Defenders should assume autofill will be targeted and raise the bar with HttpOnly cookies, SameSite=strict where practical, multifactor auth, strict Content Security Policy plus Subresource Integrity, and client side detection that reports unexpected DOM mutations.

    • stephbook a month ago

      Chrome doesn't actually autofill before you interact. It only visually displays what it would fill in at that location.

      • varun_ch a month ago

        but any interaction is good for Chrome, like dismissing a cookie banner

    • af78 a month ago

      Time to add 2FA...

greyface- a month ago
devmor a month ago

In the early 2010’s I worked for a company whose primary income was subscriptions to site protection services - one of which included cleaning up malware-infected Wordpress installations. I worked on the team that did this job.

This exact type of database-stored executable javascript was one of the most annoying types of infections to clean up.

  • 0xWTF a month ago

    Ok, so there are tons of mediawiki installations all over the internet. What do these operators do? Set their wikis to read-only mode, hang tight, and wait for a security patch?

    Also, does this worm have a name?

    • bawolff a month ago

      There is nothing to do, the incident was not caused by a vulnerability in mediawiki.

      Basically someone who had permissions to alter site js, accidentally added malicious js. The main solution is to be very careful about giving user accounts permission to edit js.

      [There are of course other hardening things that maybe should be done based on lessons learned]

      • dboreham a month ago

        There are already tools and techniques to validate served JS is as-intended, and these techniques could be beefed up by adding browser checks. I've been surprised these haven't been widely adopted given the spate of recent JS-poisoning attacks.

        • bawolff a month ago

          You mean like SRI? That's not really what happened here, so it's not really relevant.

      • streetfighter64 a month ago

        Well, admins (or anybody other than the developers / deployment pipeline) having permissions to alter the JS sounds like a significant vulnerability. Maybe it wasn't in the early 2000s, but unencrypted HTTP was also normal then.

        • bawolff a month ago

          That's a fair point, but keep in mind normal admin is not sufficient. For local users (the account in question wasn't local) you need to be an "interface admin", of which there are only 15 on english wikipedia.

          The account in question had "staff" rights which gave him basically all rights on all wikis.

          • cesarb a month ago

            > For local users (the account in question wasn't local) you need to be an "interface admin", of which there are only 15 on english wikipedia.

            It used to be all "admin" accounts, of which there were many more. Restricting it to "interface admin" only is a fairly recent change.

            • bawolff a month ago

              > Restricting it to "interface admin" only is a fairly recent change.

              It's been 8 years!

        • LaGrange a month ago

          > Well, admins (or anybody other than the developers / deployment pipeline) having permissions to alter the JS sounds like a significant vulnerability.

          It's a common feature of CMS'es and "tag management systems." Its presence is a massive PITA to developers even _besides_ the security, but PMs _love them_, in my experience.

infinitewars a month ago

A comment from my wiki-editor friend:

  "The incident appears to have been a cross-site scripting hack. The origin of rhe malicious scripts was a userpage on the Russian Wikipedia. The script contained Russian language text.

  During the shutdown, users monitoring [https://meta.wikimedia.org/wiki/special:RecentChanges Recent changes page on Meta] could view WMF operators manually reverting what appeared to be a worm propagated in common.js

  Hopefully this means they won't have to do a database rollback, i.e. no lost edits."

Interesting to note how trivial it is today to fake something as coming "from the Russians".
  • Lockal a month ago

    Why do you think it was faked? It is a well-known Russian technique (the woodpecker); the earliest version I can find now was created in 2013 (but I personally saw it in 2007). It is a well-known Damocles sword hanging over misconfigured MediaWiki websites.

Wikipedianon a month ago

This was only a matter of time.

The Wikipedia community takes a cavalier attitude towards security. Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review. They added mandatory 2FA only a few years ago...

Prior to this, any admin had that ability until it was taken away due to English Wikipedia admins reverting Wikimedia changes to site presentation (Mediaviewer).

But that's not all. Most "power users" and admins install "user scripts", which are unsandboxed JavaScript/CSS gadgets that can completely change the operation of the site. Those user scripts are often maintained by long abandoned user accounts with no 2 factor authentication.

Based on the fact user scripts are globally disabled now I'm guessing this was a vector.

The Wikimedia foundation knows this is a security nightmare. I've certainly complained about this when I was an editor.

But most editors that use the website are not professional developers and view attempts to lock down scripting as a power grab by the Wikimedia Foundation.

  • gucci-on-fleek a month ago

    > Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review.

    True, but there aren't very many interface administrators. It looks like there are only 137 right now [0], which I agree is probably more than there should be, but that's still a relatively small number compared to the total number of active users. But there are lots of bots/duplicates in that list too, so the real number is likely quite a bit smaller. Plus, most of the users in that list are employed by Wikimedia, which presumably means that they're fairly well vetted.

    [0]: https://en.wikipedia.org/w/api.php?action=query&format=json&...

  • RGamma a month ago

    Seems like a good time to donate one's resources to fix it. The internet is super hostile these days. If Wikipedia falls... well...

    • Wikipedianon a month ago

      It's a political issue. Editors are unwilling or unable to contribute to development of the features they need to edit.

      Unfortunately, Wikipedia is run on insecure user scripts created by volunteers that tend to be under the age of 18.

      There might be more editors trying to résumé-boost if editing Wikipedia under your real name didn't invite endless harassment.

    • tick_tock_tick a month ago

      Wikipedia doesn't even spend the donations on Wikipedia anymore.

    • logophobia a month ago

      This sounds more like a political issue. Can't buy your way out of that.

    • PsylentKnight a month ago

      My understanding is that Wikipedia receives more donations than they need, surely they have the resources to fix it themselves?

      • noosphr a month ago

        You would first need to realize it's a problem.

        • krater23 a month ago

          Maybe this is the reason for this worm. Someone is angry because they didn't get it any other way...

          • jibal a month ago

            The worm is a two year old script from the Russian Wiki that was grabbed randomly for a test by a stupid admin running unsandboxed with full privileges, so no.

  • bawolff a month ago

    > Prior to this, any admin had that ability until it was taken away due to English Wikipedia admins reverting Wikimedia changes to site presentation (Mediaviewer).

    You're mixing up events. Superprotect is unrelated to the IAdmin separation from normal admin. The two are separated by many years and basically totally unrelated.

    I agree with the rest of your post.

  • _verandaguy a month ago

    > Based on the fact user scripts are globally disabled now I'm guessing this was a vector.

    Disabled at which level?

    Browsers still allow user scripts via tools like Tampermonkey and Greasemonkey, and that's not something sites, including Wikipedia, can enforce (or, arguably, even trivially detect).

    As I say that out loud, I figure there's a separate ecosystem of Wikipedia-specific user scripts, but arguably the same problem exists.

    • howenterprisey a month ago

      Yeah, wikipedia has its own user script system, and that was what was disabled.

    • karel-3d a month ago

      This is apparently not done browser side but server side.

      As in, a user can upload whatever they wish and it will be shown to them and run, as JS, fully privileged and all.

    • Wikipedianon a month ago

      The sitewide JavaScript/CSS is an editable Wiki page.

      You can also upload scripts to be shared and executed by other users.

  • chris_wot a month ago

    [flagged]

    • alphager a month ago

      Most admins on Wikipedia are competent in areas outside of webdev and security.

      • chris_wot a month ago

        No, most admins are incompetent, full stop. I've been on the receiving end.

    • formerly_proven a month ago

      Wikipedia admins are not IT admins, they're more like forum moderators or admins on a free phpBB 2 hosting service in 2005. They don't have "admin" access to backend systems. Those are the WMF sysadmins.

      • Wikipedianon a month ago

        This is half true, because Wikipedia admins had the ability to edit sitewide JavaScript until 2018.

        A certain number of "community" admins maintain that right to this day after it was realized this was a massive security hole.

        • Gander5739 a month ago

          You mean interface admins?

j45 a month ago

Too much app logic in the client side (Javascript) has always been an attack vector. The more that can reasonably be server side, the more that can't be seen.

  • dns_snek a month ago

    The amount of javascript is really beside the point here. The problem is that privileged users can easily edit the code without strong 2FA, allowing automatic propagation.

    • j45 a month ago

      It's not; application logic exposed on the client side is always an attack vector, a way to figure out how the system works and devise attacks against it.

      It's simply a calculated risk.

      How much business and application logic you put in your Javascript is critical.

      On your second unrelated comment about Wikipedia needing to use 2FA, there's probably a better way to do it and I hope mediawiki can do it.

      • dns_snek a month ago

        I don't know what you mean by application logic being exposed client-side. To change the content on the website, nuke articles, and propagate the malicious JS code you need to hijack privileged users' credentials and use them to trigger server-side actions.

        It doesn't matter how much functionality the JS was originally responsible for, it could've been as little as updating a clock, validating forms, or just some silly animation. Once that JS executes in your browser it has access to your cookies and local storage, which means it can trigger whichever server-side actions it wants.

        My second comment is not unrelated. The root cause of this mess is the fact that JS can be edited by privileged users without an approval process. If every change to the JS code required the user to enter their 2FA code (TOTP, let's say) then there would be no way for the worm to spread whenever users visited a page.
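
        A per-edit TOTP check like this is cheap to implement server-side; RFC 6238 is just HMAC-SHA1 over a 30-second time counter. A minimal sketch (illustrative only; MediaWiki's actual 2FA is, I believe, provided by the OATHAuth extension, not this code):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of `step`-second
    intervals since the Unix epoch, dynamically truncated (RFC 4226)."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

        The server would compare the submitted code against `totp(shared_secret)` (allowing a window of one step for clock skew) before accepting any sitewide-JS edit, so a hijacked session alone couldn't propagate the worm.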

        • j45 a month ago

          Ah, I’m not speaking about JavaScript within the content of wikipedia as you are.

          I’m referring to the use of JavaScript in general in the building of web apps themselves. My comment is the same about 2FA.

          I’m making these comments from a general perspective, because I see it as a security risk when front-end scriptability and app logic are more exposed than they would be in server-side apps.

          Hope that clarifies my comments.

    • shevy-java a month ago

      How does 2FA prevent this here?

      • dns_snek a month ago

        If they required 2FA every time you wanted to modify JS then it couldn't propagate automatically. Just requiring 2FA when you first log in wouldn't help, of course.

        • msla a month ago

          More to the point, if they required 2FA every time you tried to modify the JS, nobody would do it because it would be too annoying. "Username, password... oh, the 2FA just timed out, gotta wait for the next one... what, that doesn't work? Does it want the old one? Oh... now it wants the next one... just a second... "

        • j45 a month ago

          2FA may also require a level of KYC that Wikipedia isn't after, and advocating for 2FA might indirectly advocate for a lot more things than just 2FA.

          • dns_snek a month ago

            KYC? I'm talking about standard 2FA methods like Time-based OTP codes.

        • zelphirkalt a month ago

          But only one person needs to authenticate to edit. The code will still run for everyone loading it.

tantalor a month ago

Nice to see jQuery still getting used :)

lifeisstillgood a month ago

I completely understand marking the software that controls drinking water as critical infrastructure, but at some point a state-based cyber attack that just wipes Wikipedia off the net is deeply damaging to our modern society’s ability to agree on common facts…

Just now I thought, “if Wikipedia vanished, what would it mean?” It’s not on the level of safe drinking water, but it is a level.

  • GuB-42 a month ago

    > if Wikipedia vanished what would it mean …

    That someone would need to restore some backups, and in the meantime, use mirrors.

    Seriously, not that big of a deal. I don't know how many copies of Wikipedia are lying around but considering that archives are free to download, I guess a lot. And if you count text-only versions of the English Wikipedia without history and talk pages, it is literally everywhere as it is a common dataset for natural language processing tasks. It is likely to be the most resilient piece of data of that scale in existence today.

    The only difficulty in the worst case scenario would be rebuilding a new central location and restarting the machinery with trusted admins, editors, etc... Any of the tech giants could probably make a Wikipedia replacement in days, with all data restored, but it won't be Wikipedia.

  • lyu07282 a month ago

    There are so many mirrors anyway, and it's trivial to get a local copy. What is much more concerning is government censorship and age-verification/digital-ID laws, where which articles you read becomes part of the government record the police see when they pull you over.

  • __turbobrew__ a month ago

    You can download the entirety of wikipedia and store it in your own offline immutable backup.

    • mrguyorama a month ago

      The dump of English Wikipedia is 26 GB compressed, and completely usable in that compressed format plus a small index file.

      That's small enough to live on most people's phones. It's small enough to be a single BluRay. Maybe Wikipedia should fund some mass printings.

      What you do not get however is any media. No sounds, images, videos, drawings, examples, 3D artifacts, etc etc etc. This is a huge loss on many many many topics.
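
      The "usable while compressed" trick is the multistream dump format: the file is a series of independently compressed bz2 streams, and the index file maps article titles to the byte offset of the stream containing them, so you can decompress one stream without touching the rest. A toy sketch of the mechanism (the real dump's index format and stream contents differ; this just demonstrates offset-based random access):

```python
import bz2

def make_multistream(chunks):
    """Concatenate independently compressed bz2 streams, recording each
    stream's byte offset (the role played by the dump's index file)."""
    data, offsets = b"", []
    for chunk in chunks:
        offsets.append(len(data))
        data += bz2.compress(chunk.encode())
    return data, offsets

def read_stream(data, offset):
    """Decompress the single bz2 stream starting at `offset`; the
    decompressor stops at that stream's end-of-stream marker."""
    return bz2.BZ2Decompressor().decompress(data[offset:]).decode()
```

      With the real dump you'd seek to the offset in the file rather than slice a bytes object, but the principle is the same.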

  • tempaccount5050 a month ago

    What you're suggesting is literally impossible. There are plenty of mirrors and random people that download the thing in its entirety. The entire planet would have to be nuked for that to be possible.

  • xandrius a month ago

    Don't worry, I personally have an offline backup of the English Wikipedia on my phone.

  • Aperocky a month ago

    All persistent data should have backup.

    It's not a high bar.

  • CaptainNegative a month ago

    > but at some point a state based cyber attack that just wipes wikipedia off the net is deeply damaging to our modern society’s ability to agree on common facts

    Haven't we hit that point already with bad faith (and potentially government-run) coordinated editing and voting campaigns, as both Wales and Sanger have been pointing out for a while now?

    See, for example,

    * Sanger: https://en.wikipedia.org/wiki/User:Larry_Sanger/Nine_Theses

    * Wales: https://en.wikipedia.org/wiki/Talk:Gaza_genocide/Archive_22#...

    * PirateWires: https://www.piratewires.com/p/how-wikipedia-is-becoming-a-ma...

    • wizzwizz4 a month ago

      > Haven't we hit that point already with bad faith (and potentially government-run) coordinated editing […] campaigns,

      Yes, this is a real phenomenon. See, for instance, https://en.wikipedia.org/wiki/Timeline_of_Wikipedia%E2%80%93...: the examples from 2006 are funny, and the article's subject matter just gets sadder and sadder as the chronology goes on.

      > and voting campaigns

      I'm not sure what you mean by this. Wikipedia is not a democracy.

      > as both Wales and Sanger have been pointing out

      {{fv}}. Neither of those essays make this point. The closest either gets is Sanger's first thesis, which misunderstands the "support / oppose" mechanism. Ironically, his ninth thesis says to introduce voting, which would create the "voting campaign" vulnerability!

      These are both really bad takes, which I struggle to believe are made in good faith, and I'm glad Wikipedians are mostly ignoring them. (I have not read the third link you provided, because Substack.)

      • yorwba a month ago

        That Wikipedia is not a democracy doesn't mean there are no votes and no elections. https://en.wikipedia.org/wiki/Wikipedia:Administrator_electi...

        • wizzwizz4 a month ago

          That's a relatively recent process: there have only been 3 such elections ever. They have measures in place to try to curb abuse of the process, and it cannot really be used to introduce bias (since an administrator exhibiting bias would leave a public trail of evidence attesting to that bias). That said, thanks for letting me know about it.

  • streetfighter64 a month ago

    If you're using wikipedia to "agree on common facts" I think you might have bigger problems...

    • hnfong a month ago

      Not the GP, and I don't believe in the existence of "common facts" in general, but Wikipedia is indeed a good place to figure out what other people might agree on as common facts...

      • streetfighter64 a month ago

        Well, I'm not sure either what the term "common facts" is supposed to mean, but Wikipedia is not a good place to look for what "other people" think, unless by "other people" you mean a small set of Wikipedia power users. Just like traditional newspapers are controlled by a small set of editors who decide what's worth publishing, so is Wikipedia.

        https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_no...

mafriese a month ago

I’m not saying that this is related to Wikipedia ditching archive.is, but the timing in combination with the Russian messages is at least… weird.

  • worksonmine a month ago

    And they probably used mind control to make the admin run random user scripts on his privileged account as well; the capabilities of Russian hackers are scary.

    /s

    It is just another human acting human again.

    • tonymet a month ago

      Admin tasks are public in phabricator so it would be trivial to review chores and place malware in the chore's scope

      • worksonmine a month ago

        Which only makes it that much more important to review everything you're running with a privileged account, right?

        And if it really is as trivial as you say it should be fixed ASAP.

        • tonymet a month ago

          I mean it's trivial for any attacker to discover admin tasks and know where to place malicious code for the admin tasks to execute it.

pixl97 a month ago

>Cleaning this up

Find the first instance and reset to the backup before then. An hour, a day, a week? Doesn't matter that much in this case.

  • bbor a month ago

    It is true that they have a particularly robust, distributed backup system that can and has come in handy, but FWIW the timing matters to them. English Wikipedia receives ~2 edits per second, or 172,800 per day. Many of them are surely minor and/or automated, but still: 1,036,800 lost edits (about six days' worth) is a lot!

    • shevy-java a month ago

      Are they really lost though? I think they should not be lost; they could be stored in a separate database additionally.

      • derefr a month ago

        In fact, as long as the malware is only doing deletes, you can merge the two "timelines" by restoring the snapshot and then replaying all the edits while ignoring the deletes. Lost deletes really aren't much of a problem!
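
        That merge can be sketched as: restore the snapshot, then replay the post-snapshot edit log while dropping deletes and anything flagged as malicious. A toy model (the log shape and the `is_malicious` predicate are hypothetical stand-ins for whatever the forensics identifies):

```python
def merge_timelines(snapshot, edit_log, is_malicious):
    """Restore `snapshot` (title -> text), then replay `edit_log` in
    order, skipping deletes and any edit flagged as malicious."""
    pages = dict(snapshot)
    for op in edit_log:
        if is_malicious(op) or op["action"] == "delete":
            continue  # drop the worm's deletions/injections
        pages[op["title"]] = op["text"]
    return pages
```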

    • Kiboneu a month ago

      Filesystem & database snapshots are very cheap to make, you can make them every 15 minutes. You can expire old snapshots (or collapse the deltas between them) depending on the storage requirements.
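
      The expire-and-collapse idea is just snapshot thinning: keep every snapshot from the recent window, then progressively fewer as they age. A minimal policy sketch (timestamps in seconds; the window sizes are arbitrary and this is not tied to any particular filesystem):

```python
def thin_snapshots(timestamps, now, keep_all_within=3600, hourly_for=86400):
    """Keep everything from the last hour, one snapshot per hour for
    the last day, and one per day before that. Returns kept timestamps."""
    kept, seen = [], set()
    for ts in sorted(timestamps, reverse=True):  # newest first wins its bucket
        age = now - ts
        if age <= keep_all_within:
            kept.append(ts)
            continue
        bucket = ("h", ts // 3600) if age <= hourly_for else ("d", ts // 86400)
        if bucket not in seen:
            seen.add(bucket)
            kept.append(ts)
    return sorted(kept)
```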

      • squeaky-clean a month ago

        That doesn't really help against an attack that takes some time to spread, though. If the attack was active for, let's say, 6 hours, then ~43,000 legitimate edits happened between the last "clean" snapshot and the discovery of the attack. If you just revert to the last clean snapshot, you lose those legitimate edits.

clcaev a month ago

We should be using federated organizational architectures when appropriate.

For Wikipedia, consider a central read-only aggregated mirror that delegates the editorial function to specialized communities. Common, suggested tooling (software and processes) could be maintained centrally, but each community might benefit from more independence. This separation of concerns may be a better fit for knowledge collection and archival.

Note: I edited to stress central mirroring of static content with delegation of editorial function to contributing organizations. I'm expressly not endorsing technical "dynamic" federation approaches.

  • brcmthrowaway a month ago

    Exactly. Wikipedia should be used on ipfs

i_think_so a month ago

> Hitting MediaWiki:Common.js is the absolute nightmare scenario for MediaWiki deployments because that script gets executed by literally every single visitor

...except for us security wonks who have js turned off by default, don't enable it without good reason, disable it ASAP, and take a dim view of websites that require it.

Not too many years ago this behavior was the domain of Luddites and schizophrenics. Today it has become a useful tool in the toolbox of reasonable self-defense for anybody with UID 0.

Perhaps the WMF should re-evaluate just how specialsnowflake they think their UI is and see if, maybe just maybe, they can get by without js. Just a thought.

  • bbor a month ago

    It warms my heart that there's basically a 0% chance they ever approach this camp's viewpoint, based on the Herculean effort it took to switch over to a slightly more modern frontend a few years back. I'm glad you don't think of yourself as a Luddite, but I think you're vastly overstating how open people are to a purely-static web.

    Also, FWIW: Wikipedia is "specialsnowflake". If it isn't, that's merely because it was so specialsnowflake that there's now a healthy ecosystem of sites that copied its features! It's far, far more capable than a simple blog, especially when you get into editing it.

    • i_think_so a month ago

      Ok, fair point. I presumed that this crowd would be far more familiar with the capabilities of HTML5 and dynamic pages sans js than most. (Surely more familiar than I, who only dabble in code by comparison.)

      No, I'm not suggesting we all go back to purely-static web pages, imagemap gifs and server side navigation. But you're going to have a hard time convincing me that I really truly need to execute code of unknown provenance in my this-app-does-everything-for-me process just to display a few pages of text and 5 jpegs.

      And for the record, I've called myself a Technologist for almost 30 years now. If I were a closet Luddite I'd be one of the greatest hypocrites of human history. :-)

      • pabs3 a month ago

        I think the Luddites were Technologists too, and that put them in the best position to understand the downsides of tech. Same goes for you.

        • i_think_so a month ago

          Big thanks for the recognition. Going against the hype colossus makes one feel like a lone voice in the wilderness.

          • pabs3 a month ago

            You're not alone, there are dozens of us left :)

    • zelphirkalt a month ago

      It would not have hurt to make a version of Wikipedia that works without JS for the most part, including everything important. However, that requires a mindset of supporting static pages, which is mostly what Wikipedia should consist of, and a skill set that is not so common among web developers these days. Such a static version would also be much easier to test, since the testing framework would only need to make simple requests, instead of awaiting client-side JS execution that mutates content on the page.

  • cesarb a month ago

    > and see if, maybe just maybe, they can get by without js.

    Unless it changed recently (it's too slow right now for me to check), Wikipedia has always worked perfectly fine without JS; that includes even editing articles (using the classic editor which shows the article markup directly, instead of the newer "visual" editor).

    Edit: I just checked, and indeed I can still open the classic edit page even with JS blocked.

tonymet a month ago

It's Wikipedia's 25th birthday, but their security discipline is still very much circa 2001: no code signing, no SBOM / supply-chain security. They only recently activated 2FA for admins (after another breach), and most admins are anonymous.

Let's hope they allocate more of the $200M+ / year to security infra.

Dwedit a month ago

I just checked a wiki, and the "MediaWiki:Common.js" page there was read-only, even for wikisysop users.

  • bawolff a month ago

    You need to be a special type of admin, called "interface-admin" to edit it. Normal admin is not enough.

TZubiri a month ago

There are thousands of copies of the whole Wikipedia in SQL form, though. IIRC it's just around 47GB.

  • eblume a month ago

    Correct. Not sure about a sql archive, but the kiwix ZIM archive of the top 1M English articles including (downsized but not minimized) images is 43GiB: https://download.kiwix.org/zim/wikipedia/

    And the entire English Wikipedia with no images is, interestingly, also 43GiB.

garbagecreator a month ago

Another reason to make disabling JS the default on all websites; sites should offer a version that works without JS, especially those implemented in obsolete garbage tech. If it's not an XSS from a famous website, it will be an exploit from a sketchy one.

sciencejerk a month ago

I wonder if any poisoned data made it into LLM training data pipelines?

  • ibejoeb a month ago

    Interesting angle. Everyone has already pointed out that there are backups basically everywhere, and from an information standpoint, shaving off a day (or whatever) of edits just to get to a known-good point is effectively zero cost. But I wonder what the cost is of the potentially bad data getting baked into those models, and if anyone really cares enough to scrap it.

krater23 a month ago

Just a thought.

Who wins the most from a Wikipedia outage and has questionable moral views? The same people who currently struggle to find paying customers for their services.

The large AI companies.

shevy-java a month ago

It is unfortunate that Wikipedia is under attack. It seems as if there are more malicious actors now than, say, 5 years ago.

This may be unrelated, but I have also noticed more attacks on e.g. libgen, Anna's Archive and the like. I am not at all saying those are similar to Wikipedia as such, mind you, but it really seems as if more actors are now targeting people's freedom (e.g. freedom of access to any kind of information; age restriction aka age "verification" taps into this too).

  • jibal a month ago

    Wikipedia is not under attack. Some stupid admin running with full privileges unsandboxed ran a test that grabbed and ran random user scripts, and one of them just happened to be this 2 year old malicious script.

    • tonymet a month ago

      that's a common attack vector -- like leaving malware USB sticks on the ground, knowing an admin will pick one up and plug it in.

      Phabricator reveals the ops tasks that WMF admins perform, so attackers can drop malware in common locations and bet on them getting run from time to time.

ForOldHack a month ago

It's been opened. Although I have issues with Wikipedia, being a creep is not a valid response.

0xWTF a month ago

Looking forward to the postmortem...

alansaber a month ago

Not even social engineering, just a guy running public JS scripts. Very oldschool.

j45 a month ago

It's reassuring to know Wikipedia has these kinds of security mechanisms in place.

amai a month ago

Why am I not surprised that the malicious script was from ruwiki?

tantalor a month ago

"Закрываем проект" is Russian for "Closing the project"

lynx97 a month ago

Time to spend some of this excess money on a bit of security tightening? I hear we're talking about a 9 digit figure.

nixass a month ago

I can edit it

Kiboneu a month ago

GOD am I thankful to my old self for disabling js by default. And sticking with it.

edit: lol downvoted with no counterpoint, is it hitting a nerve?

  • Imustaskforhelp a month ago

    > edit: lol downvoted with no counterpoint, is it hitting a nerve?

    I have upvoted ya FWIW, and I don't understand either why people would downvote ya.

    I mean, if websites work for you with JS disabled and you're fine with it, fair enough. JS is a threat vector, somewhat.

    Many of us are unable to live our lives without JS, though. I used to use Librewolf, and complete and total privacy started feeling a little too uncomfortable.

    Now I am on zen-browser FWIW, which I do think has some privacy improvements over stock Firefox, though I can't say that for sure. Mainly I use Zen because it looks really good and I just love it.

    • Kiboneu a month ago

      > I mean, if websites work for you while disabling js and you are fine with it. Then I mean JS is an threat vector somewhat

      It's also been torture, I definitely don't prescribe it. :P Like you say, it's a sanity / utility / security tradeoff. I just happen to be willing to trade off sanity for utility and security.

      And yes, unfortunately I have to enable JS for some sites -- the default is to leave it disabled. And of course with cloudflare I have to whitelist it specifically for their domains (well, the non analytics domains). But thankfully wikipedia is light and spiffy without the javascript.

    • pluralmonad a month ago

      What is uncomfortable about Librewolf? I thought it was basically FF without telemetry and UBO already baked in?

      • Imustaskforhelp a month ago

        I appreciate Librewolf, but when I used it, IIRC its fingerprinting protections were too strict for some websites, and you definitely have to tone them down a bit in the settings. Canvases didn't work, and there were some other issues too.

        That being said, once again, Librewolf is amazing software. I can see myself using it again, but I just find Zen easier to recommend, plus uBO obviously.

        Personally these are aesthetic preferences more than anything. I just really like how Zen looks and feels.

        The answer is, sort of, just personal preference, that's all.

epicprogrammer a month ago

[flagged]

  • marginalia_nu a month ago

    > [...] is incredibly insidious. It really exposes the foundational danger of [...]

    My LLM sense is tingling.

    • amenhotep a month ago

      I opened his post history and scrolled down a bit and literally the first thing I saw was a comment starting with "You're absolutely right" lol

    • sefrost a month ago

      Yeah, it's like the really high-energy way it's written or something? Can't quite put my finger on it.

  • quantum_magpie a month ago

    Could you point to where you found the details of the exploit? It's not in the linked page. Really interested, especially the part about it modifying itself and propagating to other users.

    • homebrewer a month ago

      The fact of this obvious LLM slop being at the top of this discussion is incredibly insidious. The "facts" it mentions are made up. Has this vapid style finally become so normalized that nobody is seeing it anymore?

      • 256_ a month ago

        I didn't even notice it until you pointed it out, but I checked that account's comment history and it uses em dashes. Also, "the database history itself is the active distribution vector" is just semantic nonsense.

        I still have a basic assumption that if something I'm reading doesn't make much sense to me, I probably just don't understand it. Over the last few years I've had to get used to the new assumption that it's because I'm reading LLM output.

        • homebrewer a month ago

          I've also always used em-dashes, it's not a very reliable indicator. That style is a dead giveaway, though. Some of its comments seem to be written by a human, but several definitely aren't.

          I've been spending less and less time here, the moderation is obviously overwhelmed and is losing the battle.

          https://aphyr.com/posts/389-the-future-of-forums-is-lies-i-g...

          • jddj a month ago

            The dead internet arrived slowly, then all at once

        • jibal a month ago

          It's not semantic nonsense, it's the truth per the incident reports ... go read the links that have been added up top.

      • infinitewars a month ago

        That user epicprogrammer's comment history suggests alignment with the Musk/Thiel/Anduril/DoW/anti-Anthropic crowd, who are incessantly trying to damage Wikipedia's reputation to push a "Grokipedia" where they can define the narrative.

        I wouldn't be surprised if that group were the origin of this attack too.

      • JKCalhoun a month ago

        Perhaps we're at last watching the internet die.

        • NoMoreNicksLeft a month ago

          Yes, but we did that over the last 15 years. We just never realized that's what we were seeing.

          It only clicked for me a few weeks ago, in one thread or another here when I realized that no one could ever do what Google did once: Cloudflare and other antibot technologies have closed off traditional search-as-the-result-of-web-crawling permanently. It's not that no one will do it because they think there's no money in it, or that no one will do it because the upfront costs are gigantic... literally it can no longer be done.

          The internet died.

          • Imustaskforhelp a month ago

            There are still a few options. I recently had the idea of running queries across 9 search engines.

            Mojeek is a good independent search engine; it isn't the best, but in that Hacker News comment analysis I was doing, I found it to be the only one that worked for the case.

            Brave exists too.

            I know the situation is very critical/dire, though there is still some chance. Albeit quite small.

            Mojeek, IIRC, has been operated by a single guy for 15 years.

      • jibal a month ago

        The facts are not made up--check the incident reports.

        Most claims of LLM authorship are erroneous.

CloakHQ a month ago

[flagged]

  • foltik a month ago

    Stop posting this AI-generated word salad.

    This was an XSS attack. A malicious script was executed inside an admin’s already authenticated browser context, allowing said malicious script to place itself into public facing pages. Nothing to do with any browser fingerprinting nonsense you’re going on about.

    • tadfisher a month ago

      You can report them via hn@ycombinator.com

      I've seen a few obvious LLM spammers get banned minutes after reporting. Dang does good work.

cc-d a month ago

we pumpin

256_ a month ago

Here before someone says that it's because MediaWiki is written in PHP.

  • Dwedit a month ago

    PHP is the language where "return flase" causes it to return true.

    https://danielc7.medium.com/remote-code-execution-gaining-do...

    • m4tthumphrey a month ago

      Also the language that runs half of the web.

      Also the language that has made me millions over my career with no degree.

      Also the language that allows people to be up and running in seconds (with or without AI).

      I could go on.

      • dspillett a month ago

        > Also the language that has made me millions over my career with no degree.

        Well done.

        > Also the language that allows people to be up and running in seconds (with or without AI).

        People getting up and running without any opportunity to be taught about security concerns (even ones as simple as the risks of inadequate input validation), especially considering the infamous inconsistency in PHP's APIs which can lead to significant foot-guns, is both a blessing and a curse… essentially a precursor to some of the crap now being published via vibe-coding with little understanding.

      • jjice a month ago

        PHP is a fine language. It started my career. That said, it has a lot of baggage that can let you shoot yourself in the foot. Modern PHP is pretty awesome though.

        • radium3d a month ago

          Pretty sure we've seen people coding in essentially every other programming language also shoot themselves in the foot.

          • Sohcahtoa82 a month ago

            Every language has foot-guns of some sort. The difference is how easy it is to accidentally pull the trigger.

            PHP makes it easy.

            • radium3d 23 days ago

              Back in the day people were all about languages like C that made it incredibly easy too.

              • Sohcahtoa82 23 days ago

                We didn't have anything better unless you wanted to take a massive performance hit and/or lose a ton of flexibility and capability.

          • jjice a month ago

            Yeah of course PHP isn't the only programming language you can write bugs in. I don't think you can make it impossible to shoot yourself in the foot, but PHP gives you more opportunities than some other languages, especially with older PHP standard library functions.

            One thing I particularly hate is when functions require calling another function afterwards to get any errors that happened, like `json_decode`. C has that problem too.

            Problems don't make it a _bad_ programming language. All languages have problems. PHP just has more than some other languages.

      • ramon156 a month ago

        The language is not what made you, nor the product. You could have written the same thing in RoR; PHP was just there first, and that's why it still exists.

        • stackghost a month ago

          PHP performance is significantly better than Ruby on Rails, which I think plays a part in its continued popularity.

      • ChrisMarshallNY a month ago

        I use it on the backends of my stuff.

        Works great, but, like any tool, usage matters.

        People who use tools badly, get bad results.

        I've always found the "Fishtank Graph" to be relevant: https://w3techs.com/technologies/history_overview/programmin...

        • mannykannot a month ago

          People who use tools badly inflict bad results on other people, quite often far more than on themselves.

          • ChrisMarshallNY a month ago

            Yeah. It's funny how companies don't like to hire people that use tools correctly, but insist on creating tools that allow them to hire cheaper, less-qualified people.

            PHP works fine, if you're a halfway decent programmer. Same with C++.

      • onion2k a month ago

        Also the language that runs half of the web.

        The bottom half.

        ;)

      • cwillu a month ago

        Try not to take criticisms of tools personally. Phillips head screws are shit for a great many applications, while simultaneously being involved in billions of dollars of economic activity, and having a driver that everyone keeps on hand.

      • theamk a month ago

        Yep, that's the sad truth: a language's popularity often has nothing to do with its security properties. People will happily keep churning out insecure junk as long as it makes them millions, botnets and data compromises be damned.

      • m4tthumphrey a month ago

        I can't edit nor be bothered to reply to all of the negative responses so I'll put it here.

        Pretty much all of you missed the larger point. PHP is what allowed me to not work in retail forever, buy a forever house, and never have to worry about losing my job (this may change in the future with AI) or being at risk of redundancy, having chosen to only work for small, "normal", well-run, profitable businesses.

        Unless you're building a hyper-scale product, it does the job perfectly. PHP itself is not a security issue; using it poorly is, and any language can be used poorly. PHP is still perfectly suitable for web dev, especially in 2026.

      • radium3d a month ago

        PHP is insanely great, and very fast. The hate has no clout.

      • msla a month ago

        > Also the language that has made me millions over my career with no degree.

        "You can't hate rum, it's made me so much money!"

      • jasonjayr a month ago

        Perl still runs the other half?

    • 420official a month ago

      FWIW this was fixed in 2020

      • dspillett a month ago

        I've not used PHP in anger in well over a decade, but if the general environment out there is anything like it was back then, there are likely a lot of people, mostly on cheap shared hosting arrangements, running PHP versions older than that and for the most part knowing no better.

        That isn't the fault of the language, of course, but it is a valid reason for some of the "ick" reaction some get when it is mentioned.

        • Joel_Mckay a month ago

          PHP had its issues like every language, but also a minimal memory footprint, XML/SOAP parser, and several SQL database cursor options.

          Most modern web languages like nodejs are far worse due to dependency rot, and poor REST design pattern implementations. =3

          • dspillett a month ago

            > languages like nodejs are far worse due to dependency rot

            Yep. Node-based projects sometimes get an "ick" reaction from me similar to PHP ones for that reason. In this case too it isn't really the language's fault, but the way people have built the ecosystem around it.

    • ale42 a month ago

      Except that in contemporary PHP this doesn't work any more:

        PHP Warning:  Uncaught Error: Undefined constant "flase" in php shell code:1

      This means game over; the script stops there.

pKropotkin a month ago

[flagged]

  • softskunk a month ago

    care to elaborate?

    • yomismoaqui a month ago

      If I had to guess it's the typical "people with power behaving like dicks".

      • pKropotkin a month ago

        Absolutely. We know plenty of examples where these arseholes trash genuinely valuable contributions from volunteers just on a whim.

yabones a month ago

[flagged]

  • gadders a month ago

    "The Wikimedia Foundation, which operates Wikipedia, reported a total revenue of $185.4 million for the 2023–2024 fiscal year (ending June 2024). The majority of this funding comes from individual donations, with additional income from investments and the Wikimedia Enterprise commercial API service."

    (Unless this was satire and I missed it)

    • josefresco a month ago

      What's the operating budget for other websites with comparable traffic? Without context $185 million seems like a lot, but compared to what? Reddit's operating budget for the same timeframe was $1.86 billion.

      • gadders a month ago

        I agree, but it's not a shoestring budget. They also seem to run a surplus every year:

        The Wikimedia Foundation (WMF) maintains a significant financial surplus and a growing, healthy balance sheet, with net assets reaching approximately $271.5 million in the 2023–2024 fiscal year. This surplus is largely driven by consistent, high-volume, small-dollar donations, with total annual revenue often exceeding $180 million.

        • josefresco a month ago

          Surplus is a good thing, right? Long-term stability, responsible financial management, healthy margins. If they said one year, "You know what? We're good on donations this year," the flow of donations would never restart.

          • gadders a month ago

            Very prudent, but far from "operating on a shoestring".

    • skrtskrt a month ago

      I think the question might be how much money, effort, and expertise is going into the platform itself.

  • cm2012 a month ago

    Wikipedia probably actively wastes $100m per year

    • ale42 a month ago

      On what? I'd be curious to read more (documented sources)

      • kbolino a month ago

        Where and how they spent their money is on p. 21 of this PDF [1] which can be obtained from this official source [2]. This is just a high-level breakdown, but it does illustrate that, for example, more than twice as much is spent on "Donation processing expenses" ($7.5M) as "Internet hosting" ($3.1M), and that the largest line item, by far, is "Salaries and benefits" ($106M).

        [1]: https://wikimediafoundation.org/wp-content/uploads/2025/04/W...

        [2]: https://wikimediafoundation.org/annualreports/2023-2024-annu...

        • streetfighter64 a month ago

          Well, obviously salaries will be the highest expense in any organization like this. The more interesting question is whether those are salaries for security engineers or for teachers at an African women's coding bootcamp (yes, they did spend money on that, and yes, it's probably useful, but hardly what people think of when they see those "donate now to keep Wikipedia alive" banners). A big percentage probably goes to their CEO, who does who knows what.

          • kbolino a month ago

            There are a couple of ways to approach this information. One is to compare to the past. For example, comparing with 2008-2009 [1], they now spend 3.75 times as much on hosting, but 48 times as much on salaries, illustrating a more-than-tenfold relative growth in salaries compared to hosting. While hosting is not now nor ever was their only relevant expense, it is a good anchor point.

            Another key difference over the last 15 years has been the introduction of awards and grants, which didn't exist then but now comprise $26.8M (15%) of their expenditures. This is where most of the ideological/controversial spending actually goes, rather than the salaries per se, but even more to the point, this one line item is more than 3 times their entire inflation-adjusted budget from 15 years ago ($5.6M times 150% CPI = $8.4M) and is still more than if we adjusted their entire budget using the hosting cost as an index ($5.6M times 3.75 = $21M).

            [1]: https://upload.wikimedia.org/wikipedia/commons/a/a4/WMF_Annu...
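
            The two adjustments being compared reduce to quick arithmetic (figures taken from the comment above; this is just a sanity check, not an audited analysis):

```python
# Two ways of scaling WMF's 2008-2009 budget to today, using the
# figures quoted in the comment above (all amounts in $M).
budget_2009 = 5.6        # total 2008-2009 expenses
cpi_factor = 1.50        # ~150% cumulative consumer-price inflation
hosting_factor = 3.75    # growth of the hosting line item since then

cpi_adjusted = budget_2009 * cpi_factor          # 8.4
hosting_adjusted = budget_2009 * hosting_factor  # 21.0

grants_now = 26.8        # current awards-and-grants line item
print(grants_now > 3 * cpi_adjusted)    # True: over 3x the CPI-adjusted budget
print(grants_now > hosting_adjusted)    # True: over the hosting-indexed budget
```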

            • streetfighter64 a month ago

              Look, I'm not defending wikipedia, I'd just like to point out that comparing hosting to salaries is a quite strange metric. Hosting is cheap and relatively constant, adding features to the site or paying admins to maintain the quality of edits is scalable. How does throwing more money at hosting make a better product? It's not like the servers can't handle the requests.

              Using hosting costs as an index is nonsensical. I wasn't able to find numbers for 2009, but since 2015 the monthly page views have remained almost exactly constant. So you might as well claim that they're vastly overpaying for hosting since inflation from 2008 is way less than 3.75x.

              • kbolino a month ago

                I picked hosting because it's a line item that exists across all of their budgets, it's a rough proxy for a web business's non-salary expenses, it's a big part of what you think you're donating to based upon Wikipedia's own language in their fundraising drives, and if nothing else, it's way more forgiving to the growth of their expenses than consumer price inflation is.

                Ultimately every person has to decide for themselves whether they think WMF is a worthy recipient for their donations, but it is in no way operating on a shoestring budget nor staffed by volunteers anymore.

  • Markoff a month ago

    Please stop spreading lies. Wikipedia is swimming in money; they have enough for years or even decades if they would not waste it on various seminars and other nonsense unrelated to running Wikipedia.

  • SoftTalker a month ago

    Society and culture were fine before Wikipedia. I could argue that they have degraded substantially since Wikipedia came into being (but correlation is not causation, in either direction).

Uhhrrr a month ago

How do they know? Has this been published in a Reliable Source?

  • nhubbard a month ago

    This is the official Wikimedia Foundation status page for the whole of Wikipedia, so it's a reliable primary source.

    • vova_hn2 a month ago

      Actually, usage of primary sources is kinda complicated [0], generally Wikipedia prefers secondary and tertiary sources.

      [0] https://en.wikipedia.org/wiki/Wikipedia:No_original_research...

      • jkaplowitz a month ago

        Yeah, but the purpose of an encyclopedia like Wikipedia (a tertiary source) is to relatively neutrally summarize the consensus of those who spend the time and effort to analyze and interpret the primary sources (and thus produce secondary sources), or if necessary to cite other tertiary summaries of those.

        In a discussion forum like HN, pointing to primary sources is the most reliable input to the other readers' research on/synthesis of their own secondary interpretation of what may be going on. Pointing to other secondary interpretations/analyses is also useful, but not without including the primary source so that others can - with apologies to the phrase currently misused by the US right wing - truly do their own research.

        • Uhhrrr a month ago

          If you spend any time on Wikipedia, you'll find that secondary sources from an existing list are always preferred. The mandate from the link in GP (https://en.wikipedia.org/wiki/Wikipedia:No_original_research) extends, or at least is interpreted to extend, to actively punishing editors who attempt to analyze or interpret primary sources.

          My original post was a joke about this.

skrtskrt a month ago

Long past time to eliminate JavaScript from existence

  • krisoft a month ago

    You will have a long trek to do that. We have a javascript interpreter deployed at the second Sun-Earth Lagrange point.

    https://www.theverge.com/2022/8/18/23206110/james-webb-space...

    • dgxyz a month ago

      I live happily in the knowledge that in 20000 years when that eventually drifts off into another system and is picked up by aliens that they will reverse engineer it and wonder why the fuck '5'-'4'=1

  • msla a month ago

    Yep, WASM is so much more secure.

  • dgxyz a month ago

    This.

    Actually fuck the whole dynamic web. Just give us hypertext again and build native apps.

    Edit: perhaps I shouldn't say this on an VC driven SaaS wankfest forum...

    • rainingmonkey a month ago

      You may be interested in https://geminiprotocol.net/

      • skrtskrt 23 days ago

        A protocol “focused on reading” that doesn’t allow inline images in the document is completely unserious. Images predate text and are 100% essential in most forms of communication.

      • dgxyz a month ago

        Yes that's exactly what we should be using. Totally agree.

    • Ekaros a month ago

      Doing some security work now, and it seems half of my problems exist because some other site gets to run any random code, which might then call my site. And I have to protect against that. I am somewhat annoyed. Why was this design acceptable in the first place?

    • streetfighter64 a month ago

      Imagine if wikipedia was a native app, what this vuln would have caused. I for one prefer using stuff in the browser where at least it's sandboxed. Also, there's nothing stopping you from disabling JS in your browser.

      • dgxyz a month ago

        Wikipedia should be straight hypermedia. Simple.

      • Dylan16807 a month ago

        If it was a native app it wouldn't be grabbing one of the hosted files and running it as code.

        • streetfighter64 a month ago

          Have you never seen a native app's auto-update get hijacked by malware? It happened (yet again) last month [0]

          Tons of native apps also have plugins or addons, which (surprise surprise) is just code downloaded from some central repo, and run with way less sandboxing than JS.

          [0] https://www.bleepingcomputer.com/news/security/notepad-plus-...

          • Dylan16807 a month ago

            That's pretty far from hosting the program in the same place as the content it manages, and from installing fresh versions instantly.

    • dlivingston a month ago

      I mean, sure, but that's never going to happen, so complaining about it is just shaking your fist at the sky. The only way it will change is if the economics of the web change. Maybe that is the economics of developer time (native development becoming easier, faster, and more resilient, and thus cheaper), or maybe dynamic scripting leads to such extreme vulnerabilities that the ease of deployment, development, and consumer usage shifts the macroeconomics of web deployment back toward local.

      But if there's one thing I've learned over the years as a technologist, it's this: the "best technology" is not often the "technology that wins".

      Engineering is not done in a vacuum. Indeed, my personal definition of engineering is that it is "constraint-based applied science". Yes, some of those constraints are "VC buxx" wanting to see a return on investment, but even the OSS world has its own set of constraints - often overlapping. Time, labor, existing infrastructure, domain knowledge.

      • dgxyz a month ago

        I think it will change.

        The entire web is built on geopolitical stability and cooperation, and that is no longer certain. We already have supply chains failing (RAM/storage), meaning we will be hardware-constrained for the foreseeable future. That puts the onus on efficiency, and web apps are NOT efficient, however we deliver them.

        People are also now very concerned about data sovereignty, whereas they previously were not. If it's not in your hands or on your computer, then it is at risk.

        The VC / SaaS / cloud industry is about to get hit very, very hard by this and by regulation. At that point, it's back to native, since delivery will no longer be tied to a network control point.

        I've been around long enough to see the centralisation and decentralisation cycles. We're heading the other way now.

        • dlivingston a month ago

          I think on a high level we're in agreement then. All of those points you mentioned are constraints.

          > "VC / SaaS / cloud industry is about to get hit very very hard via ... regulation"

          can you explain?

          • dgxyz a month ago

            Why? Well mostly due to the unpredictable behaviour of the country which seems to have the control points of most infra these days.

            How? Well the numerous non-US sovereign technology initiatives are going to be incentivised through regulation with local compliance being the only option going forwards.

            As a non-US person, I am already speaking to people at other orgs in a similar space to ours who are looking at options there.

MagicMoonlight a month ago

They have no incentive to improve the site, because they’re a for-profit entity.

Despite the constant screeching for donations, the entire site is owned by a company with shareholders. All the “donations” go to them. They already met their funding needs for the next century a long time ago, this is all profit.

  • charonn0 a month ago

    That's a serious accusation. Can you elaborate? What is the name of the company? Why does the Wikimedia Foundation claim ownership? And if you're referring to the Wikimedia Foundation, then what do you mean by "shareholders"?

  • circlefavshape a month ago

    This is total horseshit.

    /me wonders if you're a bad actor or just delusional

dlcarrier a month ago

I've never understood why client-side execution is so heavy on modern web pages. Theoretically, the cost of executing it is marginal, but in practice, if I'm browsing from a battery-powered device, all that compute not only drains the battery and shortens how long I can use the device between charges, it also adds wear, so I'll have to replace the battery sooner. A lot of web pages are also downright slow, because my phone can only perform tens of billions of operations per second, which isn't enough to responsively arrange text and images (which are composited by dedicated hardware acceleration) through all of the client-side bloat on many modern pages. If there were that much bloat on the server side, the web server would run out of resources under even moderate usage.

There's also a lot of client-side authentication, even for financial transactions, e.g. iOS and Android locally verifying a user's password, or worse yet a PIN or biometric data, then sending approval to the server. Granted, authentication of any kind is optional for credit card transactions in the US, so all the rest is security theater, but if it did matter, this would be the worst way to do it.