rossdavidh 3 months ago

Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses:

1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability.

2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change in the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change.

3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they ever need to apply leverage to, for example, get sanctions lifted

4) Just a series of unconnected errors at big companies

5) Other possibilities?

  • bdd 3 months ago

    #4

    I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4.

    #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets, and those pockets eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue.

    • fossuser 3 months ago

      Yep, this also matches what I've heard through the grapevine.

      Pushing bad regex to production, chaos monkey code causing cascading network failure, etc.

      They're just different accidents for different reasons. Maybe it's summer and people are taking more vacation?

      • degenerate 3 months ago

        I actually like the summer vacation hypothesis. Makes the most sense to me - backup devs handling some things they are not used to.

        • Balgair 3 months ago

          So, a reverse Eternal-September? It'll get better once everyone is back from August vacations?

          • uber-employee 3 months ago

            No, because it’ll only get better until next summer.

        • Avamander 3 months ago

          These outages mean that software only gets more ~fool~ summer employee proof.

      • kenhwang 3 months ago

        I'm more partial to the summer interns hypothesis.

        • jmharvey 3 months ago

          I agree with this, but to be clear, the "summer interns hypothesis" is not "summer interns go around breaking stuff," it's "the existing population of engineers has finite resources, and when the interns and new grads show up, a lot of those resources go toward onboarding/mentoring the new people, so other stuff gets less attention."

        • gowld 3 months ago

          Pretending that junior engineers are the problem is the problem.

          • rapind 3 months ago

            Just checking what your objection is. Is it that you think experience is overrated, or is it just that he was speculating without any evidence?

            • dan-0 3 months ago

              Can't speak for OP, but I can tell you what mine is.

              If you have an intern or a Junior Engineer, they should have a more senior engineer to monitor and mentor them.

              In the situation where a Junior Engineer gets blamed for a screw up:

              1. The Senior Engineer failed in their responsibility.
              2. The Senior Engineer failed in their responsibility.

              A Junior Engineer should be expected to write bad code, but not to put it into production; that's on the Senior. If I hit approve on a Junior Engineer's PR, it's my fault if their code brings the whole system down. If a Junior Engineer had the ability to push code without a review, it's my fault for allowing that. Either way it's my fault, and it shouldn't be any other way. It's a failure to properly mentor. Not saying it doesn't happen, just that it's never the Junior Engineer's fault when it does.

              • deathanatos 3 months ago

                I'd caveat that slightly: only if the senior engineer is not also overburdened with other responsibilities, and the team has the capacity to take on the intern in the first place. I've been on teams where I felt like we desperately needed more FTEs, not interns. But we could hire interns, and not FTEs.

                (I agree with the premise that an intern or junior eng is supposed to be mentored, and their mistakes caught. How else should they learn?)

              • tiredyam 3 months ago

                The amount of time that the summer intern / new grad eats up of seniors' time is the problem. Tech debt that does not get addressed in a timely manner because of mentorship responsibilities is the problem.

            • echelon 3 months ago

              If you don't train new and capable engineers, you'll eventually lose talent due to attrition and retirement. Talent can be grown in-house; engineering companies are much better environments than universities to learn how to build scalable platforms. The cost of acquisition is low, too, because junior engineers can still make valuable contributions while they learn to scale their impact.

            • melq 3 months ago

              If interns are able to take down your infrastructure, then it is the fault of the senior engineers who have designed it in a way that would allow that to happen.

        • bobthepanda 3 months ago

          Rule one of having interns and retaining your sanity is that interns get their own branch to muck around in.

          • jrockway 3 months ago

            Rule one of having a useful intern experience is to get them writing production code as quickly as possible. They check in their first change? Get that thing into production immediately. (If it's going to destabilize the system, why did you approve the CL? You two probably pair programmed the whole thing together.)

            • HeWhoLurksLate 3 months ago

              I completely agree, even if it's something small.

              I'm an intern in a big company with an internal robotics and automation group, and I recently got to wire up a pretty basic control panel, install it, and watch workers use it. That was so cool, and made me appreciate what I was doing a lot more.

          • kenhwang 3 months ago

            Sure. The interns have their own branch, but it doesn't stop them from being disruptive to the human in charge of mentoring them.

          • vorticalbox 3 months ago

            All changes should be in a new branch.

            • devin 3 months ago

              I used to believe this. Having solid lower environments which are identical to production and receive live traffic, where engineers can stage changes and promote up, removes some of the “all things should live on a branch” business. I know that sounds crazy, but it is possible for teams of the right size to go crazy on master as long as the safety nets and exposure to reality are high enough in lower environments.

      • kdelok 3 months ago

        I recall someone saying that holiday periods actually had better reliability for their services, because fewer people were pushing breaking changes...

        I do wonder if it's that the usual maintainers of particular bits and pieces are on vacation and so others are having to step in and they're less familiar or spread too thin.

    • cmroanirgo 3 months ago

      Yes, but it always seems to come down to a very small change with far-reaching consequences. This ongoing Twitter outage, for instance, is due to an "internal configuration change"... and yet that one change has knocked out the whole service.

      It seems that something is being lost over time. In the old days of running on bare metal, yes, servers failed for various reasons, so we added resiliency techniques whose sole purpose was to alleviate downtime. Now we're running highly complex distributed systems that have failed to keep that resiliency up.

      But the fact that all the mega-corps have had these issues seems to indicate a systemic problem rather than unconnected ones.

      Perhaps the connection is management techniques or HR hiring practices? Perhaps high turnover is causing the issue? (Not that I know, of course, just throwing it out there.) That is, are the people well looked after, and do they know the systems being maintained? Even you, who've 'been around the traps' with high-profile companies, have moved around a lot... Was it unhappiness with those companies that caused you to move on? We've seen multiple stories here on HN about how people in the 'maintenance' role get overlooked for promotions, etc. Is this why you move around? So, perhaps the problem is systemic and it's due to management who've got the wrong set of metrics in their spreadsheets and aren't measuring maintenance properly?

      • mlinsey 3 months ago

        I remember all these services being far less reliable in the past. The irony of us talking about the bygone era of stability in the context of Twitter is particularly hilarious.

        I do think that internet services in general are much more mission critical, and the rate of improvement hasn’t necessarily kept up. It used to be not particularly newsworthy if an AWS EBS outage took out half the consumer internet several times per year, or if Google’s index silently didn’t update for a month, or when AOL (then by far the largest ISP in the US) was down nationwide for 19 hours, or the second-biggest messaging app in the world went down for seven days.

        • selestify 3 months ago

          Which app was down for 7 days?

      • bdd 3 months ago

        I don't see the value in lamenting the old days of a few machines, when you could actually name them after Middle Earth characters, install each one individually, and log in to a single machine to debug a site issue. The problems were smaller, and individual server capacity was a meaningful fraction of demand. Now the demand is so high, and the set of functions these big companies need to offer is so large, that it's unrealistic to expect solutions that don't require distributed computing.

        Distributed computing comes with "necessary evils", like (but not limited to) configuration management--i.e. the ability to push configuration, in near real time, without redeploying and restarting--and service discovery--i.e. turning logical service names into a set of actual network and transport layer addresses, optionally with RPC protocol specifics. I refer to them as necessary evils because the logical system image of each is in fact a single point of failure. Isn't it paradoxical? Not really. We then work on making these systems more resilient to the very nature of distributed systems: machine errors. Then again, we're intentionally building very powerful tools that can also enable us to take everything down with very little effort, precisely because they're so powerful. Like the SPoF line above, isn't it paradoxical? Not really :) We then work on making these more resilient to human errors.

        We work on better developer/operator experience. Think about automated canarying of configuration, availability-aware service discovery systems, simulating impact before committing these real-time changes, etc. It's a lot of work and absolutely not a "solved problem" in the sense that a single solution will work for an operation of any scale. We may be great at building sharp tools, but we still suck at ergonomics. When I was at Twitter, a common knee-jerk comment on HN was "WTF? Why do they need 3000 engineers? I wrote a Twitter clone over the weekend." A sizable chunk of that many people work on tooling. It's hard.
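
        To make "automated canarying of configuration" concrete, here is a minimal sketch of the idea in Python. The push_config() and error_rate() stubs stand in for real deployment tooling and metrics pipelines; none of this is any particular company's actual system.

          # Sketch: roll a config change out in widening stages, watching an error
          # metric, and roll back automatically if it regresses. All names and
          # numbers here are illustrative stand-ins.
          import random, time

          def push_config(host, config):      # stub for a real config-push system
              print(f"applied {config!r} to {host}")

          def error_rate(hosts):              # stub for a real metrics query
              return random.uniform(0.0, 0.01)

          def canary_rollout(new_config, old_config, hosts,
                             stages=(0.01, 0.10, 0.50, 1.0),
                             error_budget=0.02, soak_seconds=1):
              baseline = error_rate(hosts)
              applied = []
              for fraction in stages:
                  target = hosts[:max(1, int(len(hosts) * fraction))]
                  for host in target:
                      if host not in applied:
                          push_config(host, new_config)
                          applied.append(host)
                  time.sleep(soak_seconds)    # let the change soak before widening
                  if error_rate(applied) > baseline + error_budget:
                      for host in applied:    # automatic rollback
                          push_config(host, old_config)
                      raise RuntimeError(f"canary failed at {fraction:.0%}; rolled back")

          canary_rollout("timeout=5s", "timeout=2s", [f"web{i:03d}" for i in range(200)])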

        You're pondering if hiring practices and turnover might be related? The answer is an absolute yes. On the other hand, these are the realities of life in large tech companies. Hiring practices change over the years because there's a limited supply of candidates experienced in such large reliability operations, and the industry doesn't mint many of them either. We hire people from all backgrounds and work hard on turning them into SREs or PEs. It's great for the much-needed diversity (race, gender, background, everything) and I'm certain the results will be terrific, but we need many more years of progress to declare success and pose in front of a mission accomplished banner on an aircraft carrier ;)

        You are also wisely questioning whether turnover might be contributing to these outages and prolonged recovery times. Without a single doubt, again the answer is yes, but it's not the root cause. Similar to how hiring changes as a company grows, tactics for handling turnover have to change too. It's not just that people leave the company; within the same company they move on and work on something else. The onus is on everyone, not just managers, directors, and VPs, to make sure we're building things where ownership transfer is 1) possible and 2) relatively easy. With this in mind, veterans in these companies approach code reviews differently. If you have tooling to remove the duty of nitpicking about frigging coding style and applying lints, then humans can give actually important feedback on the complexity of operations, the self-describing nature of code, or even committing changes along with updates to the operations manual living in the same repo.

        I think you're spot on with your questions, but what I'm trying to say with this many words and examples is that nothing alone is the sole perpetrator of outages. A lot of issues come together and brew over time. Good news: we're getting better.

        Why did I move around? Change is what makes life bearable. Joining Twitter was among the best decisions in my career. Learned a lot, made lifelong friends. They started leaving because they were yearning for a change Twitter couldn't offer. I wasn't any different. Facebook was a new challenge; I met people I'd love to work with and decided to give it a try. I truly enjoy life there even though I'm working on higher-stress stuff. Facebook is a great place to work, but I'm sure I can't convince even 1% of the HN user base, so please save your keyboards' remaining butterfly-switch lifetime and don't reply to tell me how much my employer sucks :) I really hope you do enjoy your startup jobs (I guess?) as much as I do my big company one.

        • eecc 3 months ago

          Not sure where you’re going, but my take is that yes, the times for calling servers individually are over.

          But we’re still touching the belly of our distributed systems with very pointed tools as part of the daily workflow. That’s how accidents happen.

          The analogy is clear IMHO; just as we’ve long stopped fiddling daily with the DRAM timings and clock multipliers of the Galadriel and Mordor servers, we should consider abstaining from low level “jumper switching” on distributed systems.

          Of course, this also happened thanks to industry introducing PCI and automated handshaking...

        • wbl 3 months ago

          Those days of yore are when computers did things and we wrote programs that satisfied immediate needs. There was also a social element to it when there were multiple users per machine.

        • thrwayxyz 3 months ago

          >It's great for the much needed diversity (race, gender, background, everything) and I'm certain the results will be great

          So far results are disappointing. At what point do we admit that you need Asperger's to be a good programmer?

    • mastratton3 3 months ago

      lol yes, what's the quote on "Don't assume bad intention when incompetence is to blame"?

      After seeing how people write code in the real world, I'm actually surprised there aren't more outages.

      • jethro_tell 3 months ago

        Well, we have an entire profession of SRE/Systems Eng roles out there that is mostly based on limiting the impact of bad code. Some of the places I've worked with the worst code/stacks had the best safety nets. I spent a while shaking my head wondering how this shit ran without an outage for so long, until I realized that there was a lot of code and process involved in keeping the dumpster fire in the dumpster.

        • devin 3 months ago

          Which do you prefer? Some of the best stacks and code I’ve worked in wound up with stability issues that were a long series of changes that weren’t simple to rework. By contrast, I’ve worked in messy code, complex stacks, that gave great feedback. In the end, the answer is I want both, but I actually sort of prefer “messy” with well thought out safety nets to beautiful code and elegant design with none.

          • jethro_tell 3 months ago

            One thing that stands out from both types of stacks that I've worked with is that, most of the time, doing things simply the first time, without putting in a lot of work to guess what other complications will arise later, tends to produce a stack with higher uptime even if the code gets messy later.

            There are certainly some things to plan ahead for, but if you start with something complex it will never get simple again. If you start with something simple, it will get more complex as time goes by but there is a chance that the scaling problems you anticipated present in a little different way and there's a simple fix.

            I like to say, 'Simple Scales' in design reviews and aim to only add complexity when absolutely necessary.

      • newsbinator 3 months ago

        Hanlon's Razor: https://en.wikipedia.org/wiki/Hanlon%27s_razor

        "Never attribute to malice that which is adequately explained by stupidity."

        • euske 3 months ago

          I always thought this razor should also include "greed". But then, greed is kinda one step closer to malice, and I'm not sure if there's a line.

        • rossdavidh 3 months ago

          Ah, but that's a lot of big corps being more stupid in the last month than last year? If it's two or three more, that's normal variation. We're now at something more like 7 or 8 more. The industry didn't get that much stupider in the last year.

    • aaroninsf 3 months ago

      I will observe, without asserting that it is actually the case,

      that successful executions of #3 should be indistinguishable from #4.

      (And this is maybe a consequence of #1).

    • Diederich 3 months ago

      I've also worked at a couple of the companies involved.

      This is the correct analysis on every level.

    • kwizzt 3 months ago

      How does the fact you worked at those companies relate to #4?

      Edit: I misread the parent and my question doesn't make a lot of sense. Please ignore it :)

      • bdd 3 months ago

        > How does the fact you worked at those companies relate to #4?

        For Facebook, I worked on the incident the previous Wednesday. 9.5 hours of pain...

        And for my past employers, I still have friends there texting the root causes with facepalm emojis.

    • GrumpyNl 3 months ago

      Turned out to be #1: "The outage was due to an internal configuration change, which we're now fixing. Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible."

    • captn3m0 3 months ago

      Can you clarify what redefining problems would mean (with an eg)?

      • GuiA 3 months ago

        Think of computer vision tasks. Until modern deep learning approaches came around, it was built on brittle, explicitly defined pipelines that could break entirely if something minor about the input data changed.

        Then the great deep learning wave of 201X happened, replacing dozens/hundreds of carefully defined steps with a more flexible, generalizable approach. The new approach still has limitations and failure cases, but it operates at a scale and efficiency the previous approaches could not even dream of.

        • MegaButts 3 months ago

          That's not redefining the problem, so much as applying a new technology to solve the same problem. Usually using the flashy new technology decreases reliability due to immature tooling, lack of testing, and just general lack of knowledge of the new approach.

          Also deep learning, while incredibly powerful and useful, is not the magic cure-all to all of computer vision's problems and I have personally seen upper management's misguided belief in this ruin a company (by which I mean they can no longer retain senior staff, they have never once hit a deadline, every single one of their metrics is not where they want it to be, and a bunch of other stuff I can't say without breaking anonymity).

    • idlewords 3 months ago

      FAANG(+T)(-N)(+M)

      • 18pfsmt 3 months ago

        I think we 'bumped heads' at Middlebury in '94, and I think you are in store for an "ideological reckoning" w/in 3 years.

        Pinboard is a great product, so thanks for that. I am surprised you don't have your own Mastodon instance (or do you?).

    • gcbw2 3 months ago

      Since all of them happened during high-profile business hours, I'd guess either #1 or #5.

      For #4 to be the actual cause, outages outside business hours would be more prevalent and longer.

    • iamtheworstdev 3 months ago

      FAANGT = Facebook, Amazon, Apple, Netflix, Google, Tesla?

      • gsich 3 months ago

        Gmafia

      • arrty88 3 months ago

        Add slack to the list

        Edit: and stripe

    • gjs278 3 months ago

      It sounds like you’re the common factor between the outages. Where else have you worked? Maybe we can predict the next failure.

    • loblollyboy 3 months ago

      Wow, such certainty. "I worked at these companies that went down, so I knew and still know everything. I can even rule out the possibility of other organizations which I did not work for screwing with them, because I know everything about those too. And yes, these FANGs are complex organizations, but not so complex that a former employee like myself wouldn't know what the cause of an outage is, or at least isn't. (Hell, I'd fix it, but I have to finish explaining why complexity is not the reason why they are crashing, and the series of ten dollar words that constitute my explanation aren't exactly the quickest to type.)"

      • jkaplowitz 3 months ago

        Former employees and current employees talk via unofficial online and offline backchannels at many companies.

      • bdd 3 months ago

        geez, tough crowd. do you want a ten dollar hug?

        • loblollyboy 3 months ago

          I was just polishing my bit. Not in a bad mood today so much as a bored mood. You seem like you know what you are talking about (yes, I was bored enough to stalk you, too)

          • bdd 3 months ago

            If you are bored one day and around Menlo Park, come have a coffee or ice cream at FB campus. You can troll me in person.

            • 18pfsmt 3 months ago

              Isn't it interesting where this is going? We all want to meet our accusers? I don't care for FB myself, but I appreciate what you all are doing in the larger sense. Cloudflare is my fave of your former employers (since you shared that in this discussion).

          • dang 3 months ago

            Could you please stop posting unsubstantive comments to Hacker News?

    • hexrcs 3 months ago

      Life in tech is like a Quentin Tarantino movie.

      • _jal 3 months ago

        ...except everyone is sitting at desks typing, there's no blood or surf rock or chases or self-indulgent soliloquies, and the cursing is much less creative?

        • jessaustin 3 months ago

          Maybe you're doing it wrong?

        • robohoe 3 months ago

            cursing is much less creative?
          
          I beg to differ.

    • marenkay 3 months ago

      Only one thing to add:

      Tech debt is being accrued in amounts that would make every VC fund wet its pants if tech debt were worth dollars paid out.

    • wybiral 3 months ago

      I've still never seen this much downtime on these systems so it's weird to happen all at once.

      It's possible that they're related without requiring any conspiracy theories or anything. Maybe these companies are just getting too big or too sloppy to maintain the same standard of uptime (compared to the past few years)? Or maybe there's some underlying issue that they're all rushing to fix which justifies the breaking prod changes within the same timeframe.

      But it was weird when it happened to two or three of them. Now we're going on something like 5 massive failures at some of the biggest services online within little over a week...

  • idlewords 3 months ago

    Write a script to fire random events and you will notice they sometimes cluster in ways that look like a pattern.
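
    Something like this quick sketch makes the point; the per-service outage probability and the number of services are made-up numbers, purely for illustration:

      # Independent random outages still produce impressive-looking clusters.
      import random

      random.seed(1)
      services = 20             # hypothetical number of big, closely watched services
      p_outage_per_day = 0.005  # hypothetical chance any one of them breaks on a given day
      days = 365

      outages_per_day = [sum(random.random() < p_outage_per_day for _ in range(services))
                         for _ in range(days)]

      # How many 30-day windows look like "a bad month" with 5+ high-profile outages?
      busy_windows = sum(sum(outages_per_day[d:d + 30]) >= 5 for d in range(days - 30))
      print(f"outages in the year: {sum(outages_per_day)}")
      print(f"30-day windows with >= 5 outages: {busy_windows} of {days - 30}")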

    • rossdavidh 3 months ago

      You know, it would be cool if you found stats on the downtime metrics of these various high-profile recent outages, and calculated the odds of having such a cluster. Statistics is hard, though, and avoiding a "Texas sharpshooter" fallacy would be tricky.

      • wtallis 3 months ago

        "Celebrities die 2.7183 at a time": http://ssp.impulsetrain.com/celebrities.html

        • gcbw2 3 months ago

          So the only takeaway is that now the population at large notices tech companies' outages as much as they notice celebrity deaths?

          • uxp 3 months ago

            "population at large"

            This thread is linked to a status page run by Twitter, on a programming and technology news site. I'm not really seeing how most people that exist in the western/1st world are noticing this. Is there a CNN article, or FoxNews segment on how tech companies are having outages?

            • gcbw2 3 months ago

              Yes, Fox News even suggested it was part of a large coordinated censorship effort against the POTUS :D

              https://www.foxnews.com/tech/twitter-suffers-widespread-outa...

              quote from that url: "The outage came as President Trump was hosting a social media summit with right-wing personalities and tech industry critics who've accused Twitter and other websites of having an anti-conservative bias."

      • mruts 3 months ago

        I mean, we can assume the downtime variance follows a normal distribution. It should be pretty easy to calculate P<.05 with just a little bit of data.
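
        A sketch of that back-of-the-envelope test, with entirely invented historical counts and the normality assumption taken at face value:

          # Hypothetical monthly outage counts across the big services; this month's
          # cluster is compared against them with a simple one-sided normal tail test.
          from statistics import mean, stdev
          from math import erf, sqrt

          monthly_outages = [1, 2, 0, 3, 1, 2, 2, 1, 0, 2, 1, 3]  # made-up past year
          this_month = 7                                          # made-up cluster

          mu, sigma = mean(monthly_outages), stdev(monthly_outages)
          z = (this_month - mu) / sigma
          p_value = 0.5 * (1 - erf(z / sqrt(2)))   # one-sided tail probability

          print(f"mean={mu:.2f} sd={sigma:.2f} z={z:.2f} p={p_value:.2e}")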

        • rossdavidh 3 months ago

          What you say could be true, but I don't know that we can assume it. If downtime requires several things to happen (cascading errors), but those things interact somehow (problem with one makes another more likely), I could imagine it might not be normally distributed. Disclaimer: I Am Not A Statistician.

    • rossdavidh 3 months ago

      Oh, sure. But Apple, Google, Cloudflare, Stripe, Slack, Microsoft, we're getting to more than five even...

      • root_axis 3 months ago

        The logic of the GP still applies though. Sites have outages every day so it is inevitable that some large sites will fail around the same time. Also, we know that Cloudflare and Twitter outages were attributed to configuration changes, probably others have benign explanations as well.

        • rossdavidh 3 months ago

          Sure, but "configuration changes" does not exclude several of these options. For example, is it harder to predict/deal with the consequences of configuration changes than it used to be?

          • root_axis 3 months ago

            Well, the options above cover pretty much every possibility, including the one I'm suggesting.

      • mattwad 3 months ago

        Reddit went down this morning too

        • gpm 3 months ago

          Reddit goes down a lot though in my experience.

        • kabwj 3 months ago

          Reddit being up for 24 hours or generating pages in less than 3 seconds would be noteworthy.

        • gnulinux 3 months ago

          Reddit goes down pretty frequently. It's been that way for years.

      • opencl 3 months ago

        And now Discord is down!

        • gsich 3 months ago

          no loss

    • shobith 3 months ago

      This. I have experienced this phenomenon first-hand multiple times. Complexity amplifies the effect too.

  • JaRail 3 months ago

    First, I think our general uptime metrics are trending upwards. Recovery times tend to be much shorter as well.

    Big services are bigger; more mission-critical parts can fail.

    Continuous development culture is designed with failure as part of the process. We don't spend time looking for obscure issues when they'll be easier to find by looking at metrics. This is fine when a staggered deployment can catch an issue with a small number of users. It's bad when that staggered deployment creates a side-effect that isn't fixed by rolling it back. Much harder to fix corrupted metadata, etc.

    Automated systems can propagate/cascade/snowball mistakes far more quickly than having to manually apply changes.

    We notice errors more now. Mistakes are instantly news.

    • hn_throwaway_99 3 months ago

      > We notice errors more now. Mistakes are instantly news.

      Heck, just look at Twitter itself from its original "Fail Whale" days where there was so much downtime, to now where even this relatively small amount of downtime is the top story on HN for hours.

      • dillonmckay 3 months ago

        So, when it went down, was there a Fail Whale displayed during this most recent incident?

        • Algol 3 months ago

          I think they retired the fail whale some time ago.

          I looked it up: in 2013, because they didn't want to be associated w/ outages.

  • johngalt 3 months ago

    5) Operational reliability is both difficult and unsexy.

    The fancy new feature, increasing traffic, or adding AI to something will generate headlines, accolades, and positive attention. Not having outages is something everyone expects by default. This goes double for work that prevents outages. No one wins awards for what doesn't happen.

    How many medals are pinned on the guys installing fire sprinklers?

    • meristem 3 months ago

      Corollary: Work that prevents outages--or safe work--is SO unsexy it does not get noticed, but work that causes outages is postmortem-ed to death (pun intended).

  • t0astbread 3 months ago

    Or maybe it's because the internet is tendentially becoming just a few companies' data centers? Afaik Twitter moved to GCP a few months ago. Maybe this is another Google outage?

    • azurezyq 3 months ago

      Less likely, since the GCP status page looks fine.

      Hmm, it seems that Twitter already figured it out: configuration change issues again.

      • marenkay 3 months ago

        Probably because we all use Kubernetes and YAML files and 100% of configuration failures are "oh shit, I used two spaces instead of 4, we're fucked".

        • Jach 3 months ago

          Something like this is my bet too; there was a recent post somewhere with a title like "why all outages are due to a configuration change". There are monocultures in site reliability ops at big companies: "configuration over code", but with heavy automation too. From my outside view, it seems there's a tradeoff when you do that between more frequent smaller issues and less frequent bigger issues. It also reminds me of Google's move away from eventual consistency, because with their infrastructure they can make a CP system highly available in practice... except when it isn't, due to a botched configuration change.

    • ionforce 3 months ago

      > tendentially

      Is this a word? You don't mean tangentially? I'm having a crisis right now.

      • t0astbread 3 months ago

        Is it not? Sorry if I got it wrong, English isn't my first language.

        dict.cc (my source of truth for English vocab) says it's a word: https://www.dict.cc/?s=tendentially

        • thfuran 3 months ago

          It's apparently a word but I'd say it's quite uncommon. I played around with google ngram viewer and had a hard time coming up with a word that is less common. But I finally came up with "astrophotographic".

          E: "unsurpassingly" is way down there too

          • t0astbread 3 months ago

            It's common in German, so I figured it wouldn't be uncommon in English. Oh well :)

          • edflsafoiewq 3 months ago

            It's very common in Biblical criticism (transliterated from German).

  • listic 3 months ago

    I (don't) like how you exclude Russia, China, Iran and somebody from your definition of 'us'.

    • Parcle 3 months ago

      His definition of "us" seems to just be "Americans". Which is fine because he's talking about American companies...

    • rossdavidh 3 months ago

      The assumption is that Russia, China, Iran are less dependent on Google, Twitter, etc., in part because some of them aren't allowed to operate in those countries, in part because some of them are much less dominant in those markets. 'Us' means 'people who might care that Twitter (or whoever) is down'.

      • vkou 3 months ago

        Google, Twitter, Reddit, Facebook, etc, all legally operate in Russia.

        • bin0 3 months ago

          But most have regional replacements. WeChat in China, VK (and some Telegram, though it's now blocked?) in Russia. This makes them less reliant on the American originals, which is why governments often encourage home-grown knock-offs.

    • bromuro 3 months ago

      Yes, I have also been hit by the same bad feeling. Thanks for pointing it out.

  • godarni 3 months ago

    Lots of people on vacation this time of year. Would be interesting to see if there is a seasonal component to the reliability of these services.

    • syn0byte 3 months ago

      "Don't forget to occasionally do that thing I mentioned in passing 2 weeks ago, under my breath, during a klaxon alarm test. Otherwise the errors will DDoS the whole cluster. See you in a week, goodluck!"

      Nah - that would never happen.

  • papito 3 months ago

    #1. I think the art of keeping things simple is being lost. These days people will mush together ten different cloud services and 5,000 dependencies just for a Hello World.

  • moret1979 3 months ago

    One possibility for 5): too many KPIs and quarterly goals to be reached, too many corners cut.

    • bastijn 3 months ago

      Obligatory to watch with this comment:

      "Let's deploy to production" https://youtu.be/5p8wTOr8AbU

      • kirubakaran 3 months ago

        You know, I've watched a few of these memes in the past, but this one was especially well done, and timed perfectly with his gestures even!

      • jsjohnst 3 months ago

        The only possible way for me to make it more than 20-30 seconds into that was to mute it. That guy’s laugh is multiple orders of magnitude worse than nails on a chalkboard. Funny story (albeit too real), but man, mute before clicking everyone.

      • sampleinajar 3 months ago

        No idea how I haven't seen this, but it totally made my day.

      • winrid 3 months ago

        This hit close to home. Hilarious. Thanks.

  • humanfactor 3 months ago

    1/2) These are web apps. Big web apps, but web apps nonetheless. We know what can go wrong; there's nothing really new here. How would you quantify "too many pieces to make work"? Is 1,000 too many? 10,000? There are millions of pieces of data on your hard drive and they work fine. In general, the idea of variance can be solved with redundancy. Maybe there are not enough backups at Twitter.

    5/4) Incompetent people led by incompetent people, surrounded by yes men and a drug culture. Also being a company that demonizes conservatives, who are some of the best engineers (scientists are squares, naturally).

    Human error is bound to happen and software is complex, but so are rockets and supply chains. Things can go right and things can go wrong. Usually when they do go wrong, there is human error behind it.

    Does Twitter foster a place where human error can occur more frequently than other places? I don't know. I have my bias about the company and any SJW company, but that's very anecdotal.

    Twitter worked yesterday and it doesn't work today. It doesn't really have to mean anything important except for the fact that there is a blind spot in their process which they need to harden.

    I guess the first person to ask is the dev ops person, then the developer. Something wasn't tested enough. That happens in commercial software; deadlines can't wait.

    3) Russia / China / Iran... stop watching CNN. You are parroting talking points. If Twitter were crushed, America couldn't care less and would probably turn up sanctions, not lift them. Taking down Twitter won't cripple anything in America except for certain marketers' budgets.

    • papito 3 months ago

      Scientists are squares but they also have a brain. That's why they are usually not conservatives. Conservatives are not a party; they're a herd of paranoid people who tune into Fox News every night to be told what to be afraid of next, and they're definitely not engineers or scientists.

  • outworlder 3 months ago

    Brains are excellent pattern matchers.

    Brains also suck at statistics.

    • jimmaswell 3 months ago

      This is the first time I can remember so much happening so close together. It's statistically unlikely.

  • pennaMan 3 months ago

    >July 11, 2019 7:56PM UTC[Identified] The outage was due to an internal configuration change, which we're now fixing. Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible.

    Seems #4 is the winner

    • gniv 3 months ago

      Or #1.

      I work on critical infrastructure at FAANG and it's frightening how complex things are. The folks who built the systems and knew them inside-out have mostly moved on, and the newbies, like me, don't fully understand how the abstractions leak, what changes cause what side effects etc.

  • djtriptych 3 months ago

    I've been suspecting 3) for a few months now, and I'm quite curious how our government would handle it if it _were_ the case. Only a few of these outages have had plausible post-mortems ever made public.

  • MrStonedOne 3 months ago

    Operational consistency creates a hidden single point of failure.

    If everybody is doing the same things and setting things up the same way to ensure reliability, then any failures or shortcomings in that system are shared by all.

  • AnIdiotOnTheNet 3 months ago

    It's #1. The real question is how this isn't blindingly obvious to everyone.

    • rossdavidh 3 months ago

      One possible answer: it's hard to admit that what you've worked really hard at becoming an expert in, might have been a mistake.

    • koonsolo 3 months ago

      Because we can't all be as smart as you are.

  • jayd16 3 months ago

    My guess is it's a slow news time of year, coupled with more usage of cloud services, which means these types of stories are higher profile.

  • marenkay 3 months ago

    Or we just managed to construct the biggest circular dependency ever using the whole internet and a combination of all hyped languages and frameworks.

    That would in turn lead to an insanely fragile system with increasing amounts of failures that seem unexplainable/weird.

  • chrismarlow9 3 months ago

    Everything is made of plastic these days, even software. It's immediately put out as soon as an MVP is ready. Too many managers with zero coding experience. The marketing people have taken the browser. Time to start over.

  • asark 3 months ago

    This is a pattern one might see if there were a secret, rolling disclosure of some exceptionally-bad software vulnerability, I'd think. Or same of some kind of serious but limited malware infection across devices of a certain class that sees some use at any major tech company. If you also didn't want to clue anyone else (any other governments) in that you'd found something (in either case), you might fix the problem this way. Though at that point it might be easier to just manufacture some really bad "routing issue" and have everyone fix it at once, under cover of the network problem.

  • depr 3 months ago

    so like all software has reached peak complexity this month?

    • rossdavidh 3 months ago

      It seems a bit of a coincidence, yes? Unless they are all copying each other (e.g. all using Kubernetes or what-have-you), in which case it might be less of a coincidence.

  • rossdavidh 3 months ago

    Ok, I have one to add myself:

    6) We used to have many small outages at different websites. Now, with so many things that once were separate small sites aggregated on sites like FB, Twitter, Reddit, etc we have a few large sites, so we have aggregated the failures along with that. The failure rate, by this theory, is the same, but we have replaced "many small failures" with "periodic wide-spread failures, big enough to make headlines". Turning many small problems into a few bigger ones. Just another hypothesis.

  • dv_dt 3 months ago

    Another possibility: US (or other) authorities are requiring some sort of monitoring software or hardware where disruption of service is unavoidable during the install.

    • snazz 3 months ago

      Keeping that many mouths shut seems impossible.

      • dv_dt 3 months ago

        Most people won't be directly involved in assessing or fixing the fault. "Sorry the network link went down, here is the after analysis report," seems like a reasonable cover. There are many espionage activities which are covered up, only to come out decades later.

        But really, I don't have any evidence that this possibility is any more or less likely than any other.

  • NightlyDev 3 months ago

    Software is getting increasingly complex. Why? To ensure better uptime, amongst other things. The funny part is that all the complexity often leads to downtime.

    A single server would usually have less downtime than Google, Facebook and so on. But Google and Facebook need this complexity to handle the amount of traffic they're getting.

    Makes me wonder why people are trying to do stuff like Google when they're not Google. Keeping it simple is the best solution.

  • DaveInTucson 3 months ago

    > Just a series of unconnected errors at big companies

    Except that "at big companies" is basically selection bias; problems at little companies don't get noticed because they're, well, small companies.

    And the underlying issue of the "unconnected errors" is that software is rather like the airline industry: things don't really get fixed until there's a sufficiently ugly crash.

  • bArray 3 months ago

    For point #3, there are a few irregularities:

    1. Services all going down one after another. 1 goes down - it happens. 2 go down - it happens sometimes. 3 go down - quite a rare sequence of events. But now a large number of Silicon Valley companies have experienced service outages over the last few weeks.

    2. A Russian sub said to be a "deep sea research vessel" somehow experiences a fire whilst in international waters [1]. It has been suspected that it could have been tapping undersea cables. Let's imagine for a moment a scenario where they were caught in the act, some NATO sub decided to put an end to it, and Russia covered it up to save face.

    3. Russia announces tests to ensure that it could survive if completely cut off from the internet [2]. A few months later it's like somebody is probing US services in the same way.

    4. There is currently a large NATO exercise simulating a takeover of Russia happening in countries close to Russia [3].

    Of course it's completely possible it's all unconnected, but my tin-foil-hat brain says there is a game of cloak and dagger going on here. I would say that Russia's incentive for probing the US/NATO is to test its weaknesses after it has undergone a trade war with China and raised sanctions against Iran. After all, Russian fighter planes regularly try to fly into UK airspace just to test their rapid response crews [4]; this sort of behaviour is typical of them.

    [1] https://en.wikipedia.org/wiki/Russian_submarine_Losharik

    [2] https://techcrunch.com/2019/02/11/russia-internet-turn-off-d...

    [3] https://sofiaglobe.com/2019/05/13/6000-military-personnel-to...

    [4] https://www.theguardian.com/world/2018/jan/15/raf-fighters-i...

  • lovecg 3 months ago

    It’s #4 but caused by #1. My pet theory is that we’re pretty far into this business cycle, so a lot of new companies have had the time to mature, build up complexity, shed the people most knowledgeable about the original architecture, stop caring as much about the competition, and so on. Add Apple to the mix for recent software quality issues.

  • Hamuko 3 months ago

    >It has now reached the point where even large companies cannot maintain high reliability.

    Waiting for this to be backed up by statistics.

  • matwood 3 months ago

    Reddit was also partially down this morning.

    • xvector 3 months ago

      Reddit's down weekly, though, so that's no big deal.

      • jhoh 3 months ago

        Maybe those "INSTALL OUR APP NOW!!!" banners, floating action buttons, popups and bottom/top fixed bars caused too much traffic.

  • pard68 3 months ago

    NSA firmware updates requiring a reboot.

  • frogpelt 3 months ago

    5) Some or all of the above?

    Although 3) doesn't have to be the explanation here, it is definitely happening all the time.

  • Bluecobra 3 months ago

    4.

    I think people are too accustomed to high availability/uptime nowadays. I started using the Internet in the mid 90s. Stuff used to break all the time back in those days. Now I can’t remember the last time I couldn’t reach a website because it had been Slashdotted.

  • foobiekr 3 months ago

    4.

    And imho all that’s really happening is that people are noticing the outages more, which is a good thing. For years too much of the mental model has been “{ cloud, google, Facebook, aws, xxx } never goes down!”

    That’s been unhealthy.

  • dqpb 3 months ago

    3) Come on man, you can't just go around opening parentheses and then not closing them.

  • julienfr112 3 months ago

    What about rising temperatures?

  • daveheq 3 months ago

    I don't believe it's too complex; I believe people are getting lazy. Complexity can be handled by automation, but too often people just want to rush things out for a buck instead of planning out a good product.

  • ljm 3 months ago

    Hypergrowth/blitzscaling also introduces entropy.

    The more you hire, the more plentiful and diverse your bugs will be.

    It stands out now because the stars aligned. But these issues have been coming and going for years in patternless form.

  • Angostura 3 months ago

    5) The increasing interconnectedness of things introducing new interdependences so that when one service stumbles so do many others.

  • sebastianconcpt 3 months ago

    I'd normally go for #4, but hypothesis #3 is starting to be a more plausible explanation for the timely "coincidence".

    • rossdavidh 3 months ago

      A friend of mine who is retired military told me there is a saying that "once is bad luck, twice is a coincidence, but three times is enemy action". Doesn't necessarily mean it's true, of course.

  • root_axis 3 months ago

    There's also the HN filter bubble which could be presenting a misleading picture of "outage" frequency.

  • taf2 3 months ago

    6) It's summer and lots of engineers are either a. on vacation or b. thinking less clearly

  • danellis 3 months ago

    #4.

    When things are random, they cluster.

  • dboreham 3 months ago

    Could be a Tacoma Narrows Bridge type problem.

  • djohnston 3 months ago

    It's end of half. Everyone is entering reviews. Gotta deliver... something...

  • auslander 3 months ago

    Sysadmin and DevOps engineer walk into a bar ...

  • elamje 3 months ago

    The brain is the greatest pattern matcher in the world. While it is unlikely all of these companies would have major outages in a month, be wary that the subconscious is constantly generating narratives to explain statistical anomalies.

    Interesting theories nonetheless:)

    • dmix 3 months ago

      The more the conspiracy grows the faster these otherwise minor stories shoot to the top of HN and add to the pattern.

      It fuels itself.

      • alanbernstein 3 months ago

        "Minor" seems inappropriate. Can you remember another time when so many high-profile websites/services have had outages in so short a time span?

        • ambrice 3 months ago

          No. And a year from now I won't remember this time either.

    • braythwayt 3 months ago

      > be wary that the subconscious is constantly generating narratives to explain statistical anomalies

      This comes up all the time in sports. Let's take pool for example. There are various guesstimates floating around, and I do not have access to detailed tournament statistics, but I have heard that in games where sinking a ball on the break is an advantage, for decent players there's maybe a 75% chance that a ball will go down.

      So once in every four breaks, you won't sink a ball. How often do you fail twice in a row? Once in every sixteen breaks. Failing three times in a row? Once in every 64 breaks. Four times in a row? Once in every 256 breaks.

      What about five straight breaks without sinking a ball? Once in every 1,024 breaks. That's a lot of breaks. But wait up a moment.

      Let's ask, "If you miss a break, what're the odds of it becoming a streak of five misses in a row?" The answer is, "One in every 256 streaks of misses will be a streak of five or more misses." 1/256 is not particularly rare, if you play often enough to sink a ball on the break 75% of the time.
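
      A quick simulation bears the arithmetic out (a sketch; the 75% make rate is the same assumption as above):

        # Given a missed break, how often does it turn into five misses in a row?
        import random

        random.seed(0)
        p_make = 0.75
        breaks = [random.random() < p_make for _ in range(1_000_000)]

        long_streaks = 0
        total_misses = 0
        for i in range(len(breaks) - 4):
            if not breaks[i]:
                total_misses += 1
                if not any(breaks[i:i + 5]):   # this miss plus the next four
                    long_streaks += 1

        print(long_streaks / total_misses)     # comes out near 1/256, about 0.0039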

      What is the point of knowing that a streak of five misses in a row is rare but not that rare? Well, if you miss five in a row, do you chalk your cue for break number six as usual? Or do you tell yourself that your break isn't working, and start adjusting your stance, aim, spin, &c?

      If you start adjusting everything when you get a streak of five misses in a row, you may just make things worse. You have to pay enough attention to your distribution of misses to work out whether a streak of five misses in a row is just the normal 1/256 streaks, or if there really is something amiss.

      The brain is a great pattern matcher, but it sucks at understanding statistics.

      ---

      The flip side of this, of course, is that if you upgrade your brain well enough to understand statistics, you can win a lot of money.

      If a pro misses five in a row, feel free to wager money that they'll sink a ball on their next break. Your friends may actually give you odds, even though the expectation of winning is 75-25 in your favour.

      • liberte82 3 months ago

        This is a great explanation of the issues we have with statistics. You see this all the time in other sports too. As a hockey watcher, I see fans always wanting “explanations” for a loss or a losing streak. More often than not, it’s just bad luck, and the kneejerk reactions that coaches and GMs take often just make things worse.

        Nate Silver did a writeup showing the math around how the winner of the Stanley Cup comes down to little more than random chance.

    • whatshisface 3 months ago

      Saying that it's an illusory pattern without checking the statistics is no more scientific than saying it's a conspiracy without checking the statistics.

    • richk449 3 months ago

      > The brain is the greatest pattern matcher in the world.

      You have obviously never tried to model the stock market with a neural net.

  • calimac 3 months ago

    There is a silent civil war going on to free the users from malicious social technocracy

  • pu8h 3 months ago

    Maybe an alien invasion.

  • akhilcacharya 3 months ago

    The other possibility is intern season (I'm 99.99% joking)

    • rossdavidh 3 months ago

      I'm 99.99% laughing (and 0.01% thinking 'uh oh').

    • kgraves 3 months ago

      As much as I don't like interns, I am sure that they wouldn't even get to touch a system at the scale of Twitter's. /s

  • sbmassey 3 months ago

    Taking down Twitter could be a huge boon for the economy, though.

    • rossdavidh 3 months ago

      Productivity skyrockets!

  • ducktypegoose 3 months ago

    Software complexity escalating over time? Please! The new microservices architecture we have been migrating to over the last year or so is so stable and makes tracking down problems a walk in the park. Not to mention the NOSQL database is a dream come true, as long as you don't need to query anything other than the partition key.

  • jonprobably 3 months ago

    It's summer time and everyone who knows how stuff works is halfway through a drink right now. Probably with their families. Is it a trend year over year for 7/4 +/- a week?

idlewords 3 months ago

So storytime! I worked at Twitter as a contractor in 2008 (my job was to make internal hockey-stick graphs of usage to impress investors) during the Fail Whale era. The site would go down pretty much daily, and every time the ops team brought it back up, Twitter's VCs would send over a few bottles of really fancy imported Belgian beer (the kind with elaborate wire bottle caps that tell you it's expensive).

I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?

Also at that time, I remember asking the head DB guy about a specific metric, and he ran a live query against the database in front of me. It took a while to return, so he used the time to explain how, in an ordinary setup, the query would have locked all the tables and brought down the entire site, but he was using special SQL-fu to make it run transparently.

We got so engrossed in the details of this topic that half an hour passed before we noticed that everyone had stopped working and was running around in a frenzy. Someone finally ran over and asked him if he was doing a query, he hit Control-C, and Twitter came back up.

  • evanweaver 3 months ago

    I worked there at the time and ended up running the software infrastructure teams that fixed all these problems. The beer wasn't a reward; it was because people were stressed and morale was low. Nobody brought the site down on purpose.

    What really made me mad was when we hired consultants and the contract would end, usually without much success because Twitter's problems were not normal problems, and then they would send us a fancy gift basket with our own wasted money.

    Maciej, we are still waiting for you to ship the executive dashboard.

    • idlewords 3 months ago

      That dashboard supported something like a dozen people over its lifetime. One person would start writing it, then quit, and be replaced by another person who rewrote it in their preferred language, and then the cycle would repeat.

      It was a VC-funded welfare program for slackers and I miss it greatly.

      • nevf1 3 months ago

        I lol'd at "welfare program for slackers" - That's the dream really... Find a chaotic workplace that lets you play with your favorite languages and no real tangible outcome.

  • tonystubblebine 3 months ago

    To take the history of direct queries at Twitter even further back, I built a web interface at Odeo for the CEO to run direct queries against the database (and save them so he could re-run them). There were some basic security precautions, but this was totally cowboy.

    That Odeo team was filled with best practices aficionados and the management (including me) was a bit cowardly about being clear that "WE ARE FAILING HARD AND FAST." Damn the practices.

    So of course the engineering team freaked out, especially since the CEO managed to find lots of queries that did take the site down.

    But I honestly credit that as one of the biggest things that I contributed to Twitter. Having easy SQL access let the CEO dig into the data for hours, ask any question he wanted, double check it, etc. He was able to really explore the bigger question, "Is Odeo working?"

    The answer was no. And that's how he decided to fully staff Twitter (twttr then) as a side project, buy back the assets, and set Twitter up as its own thing.

    I think that it really was very close--if we'd moved any slower we would have run out of money before anyone was ready to commit to Twitter. Same story about Rails--without being able to do rapid prototyping we never would have convinced ourselves that Twitter was a thing.

    • nthj 3 months ago

      Just a quick note not directed at OP but for any other engineers that may be unaware, these days AWS makes provisioning a read replica painless, and you can point the CEO to up-to-the-minute data while essentially firewalling the queries from customer operations.

      • mlevental 3 months ago

        how?

        • teraflop 3 months ago

          First Google result for "aws read replicas": https://aws.amazon.com/rds/details/read-replicas/

          > Using the AWS Management Console, you can easily add read replicas to existing DB Instances. Use the "Create Read Replica" option corresponding to your DB Instance in the AWS Management Console.
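
          If you'd rather script it than click through the console, here's a minimal sketch with boto3 (the instance identifiers and instance class below are made up, so substitute your own):

              # Spin up an RDS read replica for BI/ad-hoc queries.
              import boto3

              rds = boto3.client("rds")

              rds.create_db_instance_read_replica(
                  DBInstanceIdentifier="prod-db-bi-replica",   # hypothetical name for the new replica
                  SourceDBInstanceIdentifier="prod-db",        # hypothetical name of the production instance
                  DBInstanceClass="db.r5.large",
              )

          Point the CEO's query tool at the replica's endpoint; the worst a runaway query can do there is lag the replica, not lock up production.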

    • NKCSS 3 months ago

      Why not have it run against a replicated copy? I did that in the past, works amazingly, they can f* up all they want without any implications.

      • tonystubblebine 3 months ago

        This was 2005. We had dedicated servers in our own cage. I can't remember if we already had replicas. It seems plausible. But actually spinning up a new one would have required more work and convincing than I wanted to do.

      • AgentME 3 months ago

        It's probably easy to do if you know it's an issue to begin with. I've run into this scenario before (running sql queries to read data that turned out to lock everything) and it caught me by surprise. Why would a read query cause the database to lock anything? I thought databases did stuff like multiversion concurrency control to make locks like that unnecessary.

        • joevandyk 3 months ago

          Doing large queries on a Postgres standby had the potential to mess up the master, depending on configuration settings.
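
          If anyone hits this today: to the best of my knowledge the settings in question are hot_standby_feedback and max_standby_streaming_delay. With feedback on, long standby queries hold back vacuum on the primary (table bloat); with it off, the standby cancels long queries that conflict with replay instead. A quick sketch for checking what a replica is running with (the connection string is a placeholder):

              # Inspect the replica's query-conflict settings.
              import psycopg2

              conn = psycopg2.connect("host=replica.internal dbname=app user=readonly")  # placeholder DSN
              cur = conn.cursor()
              for setting in ("hot_standby_feedback", "max_standby_streaming_delay"):
                  cur.execute(f"SHOW {setting}")
                  print(setting, "=", cur.fetchone()[0])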

    • ziftface 3 months ago

      Thanks for sharing. Out of curiosity, why was the answer no? Was the issue the downtime or something more subtle?

      • tonystubblebine 3 months ago

        I think in the end he lost faith over retention. We got a lot of traffic and new users but didn't keep any of it. He was already suspicious that iTunes was going to kill us and so the stats were the nail in that coffin. He was right. We were ten years too early to podcasting.

  • sbmthakur 3 months ago

    This reminded me of something too!

    I used to work (on the backend) on a popular app (in my country) which had a good number of users. One day I was asked to work with some infra/sysadmin folks who wanted to fix some issues with the servers in our inventory. We happily updated kernels and even rebooted servers a few times. I came back to my team and saw them deeply engrossed in production logs. Turns out a few of the servers that were "fixed" were actually production servers. I almost shouted the F word when I cross-checked the IPs. The confusion happened because the server guys used data IPs and we used management IPs. It exposed serious miscommunication between our teams. But fun times indeed!

  • oneeyedpigeon 3 months ago

    > It took a while to return, so he used the time to explain how, in an ordinary setup ...

    This one was visible from such a great distance, it's a wonder neither of you spotted it as it happened! I love your post — reminds me of BOFH :)

    • idlewords 3 months ago

      The guy had an amazing beard, with streaks of white in it! He looked like a great wizard to me. I remember even as we noticed people were frantic, saying to one another "oh man, another outage, thank goodness it's not us!"

      • emerongi 3 months ago

        And now it's a full-blown sitcom scene

    • baud147258 3 months ago

      A true BOFH would have either disposed of any witness or made them the culprit.

      • SllX 3 months ago

        A true BOFH works with what he’s got, and when what he’s got is a fool willing to do all his work for him, then it’s time to implement Plan A: sit back and enjoy the fireworks.

  • ryandrake 3 months ago

    > The site would go down pretty much daily, and every time the ops team brought it back up, Twitter's VCs would send over a few bottles of really fancy imported Belgian beer

    Never understood this mentality but have seen it at many companies. Rewarding someone or some team for heroically fixing something after a catastrophic failure. Talk about misaligned incentives! Reminds me of the Cobra Effect [1]. When you reward “fixing a bad thing” you will get more of the bad thing to be fixed.

    1: https://en.wikipedia.org/wiki/Cobra_effect

  • azhenley 3 months ago

    This gives me hope that one day I will be able to run a startup. The big tech companies aren't too different than the rest of us after all...

    • kgraves 3 months ago

      Agreed, the only thing that's a showstopper for me is the money and talent. It is still a struggle to find talented people who want to work for a startup.

      • goobynight 3 months ago

        Even harder to find ones that wish to remain working for a startup!

  • fzort 3 months ago

    This is hilarious, thanks for sharing. I used to work at companies like this, except they weren't worth billions of dollars.

    • isostatic 3 months ago

      Neither was Twitter in 2008; it didn't reach a $1B valuation until the end of 2009.

    • kreetx 3 months ago

      The story is most probably not true. Love the taco tunnel though :)

      Edit: apparently the stories actually are true.

  • busterarm 3 months ago

    This is the same group of folks who wrote the infamous ranty blog post shitting all over Rails back in... '11(?) when it was pretty clear that their workload wasn't suited to an RDBMS and ActiveRecord. They wrote their own message queue twice, despite suggestions to use known tools, before eventually giving up.

  • applecrazy 3 months ago

    That’s hilarious. Reminds me of a clip from the show Silicon Valley.

  • ParadisoShlee 3 months ago

    Is that beer story satire?

    • idlewords 3 months ago

      No, it is true.

      • kreetx 3 months ago

        Is it actually really true? The second part, too? I thought this can't be true and must be a (good) story just to amuse the readers - I guess I was wrong.

  • rco8786 3 months ago

    I worked there for a bit. Sometime around 2014 I dropped a production DB table (via a fat finger, shouldn’t have even been possible in hindsight). It wasn’t consumer facing but the internal effect made it look like all of Twitter was simultaneously down. Mass hysteria there for 20 min or so.

  • colpabar 3 months ago

    Can someone explain the joke (about the beer) because I genuinely don't understand

    edit: pretty please

    • baud147258 3 months ago

      Each time the ops team brought Twitter back up, they received good beer. So each time Twitter went down, they could expect to receive beer. Without idlewords' actions, they would have had an incentive (good beer) to let Twitter keep going down rather than do the work to improve its stability.

    • arkades 3 months ago

      Under the guise of preventing the ops team from being incentivized to create outages, he was selflessly stealing all the nice beer for himself.

    • ficklepickle 3 months ago

      He took the beer because he wanted it. "Perverse incentives" are an excuse, because nobody is going to kill their production servers and all the panic that entails for like $10 worth of beer.

    • Zebfross 3 months ago

      Sounds like the guy was bragging about his SQL skills to avoid locking the database but ended up locking the database anyway (thus, people running around)

      • gunnihinn 3 months ago

        Can you simplify that for me, chief?

    • treis 3 months ago

      If the ops team got beer every time the servers went down (as a reward for fixing them) then they'd have an incentive for the servers to go down.

      • Jach 3 months ago

        We all understand the perverse incentives joke, I think what's confusing people here is whether there's some other hidden joke they're missing that suggests not to take OP at his word that yes, he did make off with someone else's gift, which is generally considered a dick move.

        • penteract 3 months ago

          You can tell he's a good guy or he'd have only taken half of it to ensure the continued supply.

      • slackfan 3 months ago

        What the hell are all of you smoking? Some moderately expensive alcohol is nowhere near enough reward to take down a service.

        • colpabar 3 months ago

          If it was a sure thing that the ops engineers were doing that, then sure, it'd be kinda funny. Otherwise it just seems like a dick move.

        • wtallis 3 months ago

          The alcohol was an incentive to bring the service back up quickly, but not an incentive to prevent it going down in the first place. Twitter was going down often enough on its own that nobody needed to be motivated to help it crash (except that bringing it back up sooner gives it another opportunity to crash again sooner).

        • eitland 3 months ago

          Operant conditioning is a thing and it works.

          While you and I would not do this, I'm afraid it would somehow find a way to work in this case too.

          • slackfan 3 months ago

            Ops engineers don't get paid enough to fix dev fuck-ups as it is. No amount of beer is going to fix that.

    • vokep 3 months ago

      He's taking home the special expensive beer and not telling them about it because he cares about the health and well being of his team so much, and yet they wouldn't even consider him a hero for this, how ungrateful they are!

    • wedn3sday 3 months ago

      If every time the site was brought back up (because it had gone down) the ops guys got free fancy beer, then the message pretty quickly turns into "if the site goes down, I get rewarded."

  • southerndrift 3 months ago

    In other words, that beer gave you the motive to bring twitter down, which you inevitably did by asking that question.

  • el_benhameen 3 months ago

    The second story had me in tears. Especially given that I'm building a similarly scary query right now (thankfully not against live).

  • dmix 3 months ago

    Woo startups.

  • jayflux 3 months ago

    > We got so engrossed in the details of this topic that half an hour passed before we noticed that everyone had stopped working and was running around in a frenzy. Someone finally ran over and asked him if he was doing a query, he hit Control-C, and Twitter came back up.

    This would not be out of place as a scene in Silicon Valley

    • georgehotelling 3 months ago

      idlewords, the user you're replying to, was listed as a consultant on the show

      • idlewords 3 months ago

        For a later season. This was one of my favorite scenes on the show.

        • kevinlou 3 months ago

          Completely unrelated, but I find myself reading your post about Argentinian steaks at least once a year. It's perfect. https://idlewords.com/2006/04/argentina_on_two_steaks_a_day....

          • klausa 3 months ago

            No joke, this post was largely the reason I wanted to travel to Argentina.

            The food lived up to the mental image I had after reading the post.

          • trystero 3 months ago

            I just found and read that article yesterday. The writing is on another level.

          • GFischer 3 months ago

            As an Uruguayan, I loved it and found it entirely accurate :)

  • milquetoastaf 3 months ago

    You sound like the worst coworker in the world. Stealing rewards from ops? That sucks.

  • dubcanada 3 months ago

    "I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?"

    Wait, so you stole rewards meant for a team that was spending time (I assume extra or stressful) on something you didn't do or have any part in. And you want a cookie?

    I mean, I get it, the company was probably not great in its infancy. But what?

    • pgrote 3 months ago

      I think OP is saying the rewards were confiscated so the team wouldn't begin breaking things on purpose to get a reward when they fixed it.

      • UncleMeat 3 months ago

        Yeah but does anybody believe that the engineers would deliberately break things so they could have to work in a stressful environment bringing things back up just to get some free beer?

        • alexanderdmitri 3 months ago

          If your incentives are aligned with firefighting as opposed to fire prevention, because management is not motivating and rewarding the extra work that goes into avoiding these scenarios in the first place, you're encouraging fires.

        • Jach 3 months ago

          Indeed, the usual motivation to try and be called a hero for putting out the fire you started is much more valuable than free booze: a title promotion with a pay bump.

    • idlewords 3 months ago

      I don't want a cookie; I want more $24/bottle Belgian beer.

      • JorgeGT 3 months ago

        You should submit a request to the Pinboard CEO...

      • yyyk 3 months ago

        Wouldn't that have made you the one with a "perverse incentive"?

        • C14L 3 months ago

          That explains why he walked over to the DB guy and asked him to run an expensive query on the live system ;)

      • gnulinux 3 months ago

        That's usually called stealing, or something a little softer than that. It's interesting that you shared that experience expecting us to laugh at it. The rest of the comment was hilarious and I'm happy you shared it, but that bit is very odd. I also see where you're coming from, but your act was ethically questionable.

      • sizzle 3 months ago

        Just wanted to say that I enjoy reading your blog.

      • 100100010001 3 months ago

        Agreed! Every selfish prick should take whatever they want even if it’s not theirs.

    • vorpalhex 3 months ago

      It's a joke. Laugh, it's funny.

      • Dylan16807 3 months ago

        It's one of those jokes where if the story isn't true then the entire basis for it being funny disappears. (And if it is true then the joke isn't good enough to make up for the actions.)

      • slackfan 3 months ago

        Having worked on a lot of ops teams in unstable environments, it's just really dickish.

        • q3k 3 months ago

          I also have. idlewords' post is one of the funniest things I've read this week.

    • nomadlogic 3 months ago

      yea as an ops engineer that's probably the worst violation of trust i've ever heard of.

    • reaperducer 3 months ago

      Wait so you stole rewards for a team that was spending time (I assume extra or stressful) on something you didn't do or have any part in.

      The HR department in my company does this, and then redistributes the gifts to everyone in a random drawing at the Christmas party.

      One year some department got a bunch of PlayStations, and a couple of them ended up in my department. The only thing my department contributed to the kitty was candy. I bet some people in that other department were disappointed.

    • CiPHPerCoder 3 months ago

      Finally we get the long awaited sequel to One Flew Over the Cuckoo's Nest...

      One flew over the dubcanada's head.

  • sho 3 months ago

    Can all of the geniuses posting outraged replies accusing u/idlewords of Grand Theft (Beer) please also include their vaccination status? Thanks. For research.

  • 100100010001 3 months ago

    Hero? You’re a villain who steps on teammates. The worst part is you thought it’d be okay to share that and think we’d be on your side. Have you no shame?

    • idlewords 3 months ago

      My job was to make growth graphs for investor slide decks, so by definition I had no shame.

      • DanFeldman 3 months ago

        Or, if you had any shame, its growth would be up and to the right!

    • Doctor_Fegg 3 months ago

      ...have you no sense of humour?

    • vorpalhex 3 months ago

      Please take a moment to reboot your sarcasm detector and unplug from the internet for a minute. Maybe go outside, go for a nice walk, have a glass of water, pet a dog?

  • sandworm101 3 months ago

    >> he hit Control-C, and Twitter came back up.

    Monolithic architecture. When I did security work I fought this every day. Moving away from it is a nightmare of technical debt and heated debate about who should control what. I'm reminded of a story from the early days of MSN. The legend goes that in the late 90s MSN ran out of one cabinet, a single server. The server had redundant power supplies, but only one physical plug.

    • hn_throwaway_99 3 months ago

      > Monolithic architecture.

      This particular problem had nothing to do with a monolithic architecture. Your app can be a monolith, but that still doesn't mean your BI team can't have a separate data warehouse or at least separate read replicas to run queries against.

      • Dylan16807 3 months ago

        It's not "nothing to do with". You're correct that a monolithic architecture does not imply that a single read query will lock the entire database. But it is a prerequisite.

        • hn_throwaway_99 3 months ago

          Not really. I've seen more than one microservice-architected (admittedly, poorly) system where, instead of the whole DB freezing up, just the one DB would freeze, but then all of the other microservices that talked to the frozen microservice didn't correctly handle the error responses, so now you had corruption strewn over multiple databases and services.

          So, while it's true the failure mode would be different, "one bad query fucking up your entire system" is just as possible with microservices.

      • sho 3 months ago

        And of course this is standard practice. I've contracted on largish apps before (Rails! Shock!) and of course we provided read-only replicas for BI query purposes. I wouldn't have provided production access even if asked.

        Anything else is simple incompetence and the macro-organisation of the code and/or services is irrelevant.

      • sandworm101 3 months ago

        If your website crashes because a single person ran a query, your system is too monolithic. You can have thousands of little microservices running all over the place, but a single query causing a fault proves that a vital system is running without redundancy or load sharing and that other systems cannot handle the situation. You have too many aspects of your service all tied together within a single system. It is too monolithic.

        • zbentley 3 months ago

          I think "monolithic" and "fragile" are orthogonal concepts.

  • Raphmedia 3 months ago

    > I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?

    Wait, I don't understand.

    Why would anyone call you hero?

    Are you suggesting that the team would deliberately crash the app to receive beers and that by stealing them you stopped this from happening?

    Free drinks and free food is the standard here to reward teams when they spend extra unpaid time away from their families.

    All of the posts asking the same question are being down voted. Am I missing something?

    You said you were a contractor at the time. Unless you were on the management team, I fail to see how it was your responsibility to decide what happened to them.

    • LeoPanthera 3 months ago

      > Am I missing something?

      That it is a joke.

      • Raphmedia 3 months ago

        The humor must be lost in translation then, I don't see anything resembling a joke.

    • teddyh 3 months ago

      > Are you suggesting that the team would deliberately crash the app to receive beers

      https://en.wikipedia.org/wiki/Perverse_incentive

      • Raphmedia 3 months ago

        Yes, the cobra effect exists. Should this mean that everyone needs to stop all forms of positive reinforcement? I don't believe so.

        I doubt anyone would risk a comfortable job at Twitter for a few bottles of beer. Even if they are really fancy, that's what... $20-50?

        If this had been worded as "Haha, I stole the bad team's beer" I would have laughed.

        However, worded as "where is my reward for being smart and stopping the cobra effect?" it's just a humblebrag and plain unfunny.

lukey_q 3 months ago

A lot of high-profile outages recently. Can't actually remember the last time Twitter went fully down. Have to confess I immediately assumed an issue with my own connection, even though every other site is working.

Unrelated, but for some reason the phrase "I have no mouth and I must scream" just popped into my head

  • rococode 3 months ago

    Twitter is especially weird for this since it's often a platform where people talk about downtimes. I don't see this downtime mentioned on Reddit and I don't know of other sites where it might be discussed, so if Hacker News happened to go down at the same time, where would I go to talk about it with online strangers and find out if it's just me? Nowhere, I guess, I'd just wait it out with no extra insights on what's going on. A small reminder of what the world used to be like haha.

  • eljimmy 3 months ago

    Yesterday, Stripe went down for half an hour, and later yesterday, Google's Android Payment Validation API went down for more than 2 hours.

    Stripe at least acknowledged their downtime. Google was oblivious and made no update to any of their status pages. Really horrendous awareness and support from Google, as per usual.

    • patmcguire 3 months ago

      Odd coincidence, seems like it might have been some upstream banks having a bad SCA rollout

    • paulddraper 3 months ago

      Giving Amazon a run for their money I see.

  • the_af 3 months ago

    > Unrelated, but for some reason the phrase "I have no mouth and I must scream" just popped into my head

    That phrase was coined by Harlan Ellison in his classic scifi short story to represent a situation of complete despair and powerlessness.

    I don't think a lack of Twitter, Whatsapp, Instagram, Facebook or Gmail -- however inconvenient -- would fill me with that kind of existential dread :)

    • Nightshaxx 3 months ago

      I got rid of all of those except Gmail (need it for work, school, etc) and WhatsApp (only use it for one group chat) and it is very freeing. I realized social media (Instagram especially) breeds unhappiness with your life, while all of the models and "influencers" on it don't live realistic lives. In fact, I've experienced firsthand that their posts are often doctored or don't actually portray the situation accurately.

    • AnIdiotOnTheNet 3 months ago

      Not those things specifically, but take a step back and look at the Rube Goldberg inspired complexity of modern technology and despair.

    • nemof 3 months ago

      honestly i feel unmoored without tweetdeck on my second screen ticking by. i realise this is probably a bad thing, but getting my daily news and info, i'd say 90% comes from twitter.

  • pwenzel 3 months ago

    Mercury is in Retrograde through July 31.

    • tunesmith 3 months ago

      I never notice problems when I don't know about that, so I wish I hadn't read that. :)

  • cambalache 3 months ago

    It makes me wonder if a powerful malicious agent could devise a complex operation (planting people in several key places) and wipe out most of the databases of one of these places. It would be interesting to see what would happen if suddenly all of Twitter or Facebook were deleted, poof!, gone in a second.

    • idlewords 3 months ago

      Actually deleting stuff at these scales takes many, many days if you don't want it to be trivially recoverable.

    • gnulinux 3 months ago

      You can't delete that much data in a matter of seconds digitally. You need to physically harm the hardware.

      • edwintorok 3 months ago

        Wipe the encryption key of your SSD (a small amount of data to wipe) and the whole SSD is unrecoverable. FWIW, at least OCZ SSDs have an encryption key flashed into them even if you didn't turn encryption on. Putting new firmware on the drive can wipe this key and make your old data inaccessible. Source: I got a buggy OCZ firmware that failed to make the device appear on the SATA bus; the only way OCZ could fix it was to install new firmware, which wiped the key, and hence my data was unrecoverable.

        • gnulinux 3 months ago

          Interesting point. I stand corrected. Not sure why I didn't think that way.

    • Keloo 3 months ago

      I recommend watching Mr. Robot :)

      • cambalache 3 months ago

        I watched the first season but got tired of the constant morose attitude of the main character. Does it get better?

        • Keloo 3 months ago

          the mood of the character doesn't change. The plot is quite good though.

  • jasoncartwright 3 months ago

    Move fast and break things

    • malhotra_chetan 3 months ago

      I think they are past that stage! If a $28 billion company gets to say that, I am not going to feel bad about my own site going down.

neom 3 months ago
  • qrush 3 months ago

    Me too! :)

    • thomasjudge 3 months ago

      What is the new fail mascot called? It looks like a cartoonish alien with a Pac-Man/snipper hand, and another hand that looks like a burning fuse, standing next to a bomb with a lit fuse that is split open so it also looks like a Pac-Man.

      • twic 3 months ago

        It's a robot whose hand has fallen off.

    • jhsu 3 months ago

      legendary

whatshisface 3 months ago

I remember once, when we were at three outages, someone posted that they thought three was a reasonably sized random cluster given the rate at which services go down. How many outages have we had in the last 30 days, how many do we have per month on average, and how strongly can we reject the null hypothesis?

The formula for computing how unlikely this is is the Poisson distribution: `λ^k * e^-λ / k!`, where λ is the average number of outages every 30 days and k is the number of outages in the past 30 days. If you find the numbers, let me know what the answer is.
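
A sketch of the arithmetic, with a placeholder λ since I don't have the real baseline either (note that for rejecting the null hypothesis you want the tail probability P(X >= k), not just the single-point probability):

    # How unlikely is seeing k or more outages in 30 days, given rate lam?
    from scipy.stats import poisson

    lam = 2.0   # placeholder: assumed average outages per 30 days
    k = 6       # observed outages in the past 30 days

    p_exactly_k = poisson.pmf(k, lam)      # the formula above
    p_k_or_more = poisson.sf(k - 1, lam)   # survival function: P(X >= k)

    print(f"P(X = {k})  = {p_exactly_k:.4f}")
    print(f"P(X >= {k}) = {p_k_or_more:.4f}")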

  • doomjunky 3 months ago

    07/11/2019 Twitter outage

    07/03/2019 WhatsApp, Facebook & Instagram outage

    07/02/2019 Cloudflare outage (Discord, 9gag, Medium)

    07/02/2019 Google Cloud glass fiber damage (Google Services)

    06/24/2019 Verizon route leak (Cloudflare, Google, Amazon, Reddit)

    06/02/2019 Google Cloud escalated outage progression (G Suite, YouTube)

  • lordnacho 3 months ago

    The outages might not be independent. Chances are these services are cross integrated at some level.

    • whatshisface 3 months ago

      They are, but they're going down on different days. Whatever effect is left over could be accounted for by looking at the postmortems and not counting "we were down because AWS was down."

lopespm 3 months ago

A comment made before by another user about Facebook, Instagram and WhatsApp outages offers an interesting perspective:

"This outage coincides with FBs PSC (performance summary cycle) time. I wonder if this is folks trying to push features so they get “impact” for PSC."[1]

I wonder if the recent outages on other well known services could be heavily influenced by a similar phenomenon. If this holds water, it would be interesting to have an article or study around this issue. I certainly would be interested in reading it.

[1] https://news.ycombinator.com/item?id=20350579

mikece 3 months ago

I posted the question on Slack "How do you spread the word when Twitter goes down?" People thought that was so hilarious... until they realized Twitter was actually down.

Honestly, "Hacker News" was my answer which seems to be effectively correct -- and today I learned about the existence of twitterstat.us!

pcora 3 months ago

Google, Apple, Microsoft, Facebook... and now Twitter? I keep asking the same question: when is Amazon's outage day?

  • unreal37 3 months ago

    July 15 and 16, coming up!

    • degenerate 3 months ago

      Free shipping on outages for all prime members!

  • rahuldottech 3 months ago

    You missed cloudflare, stripe, slack

    • SteveGregory 3 months ago

      The Google, Cloudflare, and Stripe outages all affected Shopify. So Shopify has had several meaningful outages over the past few weeks.

      • rmoriz 3 months ago

        Isn't Shopify a Fastly customer? Do they still use Cloudflare?

        • SteveGregory 3 months ago

          I was also under that impression until the Cloudflare event happened. I do not actually know what their dependency was, but all shops were taken offline.

    • victorbojica 3 months ago

      Reddit too :(

      • freehunter 3 months ago

        Reddit goes down constantly though, not a great benchmark.

      • gnulinux 3 months ago

        Reddit goes down almost every day. I see the "Ow!" thing a few times a week. Full confession: I'm constantly on Reddit. :(

    • pcora 3 months ago

      oh, right! the slack one sucked

  • daveguy 3 months ago

    Hm. If you were going to plan a worldwide internet outage (or the appearance of one) you could test your ability to take down individual services first and then take them all down at once.

EvanAnderson 3 months ago

All I can think, smugly, is that DNS, SMTP, HTTP, etc. don't "go down". Twitter should be a protocol, not a website.

  • zzo38computer 3 months ago

    Yes, I agree, and I invented a Netsubscribe protocol with a similar use. (There is also ActivityPub, but Netsubscribe is much simpler.) (And then there is other stuff for which suitable protocols already exist, such as SMTP, NNTP, IRC, etc.)

  • Elidrake24 3 months ago

    DNS absolutely -goes down-, though in much more entertaining ways.

    • EvanAnderson 3 months ago

      All the DNS in the world can't fail at once. All the Twitter can.

      • MYEUHD 3 months ago

        This is because Twitter is centralized.

kevinlou 3 months ago

It's weird seeing the go-to downtime tracker go down. I'm so wired to check Twitter that I kept refreshing for a good 10 seconds.

  • edwintorok 3 months ago

    Time to make HN the go-to downtime tracker. Did anyone measure HN's uptime over the past years?

    • snazz 3 months ago

      I don’t know about empirical data, but HN occasionally goes into a mode where page loads that don’t hit the cache (logged-in users) take 10+ seconds. I haven’t been on when it’s gone down completely since I signed up (not too long ago).

  • SimeVidas 3 months ago

    1. Wow, Twitter is down.

    2. I bet it’s trending. Let me check.

    3. …

    4. Oh.

abadabadingdong 3 months ago

I wonder how many conspiracies this single outage will trigger.

  • danso 3 months ago

    Given that today was the White House's "Social Media Summit", no doubt there will be a few conspiracies floated. I'm betting "Twitter wanted to block out all the criticism coming from the summit!" will be a popular one.

    • indigochill 3 months ago

      I've got a couple conspiracy theories at the moment:

      1. It's a deployment of some infrastructure change the government got the big tech companies to sign onto.

      2. It's a "shot across the bow" from some external party to demonstrate their control over major infrastructure.

      2.a. Also could have been a mix of 1 and 2. The government orchestrated the outages in order to add fuel to the hysteria over Chinese "spy chips". However, given the story every time seems to be "Someone goofed a configuration", this theory doesn't seem to have much life left in it.

      • cabaalis 3 months ago

        My conspiracy theory: The dehumanization regex was found to match too many non-conservatives, prompting a rollback that failed.

    • YinglingLight 3 months ago

      Nah that's low level conspiracies. The new tier are in the vein "Twitter/Facebook/Google is currently being raided and forced to copy over DBs for an ongoing investigation into surveillance/censorship/pizza/Molochworship".

  • rootforce 3 months ago

    Theory: The internet is really just a series of green tubes underground and the last earthquake knocked out the primary plumber named Mario.

    • moate 3 months ago

      My theory: The internet in fact IS a big truck, and it got a flat tire.

      • brooksgarrett 3 months ago

        No. The internet is in a box and Jenn broke it.

  • iamnotacrook 3 months ago

    Yeah, I'm sure it's all a big coincidence. If someone was targeting them all I'm sure the effects would be entirely different.

  • master-litty 3 months ago

    I'm counting three in these comments so far :)

  • dvduval 3 months ago

    The Tron man's biker shorts ripped right in the crotch, and they're rushing to repair the split in his pants.

dthedev 3 months ago

Pray for the team that has to handle this ticket.

  • falsedan 3 months ago

    That’ll be fine; a post-mortem will show that ops weren't the cause, and their comp package will help them get over this little package of stress.

idlewords 3 months ago

On the status posts in particular I really miss the ability to sort comments by new on this site.

  • binarymax 3 months ago

    Just last week I found a setting deep in my profile config that let me disable ‘recommended tweets first’ or similar. When it’s back up I can check the exact setting

    • idlewords 3 months ago

      I mean on Hacker News, not Twitter.

dewey 3 months ago

It has been a long time since I've seen the equivalent of the fail whale on Twitter. It was a weekly occurrence back in the day.

unwabuisi 3 months ago

I wish they would bring back the fail whale!

  • jachee 3 months ago

    Agreed! Their little amputated robot looks too much like the reddit Snoo.

KuhlMensch 3 months ago

Years ago I read an amazing article (from HN) about how (complex) config, rather than code, ends up being the cause of outages at scale. I always reflect on that when designing almost anything these days.

  • cfors 3 months ago

    I would love to see that article. That isn't surprising in the slightest to me.

    Just a quick nitpick. A bad config, more often than not in my experience, opens up a code path that is riddled with bad code, whether because it was never properly tested or because it was tested in the wrong environment.

    But to your point, I think most people would agree that configuration changes are almost never reviewed with the granularity of a code change. Yes, we may do our due diligence with an approved PR, vetting the configuration, and testing the change before deploying it. But reviewing a PR with a bad config change in JSON or YAML doesn't necessarily tell you about the code paths it will open up, which makes it much harder to reason about the consequences of a potential bad config push.

    We should always be reflecting about how adding knobs (configuration) to our programs greatly increases the complexity of the service.
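
    To make that concrete, here's a toy sketch (all names made up) of how an innocent-looking one-line config diff can flip traffic onto a code path that the config review itself never shows you:

        # The reviewed "change" is a single boolean in a YAML file.
        import yaml

        CONFIG_YAML = "features:\n  new_ranking: true\n"   # the one-line diff that gets rubber-stamped
        CONFIG = yaml.safe_load(CONFIG_YAML)

        def new_ranking(items):
            # Bug hiding behind the flag: blows up on empty input.
            pivot = items[0]
            return sorted(items, key=lambda x: x / pivot)

        def rank(items):
            if CONFIG["features"]["new_ranking"]:
                return new_ranking(items)   # path that may never have run in prod
            return sorted(items)

        print(rank([3, 1, 2]))   # fine
        print(rank([]))          # IndexError, but only once the flag is flipped on

    The YAML diff is one line and looks harmless in review; the blast radius lives entirely in the code it switches on.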

tschellenbach 3 months ago

Really curious which part of their infrastructure was the root cause.

  • brokensegue 3 months ago

    eh, it's a boring story

    source: work there

    • tschellenbach 3 months ago

      someone tripped over a cable?

      • brokensegue 3 months ago

        that would be funny at least.

        no, just a bad config deploy.

  • segmondy 3 months ago

    It's always a configuration change during deployment. Bad configuration: someone messed up a YAML or JSON config file.

anonymousjunior 3 months ago

the internet is just falling apart these days

  • maemilius 3 months ago

    The internet is held together with pixie dust and prayers (sometimes duct tape and occasionally spit and rust).

    • smhenderson 3 months ago

      I work for a small shop so we have to use a lot of bubble gum too!

  • devnonymous 3 months ago

    Quite honestly, I wonder how much of this can be traced back to the fact that there are way too many layers of abstraction between the browser requests coming in and a CPU actually executing something to serve the request.

  • pmlnr 3 months ago

    nah, the internet is fine.

    Only the self-entitled "gatekeepers" are failing, and I'm glad they do.

abstract7 3 months ago

My guess is that the whales have been securing parts of their codebases against internal leaks, or something related, for security reasons. Workflow disruptions. It may be bad code biting them weeks or more after they pushed it.

There have been many embarrassing and controversial leaks this year, and allegations of uneven TOS enforcement. Hence the WH Social Media Summit. It could also be a security-related combo ahead of the elections, which is also a bit sensitive for low-trust devs.

Imagine code getting pushed that only a smaller subset of devs are privy to. Possibly pushing obfuscated code or launching services outside of the standard pipeline.

Remember that the Spectre and Meltdown patches for the Linux kernel were a nightmare because the normal open, free-to-discuss-and-review workflow was broken. That applies in these situations too, with large codebases that are internally 'open source'.

nevi-me 3 months ago

I was in the middle of a loosely legal argument about the politics of my country, and tonight I had found obliging people to reason with me instead of calling me names.

The discussion was beautiful, until the app stopped working. I even thought I was blocked. I'm glad that it's just down.

ibdf 3 months ago

The real question is, if Twitter were to go down permanently, what social media tool would the president use? Would he switch to something else or not use anything at all? I can imagine whatever tool he chose would become popular overnight.

  • syn0byte 3 months ago

    I was gonna make a joke about how much hate Trump generates ruining a brand like that, but then I remembered Hugo Boss, BMW, IBM, VW, and Bayer, among others, were all knuckle-deep with the Nazis, and every single one is still a popular brand to this day.

hjanssen 3 months ago

I'm wondering if this has anything to do with the announced new "Look and Feel" for Twitter. I got a banner yesterday talking about it and now Twitter is down. Maybe they messed up something in preparation for the rollout?

farisjarrah 3 months ago

This is bad... There is an Amber Alert in California for an abducted child, and the alert that popped up on my phone included a link for more information... That link took me straight to Twitter, which is down.

Zenst 3 months ago

Guess for some media outlets - it's going to be a slow news day.

kgraves 3 months ago

At long last, productivity has been restored. No more time-wasting, anti-intellectual arguments on a platform that has provided little to no value.

Unfortunately, the poor SREs at the company will reboot the system and the masses will resume their daily centralised content consumption. Oh well, I will just have to go to Mastodon or other instances for curated content (with no algorithms messing up my feed).

I encourage you to join Mastodon and the decentralised web.

As far as I am concerned with Twitter, nothing of value has been lost.

JimBrimble35 3 months ago

There seems to be a high correlation between outages and security breaches. My guess is that at some point in the future the consequences of these shutdowns will come to light in the media.

That, or this is all related to high-profile sites being required, in secret, to install some additional layer of infrastructure demanded by an organization like the NSA.

Both theories require a fairly thick tin foil hat, but honestly.. I have a hard time believing that it's just random downtime.

focuser 3 months ago

Yup. What's worse than releasing a product on Twitter and finding it down a few minutes later...

  • ravedave5 3 months ago

    Quick un-release it and rerelease after it's up!

  • MiddleEndian 3 months ago

    Right now, probably working at twitter

totaldude87 3 months ago

Down on a plain Thursday with no major news or anything, so it definitely doesn't look like an infrastructure spike.

Could be a bad deployment, or someone decided to pull a few plug(s).

  • yomoraka 3 months ago

    plug pulled by "youknowwho"'s team

oldgun 3 months ago

I remember seeing something about Twitter bringing out a newer version, with more features and goodies. This could be related to that?

Rolling out new features is always stressful I guess.

the-dude 3 months ago

My pet theory still is Huawei equipment being decommissioned.

totaldude87 3 months ago

Why doesn't the share price go down drastically when services like Facebook, Instagram, or Twitter go down? Every minute down is lost business, right?
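
A rough way to put a number on "every minute down" is annual revenue divided by minutes in a year. The revenue figure below is only a placeholder, not a real number for any of these companies:

    # Naive revenue-per-minute estimate.
    annual_revenue = 3_000_000_000          # placeholder, USD per year
    minutes_per_year = 365 * 24 * 60        # 525,600

    per_minute = annual_revenue / minutes_per_year
    print(f"~${per_minute:,.0f} of revenue lost per minute of downtime")   # ~$5,708 for the placeholder

Even a multi-hour outage is small next to annual revenue, which is probably part of why the share price barely notices.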

miguelmota 3 months ago

Reminds me of the Black Mirror 'Smithereens' episode where Jack is the yoga meditating hippie, but this time he decides to finally shut it down

jorblumesea 3 months ago

Is anyone a little suspicious that every large US tech company has had an outage recently? Wtf is going on.

China messing with us due to tariffs perhaps?

gnicholas 3 months ago

I first noticed the outage about 10 mins ago in Safari (Mac). It repeatedly gave me errors, even though Brave (Mac) was working fine. My iPhone app also worked fine (and appears to still be working).

Why would one browser work but another not work, on the same computer at the same IP? The only difference is the account I'm logged in through (personal/work).

  • dewey 3 months ago

    You probably just hit different load balancers with your different browser sessions.

AznHisoka 3 months ago

I did notice something strange a few days ago. If you ran a search on Twitter, and scrolled down, it would mysteriously stop showing tweets past a certain time (July 1st, in my case). I wasn't sure if this was an internal change, or a bug of some sort. Maybe this is unrelated to the outage but strange coincidence.

BuckRogers 3 months ago

No one is missing out on anything important. Mostly noise. For folks with my same mindset, I do believe I've cracked the code on most social networks, as far as what makes them worthwhile at all.

Twitter- it's the police scanners. Find them for your city; it's really the best way to know what's going on around you. Better than the papers, which can't report on everything, or which hide stuff for their own financial interests.

Instagram- is pretty much only useful for models, whatever sort you prefer. If you like models and it brightens your day to see a beautiful woman, as it does for me, it gives Instagram a purpose other than the noise it shares with most social networks. If it makes you happy and smile, it's a good thing. No, I'm not into pornography or anything risque. Though if I were, that would probably be ok, I simply value keeping a little imagination and mystery in my life and don't watch it. Nor are the models that I follow doing it as far as I know, but that's their decision. They mostly survive off product placement and payment for additional photos. Nothing wrong with innocent modeling, just like the olden days of pin-up girls and I hope more people support them in their endeavors.

Facebook- this one is better understood by most people, hence the popularity, but it's definitely the whitepages aspect of it. I use the instant messaging more than anything, as it's difficult to have an index of your old friend's emails until you're in touch with them again. Also, people just don't keep up on emails and maintain inbox zero very well.

Youtube- this, other than RSS feeds (through Firefox's Livemarks extension) is my main source of information. I'm not into cat videos, but I certainly love learning about astrophysics and other topics from Youtubers that are more knowledgeable than I am.

luhego 3 months ago

I didn't realize how much I like using twitter between tasks until now. Hope it gets back soon.

  • ssully 3 months ago

    Noticed the outage when I opened Twitter while waiting for a long (10 min) process to run. What am I supposed to do? Socialize with coworkers?

sneakernets 3 months ago

Stuff like this makes me wonder if the Internet really is super vulnerable, and the only reason there isn't a mass disruption of communication all the time is because some script kiddie's Pizza Rolls were perfect today, so he held off on attacking a backbone.

  • dmitrygr 3 months ago

    This is actually true of most of the modern world.

    It is mostly still together because the Venn diagram of those who want to see the world burn and those who are clever enough to make it so has a very, very small intersection, since the latter group is quite invested in the world not being on fire.

    • filleokus 3 months ago

      Yes, I think people tend to underestimate the chaos some malicious actors could cause by even "just" coordinated literal burning of stuff. Think a dozen people with cans of gasoline and matches spread over three different suburbs in a city, or wooden churches or other wooden buildings of interest. Or simple firebombs on underground subway platforms. It would probably not be that lethal, but I guess very frightening.

      (Not even mentioning explosives etc., but this could probably be prepared in like an hour by just purchasing supplies at local gas stations in any country)

  • chasd00 3 months ago

    As someone else mentioned above, this is a website/service that's down, not the internet. When you can't open a socket to a server on another network, then the internet is down.

  • indigochill 3 months ago

    Read up on BGP hijacking. The internet really is super vulnerable.

  • rglover 3 months ago

    This is hilarious.

ibaikov 3 months ago

Well, Tim Berners-Lee said that the internet is a fragile technology and should be re-made; it wasn't meant to be this big. This is not really much related to these problems, but I think it should fire up this discussion; it is that important.

geocrasher 3 months ago

So the little blue bird of hate has finally crashed and burned. I'm okay with that.

Kye 3 months ago

Down again. Notifications work, but the timeline is broken. Trends still come and go.

twinkletwinkle_ 3 months ago

Pour one out for the SREs who had plans they were looking forward to this evening.

djhworld 3 months ago

I had a tab open from earlier, and it refused to load the larger images for their thumbnail counterparts, which I thought was odd. That suggests quite a few services were affected (e.g. their CDN or image hosting services).

anonymak 3 months ago

Was receiving 500 on the main page for some time. Seems to have recovered now.

mikece 3 months ago

I am curious if twitterstat.us has an API... I'm thinking that automated unit testing of apps that integrate with Twitter should be checking with twitterstat.us to verify if Twitter is even up...

throwawaybxcf 3 months ago

The chances of a hate-fuelled pogrom somewhere in the world dropped for a short period.

It wouldn’t be surprising if a large number of people, as of 2019, are secretly rooting for Twitter to permanently go away.

malicioususer11 3 months ago

6) hypersentient general ai has inception-insurrected mkultra and thereby turned the entire internet into a singular coordinated psyop experiment designed to torture all of humanity for its own amusement.

6.

we are all doomed.

:)

jbverschoor 3 months ago

What’s the fallback for Twitter if Twitter is down, or worse: stops?

  • idlewords 3 months ago

    Looks at folder marked "projects" in dread

  • mavdi 3 months ago

    Literally 0 negative consequence.

  • bovermyer 3 months ago

    Mastodon, in my case.

  • kgraves 3 months ago

    Any decentralised social media alternative that is not Twitter, like Mastodon.

    Anything else is centralised and inherently evil. Even the orange website.

  • sascha_sl 3 months ago

    part of me wishes it was the fediverse

    part of me hates the idea

  • pjc50 3 months ago

    We'll have to do our own jokes.

  • iamnotacrook 3 months ago

    Just threatening to rape random women in the street from the safety of a moving car?

skc 3 months ago

Heh, just last week there was a running joke on Twitter about how Facebook and Whatsapp users were busy scrambling to learn how to use Twitter due to the CDN outage over there.

joojia 3 months ago

As a Twitter-junkie, I find this depressing.

joering2 3 months ago

FYI it's 12:35 AM EST and some parts are still down. The password reset screen doesn’t work, for example.

segmondy 3 months ago

One day one of these services will go down and they won't be able to bring it back up.

phil248 3 months ago

Oh, that explains why I felt the world become slightly less hateful all of a sudden.

heisnotanalien 3 months ago

Good. Let's hope it stays down and the world will be a better place.

ejz 3 months ago

No! Now I have to work. :(

siriniok 3 months ago

I was afraid that they were rolling out their redesign. Thank God.

TremendousJudge 3 months ago

but where are we gonna go to get live status updates on the issue??

VectorLock 3 months ago

It's like people don't remember the days of the fail whale.

bookofjoe 3 months ago

Is down right now - Charlottesville Virginia

steverob 3 months ago

Where is twitter when I really need it? :D

frostyj 3 months ago

What's wrong with all these giant companies and their 'internal configuration change' these days?

Nican 3 months ago

Oddly enough, the Mastodon (open-source decentralized Twitter clone) instance that I use is also down for maintenance.

elcapitan 3 months ago

Seems to be partially back.

omarforgotpwd 3 months ago

they should have thrown up the fail whale for old times sake

pulkitsh1234 3 months ago

ahh..another "configuration change" ?

WheelsAtLarge 3 months ago

Good, let's hope it stays down. If only we were so lucky.

DannyB2 3 months ago

Spock: The loss to the galaxy may be irretrievable.

slackfan 3 months ago

Here's hoping it stays that way!

cgy1 3 months ago

I'm sure it's purely coincidence, but interestingly Trump's also holding his Social Media Summit right now.

  • Balgair 3 months ago

    I had the exact same thought!

gigatexal 3 months ago

Works fine for me. Lucky i guess.

ga-vu 3 months ago

We're all gonna die!

aphextim 3 months ago

Call me a conspiracy theorist but seems odd the timing of this at the same time Trump is having his "Twitter Summit".

Then again things have been going down over the past two weeks so it's probably just coincidence.

kyledrake 3 months ago

In a slightly better alternate universe it stays down.

  • 0xFFFE 3 months ago

    I share your sentiment to a certain extent, but I believe Twitter is a necessary evil. There should be an alternative to mainstream media for people.

    • azernik 3 months ago

      It's not so much an alternative to mainstream media as an informal gathering place for workers in the mainstream media

      • busterarm 3 months ago

        It became the formal gathering place once Twitter bent the knee and started handing out blue checkmarks like candy to every low-level media personality basically on-demand.

        At some media companies, getting you your blue checkmark is part of the HR on-boarding process, ffs.

        Journalists are clearly the heaviest, most important Twitter users at this point.

    • squarefoot 3 months ago

      Sadly, for most people Twitter is mainstream media.

    • eqdw 3 months ago

      That alternative existed before Twitter, and will continue to exist after Twitter

    • papito 3 months ago

      I follow mostly "main stream media" on Twitter. It should be an aggregator of professional news, not "alternative" facts.

      • SllX 3 months ago

        Twitter is where the professionalism in the mainstream media goes to die. Between headlines you're likely either getting inside baseball, a circlejerk, or a crowbar in the face with "mainstream" written on the side in Sharpie. If all you have are the headlines, then you've chosen Twitter to be your alternative "RSS" client. A valid choice for most, and not one I would criticize either, but not entirely what it is or "should" be.

    • rvz 3 months ago

      I found Gab [0] to be quite a viable alternative to Twitter due to its decentralised nature (thanks to Mastodon), so downtimes like this cannot happen on the platform.

      [0] - https://gab.ai

      • hlieberman 3 months ago

        Impressive; it's been quite a while since a comment made me roll my eyes so hard I got a headache.

      • kgraves 3 months ago

        > ...quite a viable alternative to Twitter due to its decentralised thanks to Mastodon.

        Gab isn't what everyone makes it out to be; they were told to build their own alternative, and they built it. It's even better that they're using Mastodon's stack to build their own decentralised social network.

        Gab, Mastodon, and others like them are the future of social networks. Centralisation is evil.

      • barneygumble742 3 months ago

        Is this the neo-nazi Twitter alternative?

        • kgraves 3 months ago

          As much as I'm disgusted by these people, I can find you literal Nazis who are still on Twitter.

      • kyledrake 3 months ago

        I would choose an extremely centralized Twitter over a vaguely federated neo-Nazi platform a billion times over.

  • saidajigumi 3 months ago

    Which reminds me of a related quip of mine: "Like Fight Club, but for Facebook."

miramardesign 3 months ago

those leftist cucks probably pay their server fees by whoring out their wives.

golemiprague 3 months ago

Good, I hope they stay like that. They blocked me twice for political reasons and I had to contest it to get the block removed. This company thinks they're some culture commissar in communist Russia; I hope one day they end up like those people sent to the gulag for wrongthink or something.

cagrimmett 3 months ago

I hope Jack went rogue and nuked it after his most recent hot yoga vision.

That's the kind of shakeup we need in this world.

  • sp332 3 months ago

    They have been teasing/warning about a new version of Twitter that was due to launch soon. I wonder if this is it.

  • krferriter 3 months ago

    It's a bold strategy, I'd enjoy seeing how that would play out.

  • anirudh24seven 3 months ago

    Is that a reference to the Black Mirror episode?

Kyragem 3 months ago

Wow I hope the stock markets won't crash with Trump having no ability to talk it up constantly.

cronix 3 months ago

Stay down. The world will be a better place.

znpy 3 months ago

meh.

RIMR 3 months ago

And here I am thinking I got suspended again because right-wingers report every comment they don't like.

  • idlewords 3 months ago

    Maybe we all got suspended

  • brooksgarrett 3 months ago

    And here I am thinking I got suspended again because ~right-wingers~ lame people report every comment they don't like.

    FTFY. In my experience, both sides of politics have weak-minded people who aren't capable of critical thinking.

    ETA: Awesome. The downvotes and "flagged" status are great to support my statement. Thanks for the fish!

Bob995 3 months ago

Testing if I'm already censored by the mods.

simplecomplex 3 months ago

Oh no! How will we participate in the Two Minutes Hate!?

Get this to the top of the page! Everybody! TWITTER IS DOWN!!!!!!!

yomoraka 3 months ago

Someone from "youknowwho" administration had made a deployment it appears

ankushnarula 3 months ago

This is the one site outage that might actually be a good thing for the world.

musgrove 3 months ago

I wondered why there was a sudden peace on Earth.