827a 2 days ago

> The plan, he writes, is for GitHub to completely move out of its own data centers in 24 months.

I find it interesting to compare timelines like this (which is very reasonable and expected for an organization of GitHub's size) with, for example, how AI 2027 describes the world looking in October 2027.

In the next 24 months, if all these timelines are to be believed, AI will have cured cancer; agent-5 will be plotting to kill all humans, leveraging all the data in a Global Central Memory Bank to subvert the internal corporate politics of all companies, governments, and militaries toward this goal (these are all real predictions AI 2027 makes); and GitHub will still be migrating workloads to Azure.

Maybe they should get agent-4 to help them.

  • ameliaquining 2 days ago

    This discrepancy is precisely the reason (or at least one reason) why AI 2027 hypothesizes that all the interesting developments will be happening inside whichever AI lab is in the lead. The kind of AI agent that AI 2027 hypothesizes in that timeframe could do the migration in much less than 24 months, but only if the organization completely changes how it works internally so that everything is driven by the goal of exploiting the AI's capabilities to the maximal extent. Microsoft/GitHub probably can't do that that quickly.

    • 827a 2 days ago

      You're certainly correct that they couldn't, though I don't feel that saying "this is why all the interesting developments will be happening inside AI labs" is a fair summary of the AI 2027 paper, as it makes many wide-ranging statements about how AI will impact the military, government, medical research, typical corpo-politics, software engineering, and, of course, AI research.

      The realization that you're close to making, and that I hope I can help you make: if Microsoft and GitHub can't realize the benefits of AI that quickly, why should anyone believe that the rest of the world would be able to? After all, there are roughly zero "pre-AI" companies that are force-mutating their structure to adopt AI faster than Microsoft is [1].

      [1] https://www.theverge.com/tech/780946/microsoft-satya-nadella...

      • ameliaquining 2 days ago

        The idea in AI 2027 is that, if even one company can realize the benefits of AI that quickly, that's enough to change everything. Partly because of feedback loops where powerful AI is used to accelerate AI capabilities, and partly because we've already seen OpenAI's customer base go from zero to, like, everybody, in the blink of an eye. (This is not an attempt to weigh in on the ongoing controversy about OpenAI's financial sustainability; rather, the point is that we know that it's possible in this kind of scenario for a single company to attain economy-wide market penetration quickly, so that's not necessarily a big barrier to technological adoption.)

        • jononor a day ago

          Which companies, if they alone became radically more effective, would actually be able to radically change their industry? The vast majority of companies sit in value chains, where practical value creation also depends on the company's suppliers and customers. Often there are multiple suppliers and customers, and sometimes multiple levels of relationships on each side... That places constraints/inefficiencies on how quickly and how much any one company can achieve.

        • jdlshore 2 days ago

          I’ve not read the paper, but it appears to suffer from the same fallacy that AI boosters tend to suffer from: the mighty “if.”

          Yes, if AI is godlike, then the first company to leverage the machine god will be rewarded.

          But it’s not. The “benefits of AI” are a combination of placebo, automation of mediocre work, and a few modest points of leverage.

    • blibble 2 days ago

      meanwhile openai is concentrating on making spongebob squarepants police chase videos

      • transcriptase 2 days ago

        That’s not fair.

        I also saw a creepy Cat in the Hat break into someone’s home and take a shotgun round to the chest.

      • ameliaquining 2 days ago

        How much do you think that meaningfully distracted them from general capabilities-acceleration work? I think not very much, so this doesn't seem like much of an argument.

        • blibble a day ago

          if you genuinely thought you were in a race to AGI, you'd be spending every single resource on it to try to beat the competition

          not burning human effort and quintillions of CPU cycles on mickey mouse videos (literally)

  • phatfish 2 days ago

    Wow, that AI 2027 thing is some real dedicated OpenAI fan fiction.

    • dangus 2 days ago

      I really dislike it because some of my more doom and gloom-prone friends basically believe it as gospel.

      IIRC, in the paper itself they back up their reputation by pointing to their previous predictions, which only had a ~50% success rate.

      I also just don’t know why the paper needed to make up narrative stories as predictions instead of being more straightforward.

    • ameliaquining 2 days ago

      I mean, the OpenAI stand-in in AI 2027 is mostly not portrayed very positively.

  • vpShane 2 days ago

    We're in the Biff Tannen timeline I'm pretty sure. Things got sketch around 2012.

    So none of that is far-fetched.

  • Neywiny 2 days ago

    The difference, as I'm sure you know, is that stocks don't care about Azure migration. They care about delusions of grandeur.

ryandvm 2 days ago

Obviously this makes sense from a dog-fooding perspective because the cloud provider (Microsoft) owns the product (GitHub), but I'm always surprised when very capable tech companies decide they aren't capable of running bare metal.

Running your own servers was never rocket science; it was literally the only option 20 years ago. Every startup used to have a rack of servers in a closet.

I have always thought of cloud hosting as something you do because you cannot afford a full-time ops team, so it's wild to me that companies like Netflix decide that they literally don't have the operational expertise to manage servers.

  • dnadler 2 days ago

    It’s not that they don’t have it. It’s that they don’t want it.

JohnMakin 2 days ago

I've never done a migration at this scale, but I have seen infra at similar scale, and I can't imagine how difficult this will be in a 12-month period. How big are GitHub's ops/dev teams? That seems like a really unrealistic target to me. I expect outages.

  • ndiddy 2 days ago

    > I expect outages.

    With GitHub's service record, that means there should be no observable difference between them doing the migration and them operating as usual.

  • trenchpilgrim 2 days ago

    When I was on infrastructure at Adobe, similar migrations took around 8-9 months (e.g. expanding into Azure, modernizing our datacenters, switching to Kubernetes).

    • jmull 2 days ago

      The quote says the effort is to move "completely".

      I think there will probably be a long tail which will prevent that from happening so quickly.

      (It also probably doesn't really matter... if their main goal is to scale using Azure, they really only need the stuff that will be scaling up to be there. They probably also want to be seen as eating their own dog food, which can reasonably be achieved without all of the long tail.)

  • zulban 2 days ago

    Maybe. Though I would expect the devops practices and automation effectiveness of GitHub's internal projects to be far above those of your average shop.

  • tstrimple 20 hours ago

    In practice, I'd expect the majority of servers to go through a tool-based lift and shift like Azure Migrate. That's what we're using to migrate around 6k state government servers to the cloud. Where there are opportunities for low-hanging modernization, we'll take that route, like migrating to SQL Managed Instances rather than running SQL in a VM.

nikolay a day ago

Okay, I get it, but they also abandoned maintaining their Terraform provider! What a joke! They don't even want to open its maintenance and development to the community! How can you manage hundreds or thousands of repos manually?! It's always been a terrible provider - slow, buggy, and severely behind the latest GitHub features - but now it's literally dead! They openly claimed to be focusing their energy on API development, and only once the API is fixed will they resume working on the provider. This is unacceptable!

__turbobrew__ 2 days ago

Running your own physical infrastructure is hard, so it makes sense to me that GitHub should benefit from the economies of scale of Azure. Given that the biggest downside of running in a public cloud is cost, this is a non-issue for GitHub, as they will be vertically integrated with Azure and will receive infra at cost.

tracker1 2 days ago

I'm kind of neutral on this... It was pretty much expected ever since the MS acquisition, and my biggest surprises are that it didn't come sooner and that they're making the relatively sane choice to clearly prioritize getting the environment shifted instead of juggling multiple "priority" projects and features.

gdulli 2 days ago

This is good news, right? People complain about them having bloated it with too many features. If this keeps them from making it even more of a ridiculous AI editor rather than something that complements an editor, that would be great.

1una 2 days ago

Does this mean GitHub will finally support IPv6?

  • jiggawatts 2 days ago

    Azure would have to fix their IPv6 support first, so that it's no longer “mostly broken” or, alternatively, “exists only to tick a compliance checkbox.”

blibble 2 days ago

> “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.

yes, the addition of un-disableable "AI" features made me spend a large amount of time and effort moving every single one of my projects off GitHub

nitwit005 2 days ago

I essentially don't want any more GitHub features, so this sounds totally positive.

  • avtar 2 days ago

    It would be great if Actions received some polish. How are people applying org/team-wide policies such as job timeouts to avoid burning through monthly limits? Last I checked this wasn’t a feature :/

    • mhitza 2 days ago

      You can draft up your own JSON schema and apply it as a validation check when PRs are created, or just use a proper programming language for the YAML checks (rough sketch below).

      You can pair that up with a template repository for the org and everyone can start new projects from that template (which contains the workflow to check future workflow files).
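      For illustration, here's a minimal sketch of that kind of check in Python (my own toy example, not anything GitHub ships): it fails a PR if any job in .github/workflows is missing timeout-minutes or exceeds an org-wide cap. The script name and the 60-minute cap are made up; the only dependency it assumes is PyYAML.

          # check_workflow_timeouts.py -- hypothetical org policy check, run from a
          # PR-triggered workflow or pre-commit hook at the repository root.
          import sys
          from pathlib import Path

          import yaml  # PyYAML

          MAX_TIMEOUT_MINUTES = 60  # example cap, adjust to your org's policy

          def main() -> int:
              problems = []
              # Look at every workflow file (.yml or .yaml) in the standard location.
              for wf in Path(".github/workflows").glob("*.y*ml"):
                  doc = yaml.safe_load(wf.read_text()) or {}
                  for job_name, job in (doc.get("jobs") or {}).items():
                      timeout = (job or {}).get("timeout-minutes")
                      if timeout is None:
                          problems.append(f"{wf}: job '{job_name}' has no timeout-minutes")
                      elif isinstance(timeout, (int, float)) and timeout > MAX_TIMEOUT_MINUTES:
                          problems.append(
                              f"{wf}: job '{job_name}' timeout {timeout} exceeds the "
                              f"{MAX_TIMEOUT_MINUTES}-minute cap"
                          )
              for p in problems:
                  print(p, file=sys.stderr)
              # Non-zero exit fails the check when any job violates the policy.
              return 1 if problems else 0

          if __name__ == "__main__":
              sys.exit(main())

      Wire something like that into a required status check on PRs and the policy at least gets enforced at merge time, even though Actions doesn't enforce it natively.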

      If you want assistance with organisation-wide code policies, I can help and my email is in my profile.

      • avtar 2 days ago

        Templates and config checks are what I ended up with, but it would be nice if this were enforceable out of the box. This was just one example that came to mind where the experience feels rough and requires workarounds: https://github.com/orgs/community/discussions/14834 Perhaps it's a case of the grass being greener elsewhere and my patchy memory, but I'm almost missing Jenkins...

jasonthorsness 2 days ago

It makes sense, given that "CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center" and that Azure has a decent setup for AI support infrastructure and base virtual machines.

But having gone through a data center migration, and depending on how "unique" some of their existing setup is, I do not envy them this process (and I estimate this will take double the expected time :P).

  • sam_lowry_ 2 days ago

    Vladimir Fedorov just joined GitHub from Meta a few months ago. What does he know?

    • phatfish 2 days ago

      At the very least it seems like someone had told him "we are constrained on capacity in our Virginia data center".

aaronbrethorst 2 days ago

> While GitHub had previously started work on migrating parts of its service to Azure, our understanding is that these migrations have been halting and sometimes failed

And there's no reason to suspect this next batch of migrations will be any different. Telling your engineers, 'good luck, you get to spend the next 18 months treading water,' is a terrible way to get them to give their best or even stick around.

  • rufo 2 days ago

    I think sometimes the migrations were halted more because MSFT wanted to hold off. Microsoft makes more money selling Azure outside the company, and they needed more power for GPU build-out once LLMs and AI started becoming one of Microsoft's Things™.

    That said, the difficulty of the work was absolutely also a factor in deciding not to carry through with earlier migrations, so your point still stands as a whole IMO. It's just that now solutions will be found for blockers and engineers will be kept on it, rather than efforts stalling out and being put on hold.

mgdev 2 days ago

This is, as they say, "The beginning of the end."

  • tyleo 2 days ago

    Beginning of the end of what? If I could have placed a bet on “Will GitHub move to Azure?” a few years ago, I would have thrown money down.

    This has seemed inevitable since the acquisition, and it's not necessarily a bad thing. I see it as neutral.

    • tacker2000 2 days ago

      The point is that they are prioritizing this over new features.

      But since “new features” consists primarily of shoving the bloody Copilot agent down everyone's throat, it might not be such a bad thing.

      • dmix 2 days ago

        That plus the new React diff viewer in beta. The old one seemed to be a simpler Web Component inside a Rails turbo frame.

        I've tested the beta one, and like most SPAs it doesn't scale well to large amounts of data (large numbers of files / line counts). You can feel the DOM slowing down even on a high-end MacBook. It even blanked out the page a couple of times, another common issue when browsers are overloaded. So I switched back to the old one.

        • dmart 2 days ago

          The new one also doesn’t consistently snap to a specific line in the URL fragment if the diff is too large, which makes sharing links problematic.

      • torgoguys 2 days ago

        > The point is that they are prioritizing this over new features.

        Good! Shoring up infrastructure rather than delivering the latest hotness is something that's rarely prioritized. I'll take boring and reliable every day of the week.

        • tacker2000 2 days ago

          Fair point, but I believe they are just migrating for the sake of pleasing their MS overlords.

          Does anyone know what infra they are running on now? AWS?

      • dbbk 2 days ago

        You would be a fool to think the Copilot Coding Agent is not their most important feature at the moment. It's not particularly great, but it must become so.

    • walkabout 2 days ago

      The infrastructure behind serving git repos the way they do is pretty fiddly—I'd not be a bit surprised if this move reduces stability and/or performance.

      • stackskipton 2 days ago

        Sure but it also might make them fix some of that.

        • walkabout 2 days ago

          No, I mean inherently so. It's basically a whole stack of caching problems.

  • driverdan 2 days ago

    That started with MS and accelerated with Copilot. Word is that GH leadership doesn't care about anything other than Copilot/AI. All other features are receiving far less focus and fewer resources. I've heard this repeatedly from current and former employees.

  • aaronbrethorst 2 days ago

    nah, I'd say we're well past that. The beginning might have been Microsoft's acquisition of GitHub. Or the elimination of GitHub's independence.

    • rufo 2 days ago

      IMHO: the acceleration toward the point of no return was when Microsoft decided to go hard on AI and saw GitHub's Copilot as one of the key inflection points they were going to use to do so - even going so far as to adopt the Copilot brand across the entire company.

      Before that, it still felt like there was _some_ degree of autonomy and ability to think about the developer experience on the platform as a whole. Once ChatGPT took off and MSFT decided that they were going to go hard on AI, though, Copilot (and therefore GitHub) became too important to Microsoft to leave alone.

      I kinda suspect the slide was inevitable anyway, given how acquisitions tend to go. But IMO, Copilot was the tsunami that washed the octocat out to sea.

  • bediger4000 2 days ago

    It does remind the oldsters of Hotmail.com

iamleppert 2 days ago

I hope they are prepared for lots of headaches, random outages, slow (did I say S-L-O-W) tooling and infrastructure, terrible access to GPUs, and pricing at least 2-3x higher than any other cloud. Support is staffed by overseas teams in India who drag every interaction out and just wear you down until you give up.

  • mindcrash a day ago

    > Support is staffed by overseas Indians who drag every interaction out and just wear you down until you give up

    You really think they, like most cream-of-the-crop MSFT partners, get support out of India?

    Try directly from the teams in Redmond.

lousken 2 days ago

will they finally enable SSO with Entra for everyone after that?