crazygringo 4 minutes ago

I was going to say that I was surprised that enough "normal" users had heard about the Pentagon news story that it would make a difference.

Then I remembered that the app store rankings [1] seem to be based on activity from just the past day or so.

And so a lot of "plugged-in" users switching to Claude all at once would be enough to briefly send Claude to #1, since the migration would be sizeable compared to the normal daily download baseline.

But we can also expect that this would probably be just a blip for a couple of days, as it's unlikely to make much difference in the baseline ratio for the general population.

[1] https://apps.apple.com/us/iphone/charts

dmix 14 minutes ago

I cancelled ChatGPT and downloaded Claude mobile one day before the controversy because I decided to go all in on Claude Code. The chat app isn't quite as good as ChatGPT; it feels like a year behind in integration with stuff like web search and retrieval. But it does the job.

evan_ 10 minutes ago

It’s irritating to me that on iOS at least I can’t be logged in with both my work AND home Claude instances in the native app. I instead have to log in to the app with one and use a desktop shortcut to claude.ai for the other.

  • brokencode 3 minutes ago

    Yeah, being able to switch quickly between accounts is my number one request for the iOS app right now. Logging in on the browser is a good idea, though.

jazzyjackson 17 minutes ago

So dumb, I have to think it's astroturfed. Anthropic states they're not willing to help with autonomous weapons because it's not good enough yet, and suddenly everyone cancels their OAI subscription? I just don't get it

  • softwaredoug 6 minutes ago

    I think it’s from a perception (right or wrong) that OpenAI engineered this kerfuffle to hurt a competitor and cozy up to the administration

  • evan_ 12 minutes ago

    Most people’s LLM use cases don’t involve autonomous weapons

    • dmix 6 minutes ago

      Do any autonomous weapons have an LLM usecase yet? As opposed to say, specialized visual ML stuff that can fit on a small portable system.

      The DoD office staff using it for information analysis, similar to Palantir providing data integration software, isn't quite the same as using it for weapons.

      • charcircuit a minute ago

        >As opposed to say, specialized visual ML stuff that can fit on a small portable system.

        What do you think writes the code for that and trains the models?

  • foolfoolz 10 minutes ago

    “everyone” could be one person. these internet reported movements rarely have meaningful impact

  • giancarlostoro 11 minutes ago

    Some might be, but don't be so sure. There's a lot of hatred in our country over politics, and sides don't matter; all sides can be quite hateful.

  • SpicyLemonZest 7 minutes ago

    I can't prove there was no astroturfing, but I was personally absolutely furious and hard-deleted my account Friday evening. I'm not sure how to bridge the gap in understanding; this story was extremely radicalizing for me, and I'm genuinely hoping now that OpenAI burns to the ground.

  • smt88 14 minutes ago

    The sticking point seems to be that Anthropic didn't want to help the Trump regime spy on its own citizens (by using Claude to sift through mass-surveillance data).

canadiantim 3 minutes ago

I love Claude, but I’m personally rooting for Dick’s Sporting Goods to take the #1 spot

villgax 28 minutes ago

The sheer irony of both of them admitting to spying on the global population.

  • AmericanOP a minute ago

    The Patriot Act and spying on US citizens was a major issue a decade ago.

    The prevalence of data and a general sense of powerlessness mean this won't get talked about much, sadly.

  • bpodgursky 4 minutes ago

    The sheer naivete to think the United States has any legal or moral obligation to respect the privacy of the global population.

  • smt88 13 minutes ago

    When have the US govt and its allies ever been secretive about spying on non-citizens? We literally have agreements with European allies to collaborate on it.

  • api 19 minutes ago

    If it’s in the cloud it’s not your computer. I assume there is no privacy in the cloud unless it’s locally encrypted and only stored there.

k310 an hour ago

I read the terms regarding mass surveillance and autonomous weapons. Both OpenAI and Anthropic had similar holdbacks AFAICT. [0]

> The Pentagon has agreed to OpenAI's rules for deploying its technology safely in classified settings, though no contract has been signed, a source familiar with the talks tells Axios.

> Why it matters: The Pentagon has blasted OpenAI rival Anthropic for days, contending its red lines for AI use in the military -- mass surveillance and autonomous weapons -- are philosophical and "woke."

> Now, the department, which did not immediately respond to a request for comment, appears to have accepted OpenAI's similar conditions.

One big difference

OpenAI exec becomes top Trump donor with $25 million gift [1]

I guess that makes it "less woke". This is reprehensible political bullshit.

[0] https://archive.ph/9NcMf#selection-579.0-611.135

[1] https://www.sfgate.com/tech/article/brockman-openai-top-trum...

  • softwaredoug 5 minutes ago

    “ though no contract has been signed”

    Is doing a lot of work here

  • xvector 13 minutes ago

    No, OpenAI has consented to "all legal use" whereas Anthropic required the redlines to be in the ToS itself, regardless of what's legal.

    So for all intents and purposes the DoD can simply ignore OAI's redlines.

  • rvz 40 minutes ago

    This is hardly surprising given that OpenAI already signed a military contract. [0]

    Where was the open letter then? No-one cared.

    Recently, Claude was already used by the administration for the operation in Venezuela [1] alongside Palantir. Anthropic did nothing at the time and again...

    No-one cared.

    Now everyone cares when Anthropic finally said No? The decision for the contract was already predetermined for OpenAI, even with the open letter.

    So the question is, why wasn't the open letter against OpenAI done last year when they signed that first military contract?

    Either way, it seems that OpenAI and Anthropic were both OK with the US government using their models for warfare, so really there is no point in defending either of them, or even the employees who knew beforehand.

    [0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...

    [1] https://www.theguardian.com/technology/2026/feb/14/us-milita...

    • sigmar 9 minutes ago

      >Now everyone cares when Anthropic finally said No?

      DoD started asking for the ability to do more stuff. That's the issue here. "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance... Fully autonomous weapons." https://www.anthropic.com/news/statement-department-of-war

      >So the question is, why wasn't the open letter against OpenAI done last year when they signed that first military contract?

      again, this story isn't about people that are against any military contract.

    • lotyrin 27 minutes ago

      How sure are we that something fishy isn't going on with the models and the alignment research teams and the answers the model is giving? Like maybe Claude's alignment made it worse at trying to mask as Allied Magacomputer than GPT and that's why they're up in arms?

dankwizard 10 minutes ago

I went the opposite way and deleted Claude, replacing it with OpenAI. The money the US military will bring in is going to elevate ChatGPT to levels the others cannot match.

  • sigmar 7 minutes ago

    The Claude contract was only $100M/year, about 0.7% of Claude's $14B revenue run rate. Not sure we know anything about the number for OpenAI's new contract.