The shape of last week’s Pentagon drama is much clearer now, and it is not flattering to Sam Altman. What looked at first like a fast-moving policy disagreement now reads like something more calculated: Anthropic took the public punishment for refusing looser terms on domestic surveillance and autonomous weapons, while OpenAI moved into the opening almost immediately and tried to present that move as a principled act of de-escalation. Over the last seven days, the facts have moved in one direction: the original OpenAI deal left key ambiguities in place, OpenAI had to tighten it after backlash, and the backlash itself turned out to be broad enough to reach consumers, employees, and finally OpenAI’s own leadership ranks.

That is why Dario Amodei’s leaked memo hit so hard. Anthropic had refused the Pentagon’s insistence on “all lawful use” because it wanted two exclusions kept intact: no mass domestic surveillance of Americans and no fully autonomous weapons. Trump then ordered federal agencies to phase out Anthropic, the Pentagon moved to brand it a supply-chain risk, and hours later OpenAI announced a classified-network deal of its own. The sequence did not make OpenAI look like a neutral stabilizer. It made OpenAI look like the company that waited for Anthropic to absorb the blow and then stepped over the body.


The sneakiest part was the pose of solidarity

The most slippery element of Altman’s move was not that he took the Pentagon deal. It was the way he talked about taking it. In a memo to staff, Altman said OpenAI shared Anthropic’s core red lines (no mass surveillance and no autonomous lethal weapons) and framed the issue as one for the whole industry. He said he hoped to “de-escalate things.” Yet at the same time, OpenAI was still actively trying to land a Pentagon deal for classified use, and then announced an agreement that would effectively replace Anthropic in classified military environments.

That is what made the move feel sneaky rather than merely competitive. OpenAI tried to collect the moral credit of solidarity while also collecting the commercial upside of Anthropic’s exclusion. Its later public post doubled down on that balancing act. OpenAI said Anthropic should not be designated a supply-chain risk, but it also argued that its own agreement had “more guardrails” than any earlier classified deployment, including Anthropic’s, and suggested that other labs had reduced or removed guardrails. Put differently, OpenAI was trying to play peacemaker, replacement vendor, and safety champion all at once.

The first version of the deal left the key gap open

The strongest evidence that Anthropic’s suspicions were not paranoia is that the first public version of OpenAI’s Pentagon deal did not clearly close the very surveillance loophole Anthropic had been warning about. The new OpenAI agreement did not explicitly prohibit the collection of Americans’ publicly available information, which Anthropic had treated as crucial because legally purchased geolocation, web browsing, and financial data can still be assembled into a machinery of mass surveillance when AI is layered on top. Altman himself was directly asked whether he feared a later fight with the Pentagon over what counts as lawful, and he answered yes.

OpenAI’s own original write-up supports that criticism. Before the later update, the public contract language it highlighted still centered on “all lawful purposes,” with limits tied to existing law, current policy, and monitoring of private information. That is exactly why critics objected. The company’s public case rested on a confident rhetorical claim that the red lines were secure, even though the publicly described wording still leaned on legal categories and policy frameworks that Anthropic had already argued were too elastic. The problem was not that Anthropic misunderstood the deal. The problem was that the deal, as first presented, really was less clear than Altman’s spin suggested.

OpenAI had to fix the contract after the applause

Once the backlash landed, OpenAI had to go back and repair the agreement. Altman later said OpenAI was working with the Department of War to add language making its principles clearer, including that OpenAI’s services would not be used by Pentagon intelligence agencies like the NSA without a follow-on agreement.

The revised language matters because it proves the first announcement was not good enough. OpenAI later added an explicit prohibition on domestic surveillance of U.S. persons through commercially acquired personal or identifiable information, and reiterated the NSA exclusion. That is much more specific than the earlier formulation. It is also why Altman admitted that OpenAI should not have rushed the announcement and that the deal looked “opportunistic and sloppy.” A deal that has to be clarified after the victory lap was not a settled ethical triumph. It was a rushed public relations maneuver that had to be patched once people started reading the fine print.

Amodei’s memo was explosive because it named the play

That is the backdrop for Amodei’s internal note. According to reports, he described OpenAI’s messaging as “mendacious” and “safety theater,” and characterized many of Altman’s public statements as “straight up lies” and “gaslighting.” He also reportedly wrote that the main reason OpenAI accepted the deal and Anthropic did not was that OpenAI cared more about placating employees while Anthropic cared more about preventing abuses. Those are unusually sharp words from one frontier lab CEO about another, but they were explosive precisely because the subsequent week made them look less like a tantrum and more like an ugly reading of the facts.

Amodei has since apologized for the tone of that memo, and Anthropic later said the post was written within hours of Trump’s phaseout order, Hegseth’s supply-chain-risk announcement, and OpenAI’s deal announcement, and did not reflect his “careful or considered views.” But Anthropic did not retreat from the substance. It still says its only two exclusions from lawful use are mass domestic surveillance and fully autonomous weapons, and it is still challenging the supply-chain-risk designation in court. The apology trimmed the rhetoric. It did not erase the core accusation that OpenAI used the chaos of Anthropic’s punishment to present an inferior, or at least unsettled, compromise as if it were a principled resolution.

The backlash was immediate, measurable, and unusually severe

The backlash against OpenAI was not just a few angry posts. ChatGPT app uninstalls in the United States surged 295 percent day over day immediately after news of the Pentagon deal. The average daily uninstall rate also jumped sharply relative to the prior month, one-star reviews spiked, and five-star reviews dropped dramatically over the same weekend. Those are not normal fluctuations for the leading consumer AI app. They are signs of a real reputational rupture.

Just as important, the backlash translated into movement toward Anthropic. Claude briefly outpaced ChatGPT in U.S. downloads for the first time. Anthropic’s growth accelerated, Claude shot to the top of app rankings, and the company saw a wave of new users. The public did not read these events as a neutral industry shuffle. A meaningful slice of users read them as Anthropic refusing and OpenAI folding.

The backlash spread from consumers to workers

The public blowback was matched by backlash inside the industry. Employees from Google and OpenAI signed an open letter expressing solidarity with Anthropic’s red lines and urging their own leadership to stand together against Pentagon pressure. The letter explicitly warned that the government was trying to divide companies by making each fear the others would cave first. In other words, workers saw the move exactly as Anthropic supporters did: not as a clean ethical settlement, but as a divide-and-conquer play that only works if one lab breaks ranks.

That backlash eventually reached OpenAI itself in a more damaging form. A senior OpenAI leader resigned over the Pentagon deal, saying the company did not take enough time before agreeing to deploy its models on classified cloud networks, that surveillance of Americans without judicial oversight and lethal autonomy without human authorization deserved more deliberation than they got, and that the deal had been announced without the guardrails being fully defined. That is the kind of internal critique that cuts through spin, because it came from inside the company and went directly to the heart of Anthropic’s complaint.

The larger pattern became even harder to deny

As more details came out, OpenAI’s role looked less like an elegant compromise and more like acceptance of the government’s underlying framework. Pentagon officials made clear that the real clash with Anthropic centered on autonomous warfare, including future missile-defense and drone-swarm use cases, and on Anthropic’s refusal to let the Pentagon bulk-collect public information on people using its AI. At the same time, officials said that Google, OpenAI, and xAI accepted the Pentagon’s “all lawful use” framework. That undercuts the idea that OpenAI found some fundamentally different moral architecture. The broader architecture stayed the same. OpenAI simply accepted it.

The government also moved toward broader procurement rules that would require companies seeking federal AI business to grant irrevocable rights for all legal uses of their systems, while Anthropic’s federal purchasing arrangements were being terminated. That is the deeper consequence of the swoop. OpenAI’s deal did not de-escalate the state’s demand for open-ended discretion. It helped normalize it. Anthropic took the punishment for saying no, and OpenAI’s response made it easier for Washington to argue that reasonable companies accept the premise and only haggle over details. That is why the move still looks so sneaky. It was not just fast. It was fast in a way that turned another company’s principled refusal into cover for a deal that had to be fixed after the fact.
