In 2023, it was easy to read Amazon’s Anthropic bet as a scramble. Microsoft had OpenAI, Azure was marketed as OpenAI’s “exclusive cloud provider,” and ChatGPT had already flooded the consumer psyche with the kind of brand gravity you can’t brute-force with a few AWS re:Invent keynotes.

Fast-forward to late 2025, and the story looks wildly different. Anthropic is no longer a “safe but smaller” OpenAI offshoot. It is increasingly the model provider that enterprises actually standardize on, especially for coding and agentic workflows. And because Amazon did not just invest cash but also wrapped Anthropic in custom silicon, purpose-built clusters, and distribution through Bedrock, AWS now looks less like the cloud that missed the first wave and more like the cloud that is about to monetize the most profitable part of the second wave.

When the Microsoft and OpenAI Alliance Looked Unbeatable

For a stretch, the generative AI landscape seemed to have a default setting: OpenAI made the frontier models, and Microsoft got the cloud spoils. Microsoft’s own announcements emphasized how tightly coupled the partnership was, with Azure powering OpenAI workloads and the relationship framed as a structural advantage rather than a casual collaboration.

That mattered because enterprise AI is not won by vibes. It is won by procurement processes, compliance checklists, and the gravitational pull of where workloads already live. If the best models and the easiest enterprise path both pointed toward Azure, AWS risked being trapped in the awkward position of “largest cloud, second-choice AI stack.” So when Amazon committed up to $4 billion to Anthropic in 2023, with $1.25 billion up front and room for more, it was easy to view that move as Amazon trying to buy a ticket into someone else’s party.

The “Coding Niche” That Turned Out to Be the Prize

Anthropic did not win the early consumer mindshare battle. ChatGPT became a verb. Claude became a tab you opened only if you were already deep in the AI space. That apparent limitation pushed Anthropic toward what turned out to be the most leverage-rich wedge in enterprise AI: code.

As market data from 2025 started to roll in, a different picture emerged. Surveys of production usage showed Anthropic overtaking OpenAI in enterprise model adoption, with Anthropic’s share pulled up by coding and agentic workflows. In those same views, Claude was increasingly described as the top developer choice for code generation, with a materially higher share than competitors in the data that enterprises actually reported.

Once coding becomes the beachhead, it stops being a niche. Coding sits adjacent to almost every reason enterprises buy AI next: IT automation, internal tools, analytics pipelines, security triage, customer support flows, and the agentic systems that weave all of those workflows together. The moment a model becomes the default in developer workflows, it becomes the default substrate for enterprise automation. That is where budgets get sticky and where long-term platform value accumulates.

Opus 4.5 and Why Benchmark Wins Suddenly Matter Again

Anthropic’s late 2025 launch of Claude Opus 4.5 changed the tone of the conversation. Anthropic did not present it as a model that was “a bit better.” It described Opus 4.5 as the best model in the world for coding, agents, and computer use and highlighted state-of-the-art results on real-world software engineering tests such as SWE-bench Verified.

SWE-bench is a useful shorthand because it is not a toy "write a function" exam. It is built around real GitHub issues and evaluates solutions by whether they actually pass the relevant tests. No single benchmark should be treated as gospel, but the direction is hard to ignore. Public leaderboards show Claude Opus 4.5 at the top end of reported resolution rates on several tracked settings, especially at higher "effort" levels where serious software work happens.
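To make the scoring mechanic concrete, here is a minimal sketch of how a SWE-bench-style harness decides whether an instance is "resolved": a candidate fix counts only if every relevant test passes. The `Instance` and `evaluate` names are invented for illustration; the real harness applies model-generated patches to actual repositories and runs their test suites.

```python
# Illustrative sketch of a SWE-bench-style scoring rule (names invented):
# each instance pairs a real issue with the repo's own tests, and a
# candidate fix is "resolved" only if all relevant tests pass.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Instance:
    issue_id: str
    tests: List[Callable[[], bool]]  # stand-ins for the repo's fail-to-pass tests


def resolution_rate(instances: List[Instance]) -> float:
    """Share of instances whose tests all pass after applying the fix."""
    resolved = sum(1 for inst in instances if all(t() for t in inst.tests))
    return resolved / len(instances)


# Toy example: one fully fixed instance, one still-failing instance.
fixed = Instance("repo#101", tests=[lambda: True, lambda: True])
broken = Instance("repo#102", tests=[lambda: True, lambda: False])
print(resolution_rate([fixed, broken]))  # → 0.5
```

The point of the all-tests-must-pass rule is that partial credit is worthless for real software: a patch that fixes the issue but breaks another test does not ship.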

What is especially telling is that Anthropic stresses efficiency, not only raw capability. The company highlights cases where Opus 4.5 matches or beats other Claude variants while using significantly fewer tokens at comparable effort settings. That matters in enterprise contexts because “best model” is only half of the purchase decision. The other half is “best model at a cost curve that the CFO can tolerate.”
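The CFO's side of that equation is easy to sketch. With entirely hypothetical numbers, the quantity that matters is cost per resolved task, not price per token, so a pricier but more token-efficient and more reliable model can still win on effective cost:

```python
# Hypothetical illustration of the purchase math: effective cost is
# dollars per *resolved* task, not dollars per token.
# All prices, token counts, and resolve rates below are invented.

def cost_per_resolved_task(price_per_mtok: float,
                           tokens_per_task: int,
                           resolve_rate: float) -> float:
    """Expected dollars spent per successfully resolved task."""
    cost_per_attempt = price_per_mtok * tokens_per_task / 1_000_000
    return cost_per_attempt / resolve_rate


# A model that costs 3x more per token but uses far fewer tokens and
# resolves more tasks ends up cheaper per unit of finished work.
efficient = cost_per_resolved_task(price_per_mtok=15.0,
                                   tokens_per_task=60_000,
                                   resolve_rate=0.75)
verbose = cost_per_resolved_task(price_per_mtok=5.0,
                                 tokens_per_task=150_000,
                                 resolve_rate=0.50)
print(round(efficient, 2), round(verbose, 2))  # → 1.2 1.5
```

Nothing here reflects real pricing; it only shows why "uses significantly fewer tokens at comparable effort" is a purchasing argument and not just a benchmark footnote.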

Amazon Did Not Just Invest; It Built the Factory

The part most simplistic “Amazon invested in Anthropic” narratives miss is that Amazon treated Anthropic less like a portfolio company and more like a flagship workload to optimize the entire AWS stack around.

By November 2024, Amazon’s total committed investment had reached $8 billion, and Anthropic had formally named AWS its primary cloud and training partner. The companies described deep technical collaboration with AWS’s Annapurna Labs team on future generations of Trainium accelerators and contributions to the Neuron software stack. This was not a simple arrangement where Anthropic rented chips. It was a co-development of the training platform itself.

Then came Project Rainier. Amazon and outside reporting described it as an enormous AI compute cluster built around nearly half a million Trainium2 chips, with Anthropic expected to scale to more than one million Trainium2 chips across AWS by the end of 2025. That is not mere “partner support.” It is Amazon constructing a dedicated industrial base for the future of a single model family.

Amazon’s Project Rainier, purpose-built for Anthropic

The Payoff: AWS Gets the Enterprise Gravity, and Amazon Gets the Upside

If you want a simple scoreboard, look at where enterprises are actually planting flags. Market updates now routinely describe Anthropic as the leading player in enterprise usage and tie that rise directly to coding and agentic workflows. Those are exactly the kinds of workloads that expand into broader cloud spend over time, because once a company standardizes on a model for its code and agents, that decision quietly dictates where a lot of its infrastructure will live.

The financial-world version of “this bet worked” is that Anthropic’s valuation surge began to show up on Amazon’s financial statements. As Anthropic raised money at ever-higher valuations, Amazon recorded significant paper gains tied to its stake, which in turn boosted quarterly profit. The equity line in Amazon’s income statement, once a rounding error, suddenly started to reflect real value from this one relationship.

The strategic kicker is that even if Anthropic is available across clouds, AWS can still win disproportionate value by being the best place to run Claude at scale. Price-performance advantages from Trainium, tight integration through Bedrock, and a home-field edge on infrastructure upgrades that ship in lockstep with Claude’s roadmap all work together. That is how you convert “we have access to this model” into “this is the platform where this model actually shines.”

The Cloud Endgame Amazon Is Actually Playing

It is tempting to frame everything as a model horserace: Claude versus GPT versus Gemini. Amazon’s real win, however, is that it has positioned AWS as the default substrate for enterprise AI work. The more agentic and code-heavy AI becomes, the more it looks like cloud-native software engineering at massive scale. AWS already excels at being the place where enterprises run cloud-native software engineering at massive scale.

There are still real uncertainties. Anthropic itself is explicit that Opus 4.5 is available on all three major cloud platforms, which is a reminder that model companies prefer not to be boxed into a single kingmaker. The broader AI infrastructure market is so capital intensive that everyone is building everywhere, and Anthropic has announced huge data center ambitions with partners beyond Amazon.

That is exactly why Amazon’s Anthropic jackpot matters. Amazon is not betting on exclusivity. It is betting on alignment. Anthropic’s frontier trajectory is increasingly aligned with the highest-margin enterprise workloads, and Amazon has built a bespoke runway of capital, chips, and clusters so that, when Claude gets better, AWS benefits.

In hindsight, the Hail Mary was not that Amazon chose Anthropic. The real swing was that Amazon chose to treat Anthropic like a first-party platform priority before the rest of the market realized that enterprise coding agents, rather than consumer chat, would be where the durable money is.
