
Davos was the tell: Anthropic is treating GPUs as geopolitics
At Davos this week, Anthropic CEO Dario Amodei attacked the idea of shipping advanced AI chips to China, calling it a “big mistake,” “crazy,” and likening it to “selling nuclear weapons to North Korea.” Taste aside, the analogy reveals how a frontier CEO now frames compute. It is no longer a cheap commodity input, but a strategic asset requiring political constraint.
From that vantage point, Anthropic’s hardware strategy stops looking like cost optimization and starts looking like political economy. Anthropic has backed proposed U.S. legislation to further restrict Nvidia’s exports to China, pushing for domestic prioritization that conflicts with Nvidia’s instinct to sell wherever policy allows. This is ideological positioning with teeth. It implies Anthropic is comfortable advocating constraints on the very vendor most frontier labs still rely on.
Amodei’s Davos comments are evidence, not rhetoric. Taking a hard line against the globalization of frontier compute is easier when your company is building credible exit ramps away from Nvidia’s stack and into hyperscaler-owned supply chains. These chains reduce Nvidia dependence, even if they do not eliminate it entirely.
The AI wars have split into two camps: compute sovereigns and GPU tenants
The frontier is splitting. Google and Anthropic are building insulation from Nvidia’s supply and pricing power. Google is vertically integrated through its TPU stack. It has explicitly stated Gemini was trained and served on TPUs, even as Nvidia GPUs continue to play a role in its broader infrastructure.
Anthropic is not vertically integrated, but it is embedding itself inside Amazon’s and Google’s ecosystems to gain real hardware optionality at scale. Nvidia remains part of the mix, but it is no longer a single point of failure.
In contrast, OpenAI and xAI are increasingly defined by near-total Nvidia dependence. OpenAI has tied its next-generation infrastructure to Nvidia systems at extraordinary scale, including a letter of intent for at least 10 gigawatts of capacity. xAI’s narrative is similarly GPU-centric, anchored in expanding its Colossus supercomputer toward one million GPUs.
This is not about who has the best model this quarter. It is about who controls the bottleneck that determines whether the next frontier system launches on schedule.
Why Nvidia dependence is a compounding trap
The popular narrative suggests Nvidia dependence is temporary: supply expands, everyone gets enough GPUs, and the constraint fades. That misreads the dynamic. As soon as capacity loosens, frontier labs simply raise their ambitions. Demand expands to absorb whatever new supply arrives, because scaling is the product roadmap.
The trap compounds because dependence is not just hardware. It is software assumptions, tooling habits, and organizational muscle memory. Even for Google and Anthropic, reducing Nvidia exposure is a multi-year effort, not a simple switch. For OpenAI and xAI, whose stacks are overwhelmingly Nvidia-native, the switching costs are far higher. OpenAI’s own efforts to build custom silicon illustrate this reality. If one of the best-capitalized labs in the world needs years to create leverage, then “we’ll just switch away later” is not a strategy. It is denial.
Davos complicates this further. With a frontier CEO arguing that advanced chip shipments should be constrained on national security grounds, compute availability is no longer a simple procurement variable. It is a contested political domain. In a contested domain, partial independence is a decisive advantage.
Amodei is building a values-aligned compute stack
If Amodei believes frontier compute must be controlled and that letting chips flow to China is reckless, then heavy dependence on Nvidia becomes misaligned, not just risky. Nvidia is a publicly traded vendor incentivized to maximize shipments. Anthropic’s incentives, articulated at Davos, are to tighten those boundaries.
That mismatch explains why Anthropic’s strategy looks like deliberate decoupling. Anthropic announced an expanded partnership with AWS, establishing Amazon as a primary cloud partner alongside a deep commitment to Trainium. Amazon’s Project Rainier is powered by massive volumes of Trainium2 chips, with Anthropic expected to use over one million of them. Nvidia GPUs are still part of the story, but they are no longer the only path forward.
Anthropic then doubled down on Google, announcing plans to access up to one million TPUs, bringing well over a gigawatt of capacity online. The combination is the point. Amazon’s silicon plus Google’s silicon equals genuine bargaining power.
This makes the Davos posture practical rather than theatrical. Anthropic can credibly support export restrictions that constrain Nvidia’s global supply because it is systematically reducing its own reliance on that supply. Ideology and practicality reinforce each other, making the move away from Nvidia durable.
OpenAI is paying the Nvidia tax in public
OpenAI remains an extraordinary company, but its Nvidia dependence is leaking into product reality. When GPT-4.5 launched, Sam Altman described it as a “giant, expensive model” and acknowledged that GPU constraints limited the rollout. When the market leader has to stagger flagship releases because of shortages, the bottleneck is real.
The company’s strategic responses reinforce this exposure. OpenAI has announced a partnership with Nvidia contemplating 10 gigawatts of systems, a plan that deepens rather than escapes the relationship. Meanwhile, the economics of scaling are forcing new monetization pathways, including testing advertising in lower-tier ChatGPT products.
Even the escape plans are slow. OpenAI’s push toward custom silicon carries timelines measured in years. That gap matters. OpenAI is competing in the present while trying to build leverage on a future schedule.
xAI is building a GPU empire, but demand is borrowed
xAI’s strategy is to industrialize compute quickly, then use that scale to train Grok and position itself as a contender. Plans to expand Colossus, its Memphis supercomputer, toward one million GPUs reflect genuine ambition. The hardware effort is real, but hardware ambition does not guarantee platform pull.
Distribution reveals the weakness. Grok is enabled broadly through X Premium subscriptions, relying on bundling rather than standalone adoption. This means a significant portion of usage derives from X’s subscription mechanics, not from proof that Grok is winning as a default AI layer.
xAI also carries a competitive handicap in the form of governance and reputational drag. Regulatory pressure over nonconsensual image generation has already translated into investigations and temporary blocks. This is not a moral lecture. It is a growth tax.
The OpenAI vs xAI conflict is a gift to competitors
There is a second structural problem, unrelated to model quality. OpenAI and xAI are burning energy on a rivalry that is closer to zero-sum than either admits. They are chasing overlapping segments of consumer chat and developer mindshare while competing for the same scarce Nvidia input.
Elon Musk’s lawsuit against OpenAI turns that rivalry into a governance overhang, creating uncertainty around OpenAI’s structure. Even if OpenAI prevails, the distraction and narrative instability weigh directly on the company’s ability to present itself as a stable partner to enterprises.
While OpenAI and xAI fight loudly, Google and Anthropic compound quietly.
The quiet scoreboard shows momentum shifting
On consumer attention, ChatGPT remains dominant, but Google is closing ground. Gemini now sits firmly in the number two position in U.S. rankings, driven by distribution across Google’s existing surfaces. That trajectory aligns with Google’s structural advantage, namely a model trained and served primarily on its own infrastructure.
In enterprise, the shift is clearer. Multiple surveys suggest Anthropic has captured the largest share of enterprise LLM spend. Buyers reward reliability, governance, and predictability. Anthropic’s multi-stack posture and safety-forward brand map cleanly onto those incentives.
The investment implication: compute optionality is the moat
The contrarian view is not that OpenAI and xAI cannot build great models. It is that they are structurally positioned to lose degrees of freedom by scaling on the same constrained Nvidia substrate everyone else is chasing. Their cost curves and rollout schedules will increasingly be shaped by allocation realities they do not control.
Amodei’s Davos comments sharpen this dynamic. They suggest Anthropic is not only hedging its Nvidia exposure for price-performance reasons, but aligning its infrastructure with a worldview that treats frontier compute as a national advantage. The more Anthropic reduces reliance on Nvidia, the more comfortably it can support policies that constrain Nvidia’s flexibility. This makes the bottleneck even more painful for the labs that stayed tenants.
In the next phase of the AI wars, model quality still matters. But the winners will be the labs that scale with options, not the ones anchored to a single supplier.