
For a brief moment, OpenAI looked like it had grabbed every winning lottery ticket in artificial intelligence. ChatGPT defined the category, GPT-4 powered the most talked-about coding assistants, and competitors were either licensing OpenAI’s models or racing to clone them. In that window, OpenAI effectively sat at the center of every profitable AI vertical: chatting, coding, lightweight search, copywriting, slide decks, you name it.
What changed is not that OpenAI suddenly became bad. It is that everyone else woke up, specialized, and got fast. Google used OpenAI’s success as an existential alarm clock, poured its entire stack into Gemini, and now ships models that a growing number of power users and executives describe as ahead of OpenAI in multimodal reasoning and agents. Anthropic, forced to fight uphill without OpenAI’s brand or distribution, went hard into one niche that actually prints money: enterprise coding. On today’s benchmarks, Anthropic’s latest Opus models are beating both Gemini 3 Pro and GPT-5.1 on real software-engineering tasks.
If you buy that story, OpenAI is now in the worst possible spot for a once-dominant platform company. In broad consumer and enterprise chat, Gemini is catching or surpassing it. In coding, Claude Opus and Sonnet look strictly better for many serious teams. OpenAI is not going away, but the era of it owning the board is over.
OpenAI’s Vanishing Monopoly
When ChatGPT exploded in late 2022, OpenAI enjoyed something close to a monopoly on mindshare. For most people, “AI” simply meant “ChatGPT.” Early comparisons put alternatives far behind. OpenAI’s partnership with Microsoft pushed GPT models into GitHub Copilot, Office, Azure, and Windows, embedding them across both consumer and enterprise workflows.
Because serious rivals were not yet shipping at scale, OpenAI became the default backend for almost every profitable vertical. Want a chatbot in your app? Use OpenAI. Need to summarize PDFs or draft contracts? Use OpenAI. Want a coding assistant? Under the hood, tools like GitHub Copilot leaned heavily on OpenAI’s models. In effect, OpenAI briefly captured both the “horizontal” chat market and the “vertical” coding and productivity markets at the same time.
That kind of dominance carries a hidden curse. It encourages you to be everything for everyone instead of world-class at one hard thing. OpenAI kept shipping impressive general models, from GPT-4.1 to GPT-5, but reviews of GPT-5 for coding have been lukewarm. Developers praise its reasoning and planning, yet complain that it produces redundant code and underperforms Anthropic’s models on reproducible coding tasks. In other words, OpenAI doubled down on being a broad generalist just as the market began rewarding sharp specialists.
The Sleeping Giant Wakes: Google’s Gemini Blitz
OpenAI’s early success did something arguably more consequential than dominating a few years of hype. It humiliated Google. A company that had spent a decade bragging about its AI research suddenly looked flat-footed, caught defending a lackluster chatbot while YouTube and Search became showcases for a competitor’s tech.
Google’s response was not subtle. It unified its research teams under Google DeepMind, rolled out Gemini 1.5 with enormous context windows, then iterated to Gemini 2.0, 2.5, and now Gemini 3, each step leaning harder into multimodal reasoning and tight integration with Google’s ecosystem. Independent comparisons tend to frame GPT-5.1 and Gemini 3 Pro as trading blows: GPT-5.1 looks strong on adaptive reasoning and conversational nuance, while Gemini 3 Pro pushes the envelope in multimodality and agentic workflows.
The perception battle, however, is tilting Google’s way. In the last few days alone, high-profile users have gone on record saying Gemini 3 has “blown past” ChatGPT for their day-to-day work, specifically praising its reasoning, speed, and multimodal performance. At the same time, developer tools and cloud platforms increasingly treat Gemini as a first-class option next to, not beneath, OpenAI’s models.
Here is the problem for OpenAI: Google does not need to “win” the public chatbot race outright. It only needs to be good enough while leveraging its distribution. Gemini shows up in Android, Chrome, Gmail, Docs, and Google Cloud almost by default. When the underlying models are competitive or better, that distribution advantage becomes gravitational. Even if OpenAI’s next model is slightly stronger on some academic benchmark, Gemini will still be the thing a billion users already have.
Anthropic’s Coding Gambit
While OpenAI was trying to be the operating system of everything, Anthropic had to pick a lane. It did not have OpenAI’s name recognition or Microsoft’s distribution. For a while, even its headline model names sounded slightly obscure outside tech circles. That constraint forced focus.
Anthropic leaned into two overlapping bets: safety and deep capability for enterprise work. Over the last 18 months, that has cohered into a very specific strength: coding and “agentic” software engineering. Claude 3.5 Sonnet already raised the bar for coding ability in mid-2024, with Anthropic explicitly calling out large gains in programming and tool use. Claude 4, and especially Opus 4, then pushed hard on that identity, with Anthropic marketing Opus 4 as “the best coding model in the world” and backing it up on benchmarks like SWE-bench and Terminal-bench.
The recent Opus 4.5 release is where the story really crystallizes. On SWE-bench Verified, a benchmark built from real software-engineering tasks, Opus 4.5 clocks in around 80.9 percent accuracy, ahead of Gemini 3 Pro and OpenAI’s GPT-5.1 variants. Multiple analyses describe Opus 4.5 as the strongest general coding model available, and major tech outlets frame the Claude 4 family as redefining the state of the art in coding and long-running agent workflows.
Crucially, Anthropic has managed to convert that technical edge into the market that matters most for this thesis: enterprise coding. Recent reports put Claude models at roughly 32 percent of enterprise LLM usage, overtaking OpenAI’s 25 percent share. Developer surveys show Claude gaining traction specifically among teams that care about high-quality code and tool use, not just chatty answers. For big companies that want fewer regressions and more reliable agentic behavior in complex codebases, Anthropic’s decision to specialize looks less like a constraint and more like a cheat code.
Why Coding Might Be The Richest Prize
It is tempting to think of “AI” as chatbots that answer trivia and write emails, but the real money sits where companies spend real budgets. Enterprise coding assistants and agentic development tools are fast becoming one of those big-budget categories.
Surveys show that most professional developers now rely on AI coding tools at least weekly. ChatGPT and GitHub Copilot still lead in raw awareness, but Gemini and Claude are catching up quickly. Analysts describe coding agents as a concentrated but rapidly growing market, with the top few players controlling a large majority of usage and significant revenue per seat. If AI can reliably take on meaningful slices of software development, the upside is measured not in ad clicks, but in trillions of dollars of global software spend over time.
Anthropic’s models are increasingly tuned not just to autocomplete lines in an editor, but to act as long-running agents that plan work, call tools, execute shell commands, and refactor large codebases over hours. GitHub’s new Agent HQ explicitly treats Claude, Gemini and OpenAI models as interchangeable coding agents inside the same “mission control” interface. In a world like that, enterprises will not pick a vendor based on vibe. They will spin up three agents, run the same task, and stick with whichever model reliably ships fewer bugs. On today’s numbers, that is often Claude.
If enterprise coding is the most lucrative AI vertical right now, Anthropic is positioned exactly where OpenAI was supposed to be: at the point of highest willingness to pay, with a demonstrable quality lead.
The Squeeze: OpenAI In The Middle
Put those threads together and the shape of OpenAI’s problem comes into focus. On the “general intelligence” front, Gemini 3 is at least competitive and arguably ahead in multimodal reasoning and deep integration with productivity tools. On the “serious coding” front, Claude Opus and Sonnet are winning head-to-head battles on real-world benchmarks and long-running agent tasks.
Meanwhile, the platforms where developers live are becoming model-agnostic. GitHub Agent HQ lets you pit OpenAI, Anthropic, Google, xAI and others against each other inside a single dashboard. Many modern toolchains and API brokers let you swap backends with a configuration flag. That erodes one of OpenAI’s early advantages: being the only serious option. If you can experiment with three frontier models in a week, you are going to keep whichever one does your job better per dollar.
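The swap-by-config pattern is straightforward in practice. As a rough illustration (the provider names, base URLs, and model IDs below are hypothetical placeholders, not specifics from this article), many brokers expose an OpenAI-style chat endpoint, so switching frontier models can reduce to changing two strings in a config table:

```python
# Illustrative sketch: routing the same request to different frontier models
# purely via configuration. Base URLs and model IDs are made-up placeholders.

BACKENDS = {
    "openai":    {"base_url": "https://api.openai.example/v1",    "model": "gpt-5.1"},
    "anthropic": {"base_url": "https://api.anthropic.example/v1", "model": "claude-opus-4.5"},
    "google":    {"base_url": "https://api.google.example/v1",    "model": "gemini-3-pro"},
}

def build_request(backend: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat request for whichever backend is configured."""
    cfg = BACKENDS[backend]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Swapping vendors is a one-word configuration change:
req_a = build_request("anthropic", "Refactor this function to remove duplication.")
req_b = build_request("google", "Refactor this function to remove duplication.")
```

When the request shape is identical across vendors, per-task bake-offs become trivial, which is exactly the dynamic that erodes a default vendor’s lock-in.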
Market share data suggests that OpenAI’s once overwhelming lead is narrowing. ChatGPT still holds the largest slice of consumer chatbot usage, but Gemini and Claude are growing faster, and Claude has already overtaken OpenAI in some enterprise segments. In other words, OpenAI is still huge, but no longer unchallenged or obviously ahead. In a winner-take-most market, being “one of the big three” instead of “the default” can feel like losing, especially if your cost structure and expectations were built around monopoly-level dominance.
Is OpenAI Actually Screwed?
If “screwed” means “doomed to die,” then no. OpenAI still has enormous strengths: a powerful brand, a mature ecosystem of fine-tuned GPTs, a tight relationship with Microsoft, and models that remain competitive in many tasks at aggressive prices. Even where critics find GPT-5 underwhelming, they often praise its cost efficiency and deep technical reasoning, which still matters for plenty of workloads. OpenAI is not about to vanish from the map.
But if “screwed” means “no longer holds a clear, defensible lead in any of the most lucrative AI verticals,” the case looks much stronger. In general multimodal chat and agents, Gemini has roared back from an embarrassing start to at least parity and sometimes superiority, with the force of Google’s distribution behind it. In enterprise coding and long-running agentic workflows, Anthropic has leveraged its underdog focus into measurable dominance on real-world benchmarks and growing enterprise share.
OpenAI is still a giant. It is just no longer the giant. In that sense, the scoreboard for this moment in the race looks brutally simple:
Huge loser: OpenAI. Winners: Google and Anthropic.
Disclaimer: This is not investing advice.