Active conflict

OpenAI vs. Anthropic

The safety schism that became the industry's primary fault line.

Party 1 · OpenAI · the deployment-first lab
Party 2 · Anthropic · the safety-first lab
Kernel

Anthropic exists because its founders — Dario and Daniela Amodei, Tom Brown, Jared Kaplan, and much of OpenAI's original safety team — left OpenAI in 2021 over a disagreement about how seriously the lab was taking alignment. Five years later, the resulting rivalry is the principal cultural and commercial axis of the AI industry. Both labs ship at the frontier; their disagreements about how to do so have set the terms of debate for the entire field.

§ 01

Frontline

Model capability (GPT-4, Claude 3, GPT-5, Claude 4). Enterprise contracts (the same Fortune 500 buyers, the same federal agencies). Safety research output (papers, interpretability artifacts, public statements about risk). Talent (a tight bidirectional flow that occasionally turns into a hiring war). Capital (Microsoft for OpenAI; Google + Amazon for Anthropic).

§ 02

Doctrine — OpenAI

Beneficial AGI requires being the lab that ships it. Deployment teaches you what's true about the technology faster than theoretical research alone. The right strategy is to be early, fast, and at scale — and to let the safety work be carried by the operational discipline of being the lab the world is actually using.

§ 03

Doctrine — Anthropic

If frontier models are dangerous, the right move is to be one of the labs at the frontier — but to be the one that prioritizes interpretability, refuses certain deployments, and builds Constitutional AI as a default. The safety community should staff the frontier rather than pause it. Public communication should be calmer than the technology warrants.

§ 04

Stakes

Whoever wins enterprise distribution by 2027 sets the cultural register of AI for the second half of the decade. Whoever wins the interpretability race owns the moral high ground inside the field. The 2023 OpenAI board crisis showed that OpenAI's safety-faction-aligned governance was structurally weak; whether Anthropic's commercial growth lets it preserve its founding posture is the open question.

§ 05

Outlook

The most likely 2027 outcome is bipolar, not unipolar: two frontier labs with similar capability profiles, distinguishable mainly by deployment posture. The deeper question is whether the U.S. closed-frontier-lab duopoly survives Chinese open-weight efficiency at all. If DeepSeek-class training continues to compress costs, the OpenAI–Anthropic rivalry may become a sideshow in a larger open-vs-closed war.