Dario Amodei
Anthropic CEO, safety-first AI builder
Profile
Dario Amodei is the co-founder and CEO of Anthropic, the lab behind Claude. He trained as a scientist first (a Princeton PhD studying the biophysics of neural circuits, then a computational-neuroscience postdoc at Stanford) before pivoting into AI, doing stints at Baidu under Andrew Ng and then at OpenAI, where he rose to VP of Research and led the teams that built GPT-2 and GPT-3. He left OpenAI in late 2020 and co-founded Anthropic in 2021 with his sister Daniela Amodei and a group of senior researchers (including Jared Kaplan, Chris Olah, Jack Clark, and Tom Brown), reportedly over disagreements about how seriously safety was being taken at the frontier.
What makes Amodei interesting to developers isn’t that he runs a big AI lab; it’s that he runs one with a distinct thesis. He co-authored the original scaling-laws work that predicted what throwing more compute at transformers would do, so he’s not a safety-first idealist bolted onto a reluctant engineering org; he’s one of the people who proved the thing works. Anthropic’s bet is that you can be both the people warning loudest about existential risk and the people shipping the best model. Claude is the evidence. Constitutional AI, interpretability research, Responsible Scaling Policies, and the recent push on agentic systems (Claude Code, computer use, the Model Context Protocol) all come out of that same lab.
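To make “predicted what throwing more compute at transformers would do” concrete, here is a minimal sketch of the headline fit from Scaling Laws for Neural Language Models: test loss falls as a smooth power law in model size. The constants below are the paper’s approximate published fits; the function name and the printed table are our illustration, not anything the paper or Anthropic ships.

```python
# Illustrative sketch of the model-size scaling law from
# "Scaling Laws for Neural Language Models" (Kaplan et al., 2020):
#   L(N) = (N_c / N) ** alpha_N
# Constants are the paper's approximate fits; treat the outputs as ballpark.

N_C = 8.8e13     # fitted reference scale (non-embedding parameters)
ALPHA_N = 0.076  # fitted power-law exponent for model size

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy test loss (nats/token) for a model with
    n_params non-embedding parameters, assuming data and compute are
    not the bottleneck."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x in parameters buys a roughly constant multiplicative drop in loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss ≈ {predicted_loss(n):.2f}")
```

The point of the paragraph above is visible in the output: improvement with scale is smooth and predictable, which is what turned bets like GPT-3 from gambles into calculations.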
He’s also become the most articulate public communicator of the optimistic case for AI. His essay “Machines of Loving Grace” is the counterweight to doomer discourse — a serious person laying out what the next decade could look like if this goes well: compressed scientific progress, mental health breakthroughs, a cure for most diseases. It’s not hype; it’s specific. For developers learning AI, it’s the clearest statement of why bother you’ll find from someone actually building the models.
The tension at the heart of Anthropic — racing to build the thing you think might be dangerous, on the theory that it’s safer if you’re the one building it — is Amodei’s whole deal. You can find that philosophy infuriating or coherent, but it’s consistent, and it shapes everything from how Claude is trained to how Anthropic talks to governments. If you use Claude Code to ship software, you’re living downstream of those choices.
Key Articles & Papers
- Machines of Loving Grace
- Core Views on AI Safety: When, Why, What, and How
- Scaling Laws for Neural Language Models
- Constitutional AI: Harmlessness from AI Feedback
- On DeepSeek and Export Controls
- The Urgency of Interpretability
- Concrete Problems in AI Safety
- Responsible Scaling Policy
Controversies
- Racing while warning: Critics argue the “safety lab building the dangerous thing to beat the other dangerous-thing builders” logic is self-serving — that Anthropic’s existence accelerates the race it claims to be slowing. Eliezer Yudkowsky and others have made this case directly.
- Policy influence and China framing: His advocacy for export controls and his framing of US-China AI competition have drawn pushback from researchers who see the position as both hawkish and commercially convenient for US labs.
- Hyperscaler funding and valuation: As Anthropic has raised multibillion-dollar rounds from Amazon and Google and its valuation has climbed past $100B, questions have been raised about whether a safety-focused lab can remain independent of hyperscaler interests.