MIRI founder, original AI doomer
Eliezer Yudkowsky
Profile
Eliezer Yudkowsky has spent more than two decades arguing that smarter-than-human AI, built the way we’re currently building it, will kill everyone. For most of that time he was a fringe blogger. In 2026 his concerns sit on the front page of every major paper. He co-founded the Machine Intelligence Research Institute (MIRI) in 2000 and still works there as a Research Fellow. The field now called “AI alignment” is largely a field because he named it and spent twenty years refusing to let anyone stop thinking about it.
Before AI safety became mainstream, Yudkowsky built the rationalist community around LessWrong, where he wrote “The Sequences” — a sprawling body of essays on probability, cognitive bias, and how to think clearly under uncertainty. He also wrote Harry Potter and the Methods of Rationality, a 660,000-word fanfiction that smuggled Bayesian reasoning into a recognizable universe and became a recruiting pipeline for a generation of AI researchers now at OpenAI, Anthropic, and DeepMind. His intellectual fingerprints are on a large fraction of the current safety conversation, including the work of Paul Christiano and Jan Leike.
In March 2023 he published a Time op-ed arguing that pausing AI was not enough — we should shut it all down internationally, with willingness to strike rogue data centers to enforce it. Reaction split between “unhinged” and “prophetic.” In 2025 he and MIRI president Nate Soares co-wrote If Anyone Builds It, Everyone Dies, which hit the New York Times bestseller list and made the argument to a much wider audience. As of 2026 MIRI’s main work is communications — trying to get governments to understand what he thinks is coming.
For developers learning AI: you don’t have to share his P(doom) to benefit from reading him. “AGI Ruin: A List of Lethalities” remains the single clearest articulation of why alignment is hard — not tricky-engineering hard, but get-one-shot-and-most-plans-fail hard. Read him to stress-test your optimism. Other voices on this list — Stuart Russell, Max Tegmark, Gary Marcus, Connor Leahy — are in this conversation partly because Yudkowsky set the terms of it.
Books
If Anyone Builds It, Everyone Dies (2025): NYT bestseller co-authored with Nate Soares arguing that the default outcome of building superintelligent AI is human extinction.
Harry Potter and the Methods of Rationality: A 660,000-word fanfiction that teaches Bayesian reasoning and the scientific method through a reimagined Harry Potter raised by a scientist.
Rationality: From AI to Zombies: The collected Sequences, essays on probability, cognitive bias, and reductionism that launched the rationalist community.
Key Articles & Papers
AGI Ruin: A List of Lethalities
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
The Sequences
Coherent Extrapolated Volition
Intelligence Explosion Microeconomics
Artificial Intelligence as a Positive and Negative Factor in Global Risk
Controversies
The 2023 Time essay drew heavy criticism. The line about being “willing to destroy a rogue datacenter by airstrike” was widely read as advocating war to stop AI research, and many commentators — including sympathetic ones — called the position extreme. Yudkowsky maintains the framing was proportionate to the stakes. His long-running insistence on very high P(doom) estimates has also drawn accusations of unfalsifiability and overconfidence from within the AI safety community itself, including an EA Forum critique questioning how much weight his past predictions should earn.