Eliezer Yudkowsky
builder · Researcher · Policy
Links: X / Twitter · Website · Wikipedia
Tags: miri, ai-safety, alignment, existential-risk, rationality

MIRI founder, original AI doomer

Eliezer Yudkowsky

Co-Founder & Research Fellow — MIRI

Profile

Eliezer Yudkowsky has spent more than two decades arguing that smarter-than-human AI, built the way we’re currently building it, will kill everyone. For most of that time he was a fringe blogger. In 2026 his concerns sit on the front page of every major paper. He co-founded the Machine Intelligence Research Institute (MIRI) in 2000 and still works there as a Research Fellow. The field now called “AI alignment” is largely a field because he named it and spent twenty years refusing to let anyone stop thinking about it.

Before AI safety became mainstream, Yudkowsky built the rationalist community around LessWrong, where he wrote “The Sequences” — a sprawling body of essays on probability, cognitive bias, and how to think clearly under uncertainty. He also wrote Harry Potter and the Methods of Rationality, a 660,000-word fanfiction that smuggled Bayesian reasoning into a recognizable universe and became a recruiting pipeline for a generation of AI researchers now at OpenAI, Anthropic, and DeepMind. His intellectual fingerprints are on a large fraction of the current safety conversation, including the work of Paul Christiano and Jan Leike.

In March 2023 he published a Time op-ed arguing that pausing AI was not enough — we should shut it all down internationally, with willingness to strike rogue data centers to enforce it. Reaction split between “unhinged” and “prophetic.” In 2025 he and MIRI president Nate Soares co-wrote If Anyone Builds It, Everyone Dies, which hit the New York Times bestseller list and made the argument to a much wider audience. As of 2026 MIRI’s main work is communications — trying to get governments to understand what he thinks is coming.

For developers learning AI: you don’t have to share his P(doom) to benefit from reading him. “AGI Ruin: A List of Lethalities” remains the single clearest articulation of why alignment is hard — not tricky-engineering hard, but get-one-shot-and-most-plans-fail hard. Read him to stress-test your optimism. Other voices on this list — Stuart Russell, Max Tegmark, Gary Marcus, Connor Leahy — are in this conversation partly because Yudkowsky set the terms of it.

Books

If Anyone Builds It, Everyone Dies (2025). NYT bestseller co-authored with Nate Soares arguing the default outcome of building superintelligent AI is human extinction.
Harry Potter and the Methods of Rationality. A 660,000-word fanfiction that teaches Bayesian reasoning and the scientific method through a reimagined Harry Potter raised by a scientist.
Rationality: From AI to Zombies. The collected Sequences — essays on probability, cognitive bias, and reductionism that launched the rationalist community.

Key Articles & Papers

AGI Ruin: A List of Lethalities (2022) — The canonical list of why alignment is hard and why the default plans do not obviously work.
Pausing AI Developments Isn't Enough. We Need to Shut it All Down (2023) — Time magazine op-ed calling for an international moratorium on large training runs, enforced by willingness to strike rogue data centers.
The Sequences (2006) — Multi-year series on probability, cognitive bias, and reductionism — the foundational texts of the rationalist community.
Coherent Extrapolated Volition (2004) — Early attempt to formalize what a Friendly AI should optimize for — not our stated values but what we would want if we knew more and thought faster.
Intelligence Explosion Microeconomics (2013) — Technical case for fast takeoff: why recursive self-improvement could compress the transition to superintelligence into a short window.
Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008) — Early chapter-length argument that AI belongs on the list of existential risks, written before the deep learning era made the case obvious.

Controversies

The 2023 Time essay drew heavy criticism. The line about being “willing to destroy a rogue datacenter by airstrike” was widely read as advocating war to stop AI research, and many commentators — including sympathetic ones — called the position extreme. Yudkowsky maintains the framing was proportionate to the stakes. His long-running insistence on very high P(doom) estimates has also drawn accusations of unfalsifiability and overconfidence from within the AI safety community itself, including an EA Forum critique questioning how much weight his past predictions should earn.

Spotify Podcasts

#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All
#434 — Can We Survive AI?
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Destiny Raises His P(Doom) At The End
Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position
136 Sammy Helps Eliezer
109 Tell Me About ...
83.1 Shout Outs
62 That One Mitzvah + Shout Outs
139 Gebrukst

Related People

Stuart Russell · Max Tegmark