PrometheusRoot
Tags: chain-of-thought, prompting, google, openai, reasoning

Discovered chain-of-thought prompting

Jason Wei

Research Scientist, Meta Superintelligence Labs (previously OpenAI and Google Brain)

Profile

Jason Wei is one of a handful of researchers whose work directly shaped how modern LLMs think. At Google Brain, fresh out of Dartmouth, he was first author on three papers that every serious prompt engineer now stands on: Chain-of-Thought Prompting, FLAN (instruction tuning), and Emergent Abilities of Large Language Models. Every time you ask a model to “think step by step”, or use any model fine-tuned to follow instructions, you are downstream of his work.
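To make the idea concrete, here is a minimal sketch of few-shot chain-of-thought prompting in the style of Wei et al. (2022): each exemplar pairs a question with a worked-out rationale, so the model imitates the reasoning pattern before answering. The tennis-ball exemplar is adapted from the paper's own flagship example; the helper name `build_cot_prompt` is our illustration, not an API from any library.

```python
# Few-shot chain-of-thought prompting: prepend a worked exemplar so the
# model produces intermediate reasoning steps before its final answer.
# Exemplar adapted from Wei et al. (2022); helper names are illustrative.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Build a prompt whose exemplar demonstrates step-by-step reasoning."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A pen costs $2 and a notebook costs $3. "
    "What do 4 pens and 2 notebooks cost?"
)
print(prompt)
```

The same string would then be sent to any completion-style model; the key finding of the paper is that including the rationale in the exemplar, not just the final answer, is what elicits multi-step reasoning at inference time.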

In 2023 he joined OpenAI to work on reasoning and agents, and became a core contributor to the o1 model — the first major “think before answering” model, which baked chain-of-thought into training itself rather than leaving it as a prompting trick. In 2025 he left for Meta Superintelligence Labs, part of the widely reported wave of poaching that pulled several o1 contributors to Meta.

For developers, Wei matters because he’s one of the clearest thinkers about what works with LLMs and why. His blog posts on emergence, scaling, and prompting read like field notes from someone who actually ran the experiments — not hype, not hand-waving. If you want to understand why reasoning suddenly appeared at a certain scale, or why instruction tuning turned GPT-3 from impressive-but-weird into genuinely useful, his papers are where the answers live. He’s also surprisingly generous with writeups and talks — a researcher who explains, not just publishes.

Worth noting: along the way he co-authored Emergent Abilities with a cast that includes Jeff Dean, Percy Liang, and Oriol Vinyals — essentially a who’s-who of frontier LLM research circa 2022.

Key Articles & Papers

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022): The paper that showed prompting a model to reason step by step dramatically improves its performance. One of the most-cited prompting papers ever.

Finetuned Language Models Are Zero-Shot Learners (FLAN) (2021): Introduced instruction tuning, the technique that turned raw language models into instruction-following assistants. Direct ancestor of every chat-tuned model you use.

Emergent Abilities of Large Language Models (2022): Documented the phenomenon where certain capabilities appear suddenly at scale rather than smoothly. Controversial, influential, and essential reading for anyone thinking about scaling laws.

137 Emergent Abilities of Large Language Models (blog, 2022): A running catalog of emergent behaviors in LLMs. Reads like a lab notebook, and is useful for grounding intuitions about what scale actually buys you.

Learning @ Test-Time Is the New Pretraining (2024): His post-o1 take on why shifting compute to inference-time reasoning is the next scaling frontier.

Common Arguments Regarding Emergent Abilities (2023): His response to Stanford's "Are Emergent Abilities a Mirage?" paper. A good example of how researchers actually argue about evidence.

Introducing OpenAI o1 (2024): The official launch post for o1, the reasoning model Wei helped build. Marks the shift from prompting tricks to reasoning baked into training.
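The FLAN entry above can also be made concrete. Instruction tuning works by rewriting existing supervised NLP datasets through natural-language instruction templates, so the model learns to follow instructions it has never seen. The sketch below shows the shape of one such training pair for a natural-language-inference record; the template wording and function name are illustrative paraphrases, not the exact templates from the paper.

```python
# FLAN-style instruction tuning data: an existing labeled NLI record is
# rewritten into an (instruction prompt, target) training pair.
# Template wording is an illustrative paraphrase, not taken from FLAN.

def to_instruction_example(premise: str, hypothesis: str, label: str) -> dict:
    """Turn one NLI record into an instruction-following training pair."""
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes, no, or maybe."
    )
    return {"input": prompt, "target": label}

example = to_instruction_example(
    "The dog is sleeping on the porch.",
    "An animal is resting.",
    "yes",
)
print(example["input"])
```

Fine-tuning on many tasks phrased this way, each with several template variants, is what turned raw next-token predictors into models that generalize to unseen instructions zero-shot.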

© 2026 PrometheusRoot