Jared Kaplan
Anthropic co-founder, co-discoverer of neural scaling laws
Profile
Jared Kaplan is the co-founder and Chief Science Officer of Anthropic, and the physicist whose empirical work made the case for scale. Before AI, he spent over a decade as a theoretical physicist at Johns Hopkins, working on quantum gravity and field theory after a Stanford undergrad and a Harvard PhD under Nima Arkani-Hamed. That background matters: the scaling laws work reads like physics, not like ML folklore — curves that span seven orders of magnitude, fit by simple power laws.
In 2019 Kaplan joined OpenAI as a researcher and, together with Sam McCandlish, Dario Amodei, Tom Brown, Alec Radford and others, produced the Scaling Laws for Neural Language Models paper in January 2020. The claim was brutally simple: loss scales predictably as a power law in model size, dataset size, and compute. Architecture details barely matter. That single result is arguably the intellectual justification for every multi-billion-dollar training run that followed — including GPT-3, which landed six months later.
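The power-law claim can be made concrete with a small sketch. This uses the functional form L(N) = (N_c / N)^α_N from the 2020 paper, with the fitted constants reported there (α_N ≈ 0.076, N_c ≈ 8.8×10^13 non-embedding parameters); the code itself is an illustrative toy, not a production predictor, and the helper name is mine.

```python
# Illustrative sketch of the Kaplan et al. (2020) power law relating
# test loss to model size: L(N) = (N_c / N) ** alpha_N.
# Constants are the paper's reported fits for non-embedding parameters.
ALPHA_N = 0.076
N_C = 8.8e13

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss for a model with n_params non-embedding parameters."""
    return (N_C / n_params) ** ALPHA_N

# The signature of a power law: each 10x increase in parameters shrinks
# loss by the same constant factor (10 ** -ALPHA_N, about 16%),
# no matter where on the curve you start.
for n in (1e8, 1e9, 1e10):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```

That scale-invariance is what made the result predictive: fit the constants on small runs, then extrapolate the loss of a run a thousand times larger before spending the compute.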
When the Amodei siblings left OpenAI in 2021 to start Anthropic, Kaplan went with them as a co-founder. At Anthropic he’s co-authored the core technical work behind Claude, including Constitutional AI, the framework that trains models to critique and revise their own outputs against a written set of principles rather than relying purely on human feedback. In October 2024 he took on the additional role of Responsible Scaling Officer — the person formally on the hook for deciding whether a new Claude model is safe enough to ship under Anthropic’s Responsible Scaling Policy.
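The critique-and-revise loop at the heart of Constitutional AI can be sketched in a few lines. The three model_* functions below are hypothetical stand-ins for LLM calls (in the real pipeline, described in Bai et al. 2022, a language model performs every step, and the resulting prompt/revision pairs feed supervised fine-tuning and RL from AI feedback); the principle texts here are paraphrases, not Anthropic's actual constitution.

```python
# Toy sketch of the Constitutional AI critique-and-revise loop.
# All model_* functions are hypothetical stand-ins for LLM calls.
CONSTITUTION = [
    "Identify ways the response could be harmful and remove them.",
    "Identify dishonest or evasive content and correct it.",
]

def model_generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"            # stand-in for an LLM call

def model_critique(response: str, principle: str) -> str:
    return f"critique of draft per '{principle}'"  # stand-in for an LLM call

def model_revise(response: str, critique: str) -> str:
    return response + " [revised]"                 # stand-in for an LLM call

def constitutional_revision(prompt: str) -> str:
    """Run one critique/revise pass per principle; return the final draft."""
    response = model_generate(prompt)
    for principle in CONSTITUTION:
        critique = model_critique(response, principle)
        response = model_revise(response, critique)
    return response

print(constitutional_revision("How do I pick a lock?"))
```

The design point is that the supervision signal for harmlessness comes from the written principles plus the model's own critiques, so the human-feedback bottleneck applies only to helpfulness, not to every safety judgment.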
For anyone building with LLMs today, Kaplan is worth paying attention to because he connects two things most people keep separate: the math of why bigger-is-better actually holds, and the operational question of when you should stop shipping bigger. He also kept his Hopkins professorship through all of it, which tells you something about the man.
Key Articles & Papers
Scaling Laws for Neural Language Models
Scaling Laws for Autoregressive Generative Modeling
Constitutional AI: Harmlessness from AI Feedback
Anthropic's Responsible Scaling Policy
Written Statement for the Senate AI Insight Forum