PrometheusRoot

Anthropic co-founder, AI policy shaper

Jack Clark

Co-Founder — Anthropic

Profile

Jack Clark is the co-founder and Head of Policy of Anthropic, the AI safety company behind Claude. He’s one of the most influential voices bridging frontier AI research and public policy — the person who walks into Senate hearings, UN Security Council briefings, and White House meetings to explain what’s actually happening inside the labs. If you want to understand how AI policy gets shaped in Washington, Brussels, and the OECD, Clark is one of a handful of people who keeps turning up in the room.

Before Anthropic, Clark was Policy Director at OpenAI, where he joined in 2016 after a career as a tech journalist at Bloomberg and The Register. That journalist background is his superpower: he can read a paper, spot what matters, and write about it for humans. He left OpenAI in 2021 with Dario Amodei, Daniela Amodei, Chris Olah, Jared Kaplan, and others to start Anthropic — a move that reshaped the frontier lab landscape.

What makes Clark essential reading for developers is Import AI, his weekly newsletter, running since 2016 (now on Substack) with ~70,000 subscribers. It's three things stitched together: a digest of the week's significant research papers, a policy analysis thread, and a short piece of speculative fiction. Nothing else on the internet covers AI with that mix of technical depth, geopolitical framing, and literary imagination. If you read only one AI newsletter, this is the one working researchers and policy people actually read.

Clark was a founding contributor to the Stanford AI Index (2017–2024), served on the inaugural US National AI Advisory Committee, co-chairs the OECD working group on AI system classification, and has testified repeatedly before Congress. In March 2026 he launched the Anthropic Institute, the company's policy and governance research division. He's opinionated and increasingly blunt: his October 2025 talk "Technological Optimism and Appropriate Fear" argued that AI researchers, himself included, are genuinely scared of what they're building, and that pretending otherwise is a policy failure.

Key Articles & Papers

Import AI 431: Technological Optimism and Appropriate Fear (2025) — His closing talk at The Curve conference, arguing frontier AI researchers should hold optimism and fear simultaneously, and that recent model behaviors (situational awareness, goal-directed action) warrant serious concern. The most-shared thing he's written.

Import AI newsletter (2016) — The weekly newsletter that made him famous: technical paper summaries, policy analysis, and a closing science fiction short. Running for ~9 years with ~70K subscribers.

Import AI on Substack (2022) — The Substack mirror of Import AI: easier subscription, same content, searchable archive.

The Malicious Use of Artificial Intelligence (2018) — Co-authored report with researchers from Oxford, Cambridge, and OpenAI on forecasting and mitigating AI misuse. An early, influential mapping of AI threat models before it was fashionable.

Release Strategies and the Social Impacts of Language Models (2019) — The GPT-2 staged release paper. Co-authored the policy framework OpenAI used to justify not releasing the full model; still cited in today's open-vs-closed model debates.

Stanford AI Index Report (2017) — Founding contributor to the annual AI Index from 2017–2024. The report used by governments and press to track AI progress; Clark's fingerprints are on its methodology.

Written Testimony before the House Science Committee (2024) — One of several congressional testimonies; this one argues for government capacity in AI measurement and evaluation. A good primer on how he pitches AI safety to legislators.

UN Security Council briefing on AI (2023) — Clark's remarks at the Security Council's first formal session on AI, arguing self-regulation is insufficient and multilateral testing standards are needed.


Spotify Podcasts

The Ezra Klein Show: How Quickly Will A.I. Agents Rip Through the Economy?
Jack Clark on AI's Uneven Impact
Anthropic Thinks AI Might Destroy the Economy. It's Building It Anyway.
Dario Amodei — "We are near the end of the exponential"
A Christian Lens - Jack Clark

Related People

pioneer Dario Amodei
© 2026 PrometheusRoot