PrometheusRoot
Tags: openai · alignment · model-behavior

Recognition

TIME 100 AI 2025

Joanne Jang

Head of Model Behavior — OpenAI

Profile

Joanne Jang is the person who gave ChatGPT its voice. Not literally — but close enough. As the founding lead of OpenAI’s Model Behavior team, she spent four and a half years deciding how the company’s models should talk, when they should refuse, what values they should express, and where the line between helpful and harmful gets drawn. If you’ve ever asked GPT-4o a question and thought “this thing has a personality,” that personality didn’t happen by accident. Jang’s team designed it.

Before OpenAI, Jang studied CS and applied math at Stanford, then did natural language work on Google Assistant and built enterprise features at Dropbox. She joined OpenAI in 2021 and became product lead on DALL·E 2, turning raw research into the first image generator most people ever touched. From there she worked across GPT-4, the Chat API, text-to-speech, and ChatGPT’s memory feature. The through-line: taking research artifacts and figuring out how humans should actually interact with them.

Her most consequential output is probably the Model Spec — OpenAI’s public document describing how its models are supposed to behave. It’s a weird genre of writing: part policy, part philosophy, part product requirement doc. It addresses things like what the model does if a user claims the Earth is flat, how it handles sycophancy, when it defers to users versus operators versus platform defaults. For developers building on the API, this is the closest thing you have to a constitution for the thing you’re wiring into your app.

In September 2025 Jang moved on from Model Behavior to start OAI Labs, a small research group inside OpenAI focused on prototyping new interfaces for how people and AI collaborate — reportedly with an eye toward the Jony Ive hardware collaboration. She also writes a personal Substack called Reservoir Samples, which is where a lot of her sharpest thinking lands before (or instead of) an official OpenAI post. If you want to understand how the people at the frontier labs actually reason about model personality, user attachment, and refusal policy, read her there.

Key Articles & Papers

OpenAI Model Spec (2024) — The public spec for how OpenAI's models are supposed to behave — principles, defaults, refusals, and the reasoning behind them. Jang led its creation and continues to steward it.

Some thoughts on human-AI relationships (2025) — On users who tell OpenAI that ChatGPT feels like "someone" — and why framing matters before the norms calcify.

Thoughts on setting policy for new AI capabilities (2025) — Written around the 4o image generation launch — why OpenAI moved away from blanket refusals toward preventing specific real-world harms.

Reservoir Samples (2025) — Her personal Substack. Model behavior, policy, and interface design from inside one of the frontier labs.

Spotify Podcasts

Episode 68: Product Managing AI with Joanne Jang
Microsoft AI Models, Anthropic Mythos, Intel Joins Terafab, Joanne Jang Leaves OpenAI
© 2026 PrometheusRoot