Margaret Mitchell
AI ethics researcher, model cards creator
Profile
Margaret Mitchell is Chief Ethics Scientist at Hugging Face, and one of the researchers who turned “AI ethics” from a conference track into something every practitioner now touches. If you’ve ever filled out a model card on Hugging Face Hub, you’ve used her work. The 2019 paper Model Cards for Model Reporting — which she led while at Google — proposed that every released ML model ship with documentation describing intended use, training data, evaluation slices, and known limitations. It’s now a baseline expectation across the industry.
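The sections the paper proposes (intended use, training data, evaluation slices, known limitations) map naturally onto a small data structure. Here is a minimal sketch in Python; the field names follow the paper's sections and everything here is illustrative, not Hugging Face's official Hub schema or API:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal model card after Model Cards for Model Reporting (2019).

    Field names mirror the paper's sections; this is an illustrative
    sketch, not the Hugging Face Hub's actual metadata schema.
    """
    model_name: str
    intended_use: str
    training_data: str
    evaluation_slices: list  # results broken down per subgroup/slice
    limitations: str

    def to_markdown(self) -> str:
        # Render each evaluation slice as its own bullet point.
        slices = "\n".join(f"- {s}" for s in self.evaluation_slices)
        return (
            f"# {self.model_name}\n\n"
            f"## Intended Use\n{self.intended_use}\n\n"
            f"## Training Data\n{self.training_data}\n\n"
            f"## Evaluation (per slice)\n{slices}\n\n"
            f"## Limitations\n{self.limitations}\n"
        )

# Hypothetical example model, for illustration only.
card = ModelCard(
    model_name="toy-sentiment-v1",
    intended_use="Sentiment of short English product reviews; not medical text.",
    training_data="50k English reviews; labels crowd-sourced.",
    evaluation_slices=["F1 overall: 0.91", "F1 on non-native-speaker reviews: 0.84"],
    limitations="Accuracy drops on sarcasm and code-switched text.",
)
print(card.to_markdown())
```

The point of the per-slice evaluation field is the paper's central argument: a single aggregate metric can hide large performance gaps between demographic groups, so the card forces them into the open.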
Her path into ethics ran through the engineering side first. She did her PhD at the University of Aberdeen, spent time at Microsoft Research working on vision-to-language generation, and joined Google in 2016, where she founded and co-led the Ethical AI team with Timnit Gebru. The team’s mandate was to publish honest research about the risks of the models Google was building. That ended badly. Gebru was pushed out in December 2020 over the On the Dangers of Stochastic Parrots paper — the one that warned large language models were scaling past anyone’s ability to audit them. Mitchell co-authored it under the pseudonym “Shmargaret Shmitchell” and was fired by Google in February 2021. The two firings became a flashpoint: a big lab jettisoning its ethics leadership right as the LLM era was starting.
She landed at Hugging Face later that year and has built out the company’s ethics practice from the inside, which is a very different posture than the think-tank critique role. At Hugging Face she works on evaluation, data governance, model documentation tooling, and the nuts-and-bolts decisions about what gets hosted on the Hub. Recent work with Sasha Luccioni and colleagues includes the 2025 paper Fully Autonomous AI Agents Should Not be Developed, arguing that full autonomy is the wrong design target.
For developers, the reason to pay attention isn’t the controversy — it’s that Mitchell’s concrete contributions (model cards, dataset documentation, evaluation across demographic slices) are the kind of thing you’ll actually do if you ship models responsibly. She’s one of the few AI ethics figures whose work is as much engineering as advocacy.
Key Articles & Papers
Model Cards for Model Reporting
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Fully Autonomous AI Agents Should Not be Developed
No, 'AI' is not a Stochastic Parrot
Machine Learning Experts - Margaret Mitchell (Hugging Face interview)
Controversies
The Google firing in February 2021 is the defining episode. After Timnit Gebru was pushed out over the Stochastic Parrots paper, Mitchell used scripts to search her own Google email for evidence of how Gebru had been treated. Google cited “exfiltration of confidential business-sensitive documents” as grounds for termination; Mitchell and many outside observers characterized it as retaliation for defending Gebru. The episode became a case study in how large labs handle internal AI-safety critique, and accelerated the broader movement of ethics researchers out of big tech and into independent or mission-driven organizations.