Cohere For AI lead, efficient models and accessibility
Sara Hooker
Profile
Sara Hooker has spent most of her career arguing that the AI field has been asking the wrong question. While everyone else raced to build ever-bigger models, she kept pointing out that the winning ideas in AI haven't won because they're the best; they've won because they happened to fit the hardware that was lying around. That thesis, published as "The Hardware Lottery" in 2020, remains one of the most cited essays in the field, and it reframes how many developers think about what's actually possible versus what's merely convenient right now.
She came to AI sideways: an undergraduate degree from Carleton College in economics and international relations, a first job as an economics analyst, founding Delta Analytics in 2014 to help nonprofits, then the Google Brain residency and a research scientist role there starting in 2017. At Google she worked on interpretability and model compression, and founded Google's first deep-learning research lab in Africa, based in Accra. That lab wasn't a branding exercise; it was part of a longer project she's been running about who gets to do AI research when the compute and data live somewhere else.
In 2022 she took over Cohere For AI, the nonprofit research arm of Cohere (co-founded by Aidan Gomez). Her flagship project there was Aya, an open multilingual LLM covering 101+ languages, built with 3,000 researchers from 119 countries. Most frontier models are English-first with a thin layer of Chinese, Spanish, and French bolted on; Aya is what it looks like when you actually design for the rest of the world. TIME put her on its 100 Most Influential People in AI list in 2024, the same year she published "On the Limitations of Compute Thresholds as a Governance Strategy," which argued that the US AI Executive Order and the EU AI Act were using FLOPs as a regulatory proxy that doesn't reliably track risk.
She left Cohere in August 2025 and raised $50M for Adaption Labs, co-founded with former Cohere inference lead Sudip Roy. The pitch is a direct bet against the scaling consensus: smaller models that adapt in real time, running at a fraction of the cost. For developers building with AI, Hooker is a useful voice to follow because she’s consistently right about the unsexy stuff — efficiency, compression, multilingual coverage, what “risk” actually means — while most of the field is busy racing to the next benchmark.
Key Articles & Papers
The Hardware Lottery
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
On the Limitations of Compute Thresholds as a Governance Strategy
What do Compressed Deep Neural Networks Forget?
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Moving Beyond 'Algorithmic Bias is a Data Problem'