Demis Hassabis
DeepMind founder, Nobel Prize winner
Profile
Demis Hassabis has one of the strangest CVs in tech: child chess prodigy, teenage game designer on Theme Park at Bullfrog, founder of his own studio (Elixir) at 22, then a PhD in cognitive neuroscience studying the hippocampus and memory. He founded DeepMind in London in 2010 with Shane Legg and Mustafa Suleyman on a mission that sounded absurd at the time — “solve intelligence, then use it to solve everything else.” Google bought the company in 2014 for around $500M, before DeepMind had shipped a product. They were buying the bet.
The bet paid off twice in public, spectacularly. In 2016, DeepMind’s AlphaGo beat Lee Sedol 4-1 in Seoul, a result most Go professionals had said was a decade away. Move 37 in Game 2 is still the canonical example of a neural network doing something humans found beautiful and alien at the same time. Four years later, AlphaFold 2 effectively solved protein structure prediction, a 50-year-old grand challenge in biology, at the CASP14 assessment. In 2024, Hassabis and John Jumper shared the Nobel Prize in Chemistry with David Baker for that work. He was knighted the same year.
Today he runs Google DeepMind, the merged entity formed in 2023 when Google folded Brain into DeepMind under his leadership. He’s the person ultimately responsible for Gemini, for AlphaFold’s open database of 200M+ protein structures, and for Google’s response to OpenAI. The internal roster behind it — David Silver on reinforcement learning, Oriol Vinyals on large models, Jumper on biology — is a who’s who of modern AI.
For a developer learning AI in 2026, Hassabis matters because he’s the counterexample to a certain narrative. While most of the field chased scaling transformers on web text, DeepMind kept grinding on reinforcement learning, self-play, and domain-specific architectures — and produced results that changed science, not just benchmarks. Gemini shows they can play the pure-LLM game too. If you want to understand how AI can be more than a chatbot, read DeepMind’s papers.
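To make the self-play idea above concrete, here is a minimal, purely illustrative sketch (my own toy example, not DeepMind code): tabular Monte Carlo learning on the game of Nim with 5 stones, where both sides share one value table and improve by playing against themselves. All names and parameters here are invented for the illustration.

```python
import random

# Toy self-play sketch: Nim with 5 stones. Each turn a player removes
# 1 or 2 stones; whoever takes the last stone wins. Both "players"
# share one Q-table and improve by playing against themselves.
random.seed(0)
N, ALPHA, EPS = 5, 0.5, 0.2
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2) if a <= s}

def pick(stones, greedy=False):
    """Epsilon-greedy action choice over the shared Q-table."""
    acts = [a for a in (1, 2) if a <= stones]
    if not greedy and random.random() < EPS:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(stones, a)])

for _ in range(5000):
    stones, trail = N, []          # trail of (state, action) per move
    while stones > 0:
        a = pick(stones)
        trail.append((stones, a))
        stones -= a
    # The last mover won: walk backwards, alternating +1 / -1 rewards,
    # nudging each visited (state, action) toward its observed outcome.
    r = 1.0
    for (st, ac) in reversed(trail):
        Q[(st, ac)] += ALPHA * (r - Q[(st, ac)])
        r = -r

# With 5 stones, the first player should learn to take 2, leaving 3,
# which is a losing position for the opponent under good play.
print(pick(5, greedy=True))
```

The point is the loop shape, not the game: no human games are needed, because the learner generates its own training data and its opponent strengthens in lockstep. AlphaGo Zero and AlphaZero apply the same principle with deep networks and tree search in place of a lookup table.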
Key Articles & Papers
Human-level control through deep reinforcement learning
Mastering the game of Go with deep neural networks and tree search
Mastering the game of Go without human knowledge
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
Highly accurate protein structure prediction with AlphaFold
Neuroscience-Inspired Artificial Intelligence
Mastering Atari, Go, chess and shogi by planning with a learned model
Accurate structure prediction of biomolecular interactions with AlphaFold 3
Gemini: A Family of Highly Capable Multimodal Models