The Godfather of Deep Learning
Geoffrey Hinton
Profile
Geoffrey Hinton is the reason deep learning exists as a field. For four decades, through two “AI winters” when nearly everyone abandoned neural networks, he kept building them, training them, and proving they worked. In the early 1980s he co-invented Boltzmann machines, borrowing from statistical physics. In 1986 he co-wrote the paper that popularized backpropagation. In 2012, with his students Alex Krizhevsky and Ilya Sutskever, his Toronto lab produced AlexNet, the ImageNet submission that dropped the top-5 error rate from roughly 26% to 15% overnight and kicked off the modern AI boom. In 2024 he shared the Nobel Prize in Physics with John Hopfield for it all.
The student tree he grew at the University of Toronto now runs the industry. Sutskever co-founded OpenAI. Yann LeCun did his postdoc in Hinton’s lab. Yoshua Bengio shares the “godfathers” title and the 2018 Turing Award with him. Ruslan Salakhutdinov, Alex Graves, Nitish Srivastava, George Dahl: his lab is a directory of the people who built the field. Hinton himself sold his startup DNNresearch to Google in 2013 and spent a decade at Google Brain.
Then in May 2023 he quit Google, explicitly so he could speak freely about what he’d helped create. Since then he’s been among the most credible voices warning about existential AI risk: smarter-than-human systems, misuse by bad actors, and mass job displacement. He estimates a 10–20% chance that AI wipes out humanity. This is not a doomer hermit making the rounds; it’s the guy who built the thing, on 60 Minutes, saying that part of him regrets his life’s work.
For developers learning AI today, Hinton is worth studying for two reasons. The obvious one: his papers (backprop, dropout, knowledge distillation, capsule networks, the Forward-Forward algorithm) are the foundation the rest of the field is built on. The less obvious one: he’s an object lesson in conviction. He was right about neural networks for thirty years while the field told him he was wrong. When someone with that track record starts warning about something, it’s worth at least hearing the argument.
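To make one of those papers concrete: below is a minimal sketch of dropout, the regularizer from “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” The function name, the toy vector, and the drop probability are illustrative choices; the sketch uses the common inverted-dropout form (rescaling survivors at training time), whereas the original paper instead scales weights at test time. The two are equivalent in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_drop=0.5, train=True):
    # Inverted dropout: during training, zero each unit with probability
    # p_drop and rescale survivors by 1/(1 - p_drop) so the expected
    # activation is unchanged; at inference it is the identity.
    if not train:
        return x
    mask = rng.random(x.shape) >= p_drop   # keep each unit with prob 1 - p_drop
    return x * mask / (1.0 - p_drop)

# Toy usage: one hidden-activation vector.
h = np.array([0.2, 1.5, -0.7, 3.0])
print(dropout(h))               # some entries zeroed, survivors scaled up
print(dropout(h, train=False))  # unchanged at inference
```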
Key Articles & Papers
Learning representations by back-propagating errors
ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Distilling the Knowledge in a Neural Network
A Fast Learning Algorithm for Deep Belief Nets
Dynamic Routing Between Capsules
The Forward-Forward Algorithm: Some Preliminary Investigations
Reducing the Dimensionality of Data with Neural Networks
Nobel Lecture: Boltzmann Machines
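One of these translates to code especially directly. Here is a minimal sketch of the training objective from “Distilling the Knowledge in a Neural Network”: a small student network learns from the teacher’s temperature-softened outputs blended with the true labels. The temperature T, the weighting alpha, and the toy logits below are illustrative values, not taken from the paper; the T**2 rescaling of the soft term does follow the paper’s recommendation.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft part: cross-entropy between teacher and student distributions,
    # both softened at temperature T, scaled by T**2 so its gradient
    # magnitude stays comparable to the hard part.
    soft_targets = softmax(teacher_logits, T)
    soft_loss = -(soft_targets * np.log(softmax(student_logits, T))).sum(axis=-1).mean() * T**2
    # Hard part: ordinary cross-entropy against the true labels.
    probs = softmax(student_logits)
    hard_loss = -np.log(probs[np.arange(len(labels)), labels]).mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: 2 examples, 3 classes (logits are made up).
teacher = np.array([[4.0, 1.0, 0.1], [0.2, 3.5, 0.3]])
student = np.array([[2.0, 0.5, 0.0], [0.1, 2.0, 0.4]])
print(distillation_loss(student, teacher, labels=np.array([0, 1])))
```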
Controversies
Hinton’s 2023 exit from Google and subsequent warnings about existential AI risk have put him in direct conflict with former students and colleagues, most notably Yann LeCun, who publicly dismisses extinction-risk framing as unfounded. The rift is real and ongoing: two of the three “godfathers” now strongly disagree on whether their creation might kill us. Hinton has also been criticized by some AI ethics researchers (Timnit Gebru, Margaret Mitchell) for focusing on speculative long-term risks while downplaying present-day harms like bias, labor exploitation, and concentration of power. He has since broadened his warnings to include near-term harms such as job displacement, misuse, and autonomous weapons, but the critique that he was late to those conversations remains.