PrometheusRoot

The Godfather of Deep Learning

Geoffrey Hinton

Professor Emeritus — University of Toronto
VP Engineering (2013–2023) — Google Brain

Profile

Geoffrey Hinton is the reason deep learning exists as a field. For four decades — through two “AI winters” when nearly everyone abandoned neural networks — he kept building them, training them, and proving they worked. In the early 1980s he co-invented Boltzmann machines with Terry Sejnowski, borrowing ideas from statistical physics. In 1986 he co-wrote, with David Rumelhart and Ronald Williams, the paper that popularized backpropagation. In 2012, with his students Alex Krizhevsky and Ilya Sutskever, his Toronto lab produced AlexNet — the ImageNet submission that dropped the top-5 error rate from roughly 26% to 15% overnight and kicked off the modern AI boom. In 2024 he shared the Nobel Prize in Physics with John Hopfield for this body of work.

The student tree he grew at the University of Toronto now runs the industry. Sutskever co-founded OpenAI. Yann LeCun worked alongside him as a postdoc. Yoshua Bengio shares the “godfathers” title and the 2018 Turing Award with him. Russ Salakhutdinov, Alex Graves, Nitish Srivastava, George Dahl, Alex Krizhevsky — his lab is a directory of the people who built the field. Hinton himself sold his startup DNNresearch to Google in 2013 and spent a decade at Google Brain.

Then in May 2023 he quit Google — explicitly so he could speak freely about what he’d helped create. Since then he’s been the most credible voice warning about existential AI risk: smarter-than-human systems, misuse by bad actors, and mass job displacement. He estimates a 10–20% chance AI wipes out humanity. Not a doomer hermit making the rounds — the guy who built the thing, on 60 Minutes, saying he regrets his life’s work.

For developers learning AI today, Hinton is worth studying for two reasons. The obvious one: his papers (backprop, dropout, knowledge distillation, capsule networks, the Forward-Forward algorithm) are the foundation the rest of the field is built on. The less obvious one: he’s an object lesson in conviction. He was right about neural networks for thirty years while the field told him he was wrong. When someone with that track record starts warning about something, it’s worth at least hearing the argument.
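Of the techniques just named, dropout is the one most developers will meet first in code. As a minimal sketch in NumPy (this shows "inverted" dropout, the variant modern frameworks use; the function below is illustrative, not the paper's own code):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p during training,
    scaling survivors by 1/(1-p) so the expected activation is unchanged.
    At inference time this is the identity, so no rescaling is needed."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p          # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

h = np.ones((4, 8))                          # a batch of hidden activations
out = dropout(h, p=0.5, training=True)       # survivors scaled to 2.0, rest zeroed
```

The scaling by 1/(1-p) is why no code change is needed at test time, which is what made the trick so easy to adopt.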

Key Articles & Papers

Learning representations by back-propagating errors (1986) — The Rumelhart, Hinton, Williams paper that popularized backpropagation and made deep networks trainable.
ImageNet Classification with Deep Convolutional Neural Networks (AlexNet, 2012) — The paper that ended the AI winter. With Krizhevsky and Sutskever — the shot heard round the field.
Dropout: A Simple Way to Prevent Neural Networks from Overfitting (2014) — Regularization trick that's still standard practice. You've used it.
Distilling the Knowledge in a Neural Network (2015) — Knowledge distillation — how to compress a big model's behavior into a smaller one. Foundation for every small/fast model today.
A Fast Learning Algorithm for Deep Belief Nets (2006) — Layer-wise pretraining that kicked off the deep learning revival a full six years before AlexNet.
Dynamic Routing Between Capsules (2017) — Capsule networks — Hinton's attempt to encode part-whole hierarchies. Didn't catch on, but the ideas influenced later work.
The Forward-Forward Algorithm: Some Preliminary Investigations (2022) — Hinton proposing a replacement for backprop — two forward passes, no backward pass. NeurIPS keynote material.
Reducing the Dimensionality of Data with Neural Networks (2006) — Autoencoders for dimensionality reduction. A Science paper that helped make deep nets respectable again.
Nobel Lecture: Boltzmann Machines (2024) — His Nobel address — how statistical physics gave neural networks their learning rule.
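The distillation idea fits in a few lines. A sketch of the loss in plain NumPy, assuming the usual formulation (the temperature T and the T² scaling are from the 2015 paper; the alpha blend weight and the function names here are illustrative choices, not the paper's code):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of (a) cross-entropy between the student's and the teacher's
    temperature-softened distributions, scaled by T^2 so gradients keep a
    consistent magnitude, and (b) ordinary cross-entropy on the hard labels."""
    p_teacher = softmax(teacher_logits, T)
    soft = -(p_teacher * np.log(softmax(student_logits, T))).sum(axis=-1).mean() * T * T
    log_p = np.log(softmax(student_logits))
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch: the teacher is confident in class 0, the student roughly agrees.
student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[3.0, 0.0, -2.0]])
loss = distillation_loss(student, teacher, labels=np.array([0]))
```

The softened teacher distribution carries the "dark knowledge" about which wrong classes are almost right, which is what lets a small student outperform one trained on hard labels alone.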

Controversies

Hinton’s 2023 exit from Google and subsequent warnings about existential AI risk have put him in direct conflict with former students and colleagues, most notably Yann LeCun, who publicly dismisses extinction-risk framing as unfounded. The rift is real and ongoing — two of the three “godfathers” now strongly disagree on whether their creation might kill us. Hinton has also been criticized by some AI ethics researchers (Timnit Gebru, Margaret Mitchell) for focusing on speculative long-term risks while downplaying present-day harms like bias, labor exploitation, and concentration of power. He has since broadened his warnings to include near-term harms — job displacement, misuse, autonomous weapons — but the critique that he’s late to those conversations remains.

Spotify Podcasts

Five Decades of Neural Networks with Geoffrey Hinton
How Do Our Brains Work? with the Godfather of AI
Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
'Godfather of AI' Geoffrey Hinton Rings the Warning Bells
The Origins of Artificial Intelligence with Geoffrey Hinton
The Man Who Wrote The Book On AI: 2030 Might Be The Point Of No Return! We've Been Lied To About AI!
The Truth About AI They Don't Want You To Know - Sebastian Mallaby
The A.I. Minecraft Experiment Is Disturbing.
The Alien in the Room
EP 1: Ready or Not

Related People

Yann LeCun · Ilya Sutskever · Yoshua Bengio
© 2026 PrometheusRoot