Yann LeCun
Convolutional nets inventor, Meta's Chief AI Scientist
Profile
Yann LeCun is one of the three people who made modern deep learning possible, alongside Geoffrey Hinton and Yoshua Bengio — the trio shared the 2018 Turing Award for their work. In the late 1980s at Bell Labs, he built LeNet, the convolutional neural network that read handwritten zip codes for the US Postal Service and eventually processed a significant fraction of all checks in North America. Every image classifier, every face detector, every computer vision system descends from that architecture. If you use a camera with AI features, LeCun’s fingerprints are on it.
Today he is Chief AI Scientist at Meta, where he founded Facebook AI Research (FAIR) in 2013, and a professor at NYU. FAIR is the lab behind PyTorch and Llama — which means LeCun arguably did more than any other single scientist to make open-weight AI a viable counter to the closed labs. He is loud, French, combative, and operates on X like someone who enjoys a brawl. That matters, because his platform is how his ideas reach developers.
LeCun’s current crusade is against autoregressive LLMs. He argues — publicly, repeatedly, and with some glee — that they are a dead end for real intelligence. Next-token prediction, he says, cannot build a model of the world; it cannot plan; it cannot reason in the way a cat can. His alternative is JEPA (Joint Embedding Predictive Architecture), a framework for learning world models from video and sensory data without generating pixels. Whether or not he turns out to be right, the position is useful: it keeps the field honest about the gap between benchmark performance and actual understanding.
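To make the target of that critique concrete, here is a minimal sketch of autoregressive next-token prediction, the recipe LeCun argues cannot yield world models or planning. It uses a toy bigram counter rather than a neural network, and every name in it (`train_bigram`, `generate`) is illustrative, not from any real library; the point is only the shape of the loop, where each token is chosen from statistics over the preceding token alone.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions in a toy corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return counts

def generate(counts, start, max_tokens=5):
    """The autoregressive loop: each new token depends only on
    what has already been emitted, with no model of the world."""
    out = [start]
    for _ in range(max_tokens):
        options = counts.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy next-token choice
    return " ".join(out)

corpus = ["the cat sat on the mat", "the cat chased the mouse"]
model = train_bigram(corpus)
print(generate(model, "the"))
```

A real LLM replaces the bigram table with a transformer conditioned on the whole context, but the objective is the same: predict the next token. JEPA's departure, by contrast, is to predict in an abstract representation space rather than emitting tokens or pixels at all.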
For a developer learning AI, LeCun is worth following for three reasons. He was there for the first breakthrough and can talk about why it worked in terms of the math, not the hype. He is betting heavily on open science at a company that has the compute to back it. And he is the loudest voice in the mainstream saying the current recipe is not enough — a view that is often uncomfortable but historically tends to age well.
Books
Quand la machine apprend ("When the Machine Learns"). LeCun's 2019 French-language book on the rise of deep learning, his personal history in the field, and where AI is headed. Not yet translated into English.
Key Articles & Papers
- Gradient-Based Learning Applied to Document Recognition
- Backpropagation Applied to Handwritten Zip Code Recognition
- Deep Learning
- A Path Towards Autonomous Machine Intelligence
- Efficient BackProp
- Dimensionality Reduction by Learning an Invariant Mapping
- I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
- V-JEPA: Revisiting Feature Prediction for Learning Visual Representations from Video
Controversies
- The 2020 bias debate. On Twitter, LeCun argued a biased output from a face upscaling model (PULSE) reflected biased training data, not a biased algorithm. Timnit Gebru and others pushed back that the framing was reductive. The exchange got heated, LeCun briefly left Twitter, and the episode became a recurring reference point in debates about how ML researchers discuss fairness.
- AI doom feuds. LeCun is openly dismissive of existential-risk arguments from figures like Eliezer Yudkowsky and sparring partners like Gary Marcus. He has called doomer scenarios “preposterously stupid” and argues open-source AI is safer, not more dangerous — a position that puts him at odds with much of the safety community but aligned with Meta’s release strategy.
- Friction with Elon Musk. Public exchanges on X about xAI, scientific integrity, and open research have been frequent and unflattering on both sides.
Spotify Podcasts