Jeremy Howard
fast.ai founder, practical deep learning for everyone
Profile
Jeremy Howard is the closest thing the deep learning world has to a populist preacher. An Australian data scientist who co-founded fast.ai with Rachel Thomas in 2016, he built one of the most popular deep learning courses on the internet by inverting how the field is normally taught. Instead of starting with linear algebra and working up to neural networks over a semester, fast.ai puts you in front of a working image classifier in lesson one. The math comes later, only when you need it. It’s the opposite of how Stanford does it, and for practitioners it works better.
Before fast.ai, Howard was president and chief scientist at Kaggle, where he topped global leaderboards on real problems years before “data scientist” was a job title. Earlier still he founded Fastmail, one of the few independent email providers still standing. In 2018 he and Sebastian Ruder published ULMFiT, which showed that transfer learning — pretrain a language model, then fine-tune it for your task — worked spectacularly well for NLP. The technique now underpins essentially every LLM you use; fine-tuning didn’t start with GPT, it started here.
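The ULMFiT recipe described above (reuse a pretrained component unchanged, then train only a small task-specific head on the new task's data) can be sketched in a few lines of dependency-free Python. This is a toy numerical illustration, not the paper's actual architecture, loss, or gradual-unfreezing schedule; every weight and data point here is invented.

```python
# Toy sketch of the transfer-learning recipe ULMFiT popularised:
# freeze the "pretrained" part, fine-tune only a new task head.

# "Pretrained" feature extractor -- frozen, never updated during fine-tuning.
PRETRAINED_W = 2.0

def features(x):
    return PRETRAINED_W * x

# Task-specific head: the only trainable parameter.
head_w = 0.0

# Synthetic task data where the ideal head weight is 3.0.
data = [(x, 3.0 * features(x)) for x in (0.5, 1.0, 1.5, 2.0)]

# Fine-tune the head with plain SGD; the extractor stays frozen throughout.
lr = 0.05
for _ in range(200):
    for x, y in data:
        f = features(x)
        pred = head_w * f
        grad = 2 * (pred - y) * f  # d/d(head_w) of the squared error
        head_w -= lr * grad

print(round(head_w, 3))  # -> 3.0
```

The point of the freeze-then-fine-tune split is data efficiency: the expensive general-purpose component is trained once on abundant data, and only a tiny number of parameters need the scarce task-specific data.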
In late 2023 Howard launched Answer.AI with Eric Ries (of Lean Startup fame), framed as “a new old kind of R&D lab” — small, distributed, focused on shipping practical things from research breakthroughs. The first big release was FSDP+QLoRA, which made it possible to fine-tune a 70B-parameter model on two consumer gaming GPUs at home. It’s the kind of work that perfectly matches Howard’s worldview: take something only the well-funded labs can do, and put it in the hands of regular developers.
He’s also one of the most consistent and well-argued voices against the regulatory capture of AI. His view, laid out in AI Safety and the Age of Dislightenment, is that licensing regimes for foundation models concentrate power in a handful of incumbents and make the world less safe, not more. Whether you agree or not, his arguments are the strongest version of the open-source case — and worth reading before forming an opinion on AI policy.
Books
Deep Learning for Coders with fastai and PyTorch
Howard and Sylvain Gugger's book version of the fast.ai course — the canonical practitioner's introduction to deep learning, with all chapters available free as Jupyter notebooks.
Key Articles & Papers
Universal Language Model Fine-tuning for Text Classification (ULMFiT)
You can now train a 70b language model at home
AI Safety and the Age of Dislightenment
SB-1047 will stifle open-source AI and decrease safety
A new old kind of R&D lab
Enabling 70B Finetuning on Consumer GPUs
Controversies
Howard has been vocally critical of major AI safety legislation, particularly California’s SB-1047, and of the broader push for foundation-model licensing regimes. Critics — including some prominent AI safety researchers — argue he understates catastrophic-risk scenarios; supporters argue he’s the rare voice doing the work to interview economists, lawyers, and alignment researchers before forming a position. Either way, his arguments are substantive and worth engaging with rather than dismissing.