ML paper reviewer, YouTube educator
Yannic Kilcher
Profile
Yannic Kilcher is a Swiss ML researcher who runs what is arguably the most rigorous machine learning channel on YouTube. While most AI YouTubers skim abstracts and parrot hype, Kilcher actually opens the PDF, walks through the equations, and tells you whether the math checks out. For developers trying to keep up with the flood of papers on arXiv, he is less a personality and more a filter — the guy who reads the paper so you can decide whether to.
He has a PhD from ETH Zürich and is CTO and co-founder of DeepJudge, a Swiss legal-tech startup building NLP tools for law firms. That day job gives his channel a grounding most full-time content creators lack — he ships production ML, and it shows in how he reads papers: with a builder’s eye for what’s actually reproducible versus what’s a cherry-picked benchmark result.
Beyond the paper reviews, Kilcher was a driving force behind OpenAssistant, the LAION-led community effort to build an open-source alternative to ChatGPT. Released in April 2023 with crowdsourced data from over 13,000 volunteers, it was one of the first serious attempts at a fully open conversational model. The project has since wound down, but the datasets it produced still float around the open-model ecosystem.
What makes Kilcher worth following is his willingness to be a skeptic in public. He calls out papers for over-claiming, highlights when reviewers missed obvious flaws, and runs an “ML News” format that covers the drama and politics of the field alongside the research. If you want a weekly signal on what’s actually happening in AI research — not just what Twitter says is happening — his channel is close to required viewing.
Controversies
In June 2022, Kilcher released GPT-4chan, a language model he fine-tuned on 134 million posts from 4chan’s /pol/ board, then deployed as a bot that posted over 15,000 messages on the forum without disclosure. He uploaded the model to Hugging Face, which later restricted access. A condemnation letter organized by Percy Liang and Rob Reich at Stanford was signed by hundreds of AI researchers, arguing the release violated norms around safety and human-subjects research. Kilcher framed the project as a prank and argued no concrete harm had been documented. The episode remains a reference case in debates about open-model release ethics.