PrometheusRoot

Defined deep reinforcement learning for robotics

Sergey Levine

Professor — UC Berkeley

Profile

Sergey Levine is the reason robots can learn. As a professor of EECS at UC Berkeley and a co-founder of Physical Intelligence, he sits at the exact intersection where deep learning stopped being a vision-and-text trick and started controlling motors. His lab — RAIL (Robotic AI & Learning) — has shipped a steady stream of the algorithms now used by basically every team trying to make robots behave intelligently from data instead of from hand-coded rules.

He did his PhD at Stanford under Vladlen Koltun, worked as a postdoc with Pieter Abbeel at Berkeley, then planted a flag there as faculty in 2016. His 2013 Guided Policy Search and 2015 End-to-End Training of Deep Visuomotor Policies (with Chelsea Finn and Trevor Darrell) showed something most roboticists thought was years away: a single neural network mapping raw pixels straight to motor torques, trained end-to-end. Later work with Google on QT-Opt put that idea into a warehouse full of arms learning to grasp from millions of trials. If you read the RT-1 / RT-2 papers and wondered who built the foundations underneath, a lot of it traces back to him.
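The pixels-to-torques idea is simple to state: one network takes the camera image in and emits joint commands out, with no hand-written perception or control stages in between. A toy sketch of that interface (a tiny MLP standing in for the convolutional visuomotor networks the papers actually use; every shape and name here is illustrative, not from the RAIL codebase):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_policy(obs_dim, hidden, act_dim):
    """Random weights for a two-layer policy network (toy sizes)."""
    return {
        "W1": rng.normal(0, 0.1, (obs_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, act_dim)),
        "b2": np.zeros(act_dim),
    }

def policy(params, pixels):
    """Map a flattened camera frame straight to motor torques."""
    h = np.tanh(pixels @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]  # unbounded torque commands

# Fake 16x16 grayscale frame in, 7-DoF arm torques out.
params = init_policy(obs_dim=16 * 16, hidden=32, act_dim=7)
frame = rng.random(16 * 16)
torques = policy(params, frame)
print(torques.shape)  # (7,)
```

The point is the signature, not the network: observation in, action out, everything in between learned from data.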

In 2024 he co-founded Physical Intelligence with Karol Hausman and others — a startup chasing a foundation model for robots the way OpenAI chased one for language. Their π0 and π0.5 policies pair a vision-language backbone with a diffusion-based action expert, trained on tens of thousands of hours of cross-embodiment data. The company has raised over $1B at a ~$5.6B valuation and recently demoed robots doing tasks they were never explicitly trained on. He still teaches and runs the lab in parallel.
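At a very high level, the flow-matching objective behind a diffusion-style action expert looks like the sketch below: interpolate between Gaussian noise and a ground-truth action chunk, and train the network to regress the velocity that carries the noisy sample toward the data. This is a schematic under assumed conventions (chunk length, action dimension, and time direction are all made up here), not Physical Intelligence's actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(action_chunk, t):
    """One flow-matching training pair for an action expert.

    Convention assumed here: x_t moves from pure noise (t=0) to data (t=1)
    along a straight line, so the regression target is the constant
    velocity (action - noise).
    """
    noise = rng.standard_normal(action_chunk.shape)
    x_t = (1.0 - t) * noise + t * action_chunk  # point on the straight path
    target_velocity = action_chunk - noise      # what the expert must predict
    return x_t, target_velocity

# A 50-step chunk of 7-DoF actions, as in chunked VLA policies.
actions = rng.standard_normal((50, 7))
x_t, v = flow_matching_pair(actions, t=0.3)
print(x_t.shape, v.shape)
```

At inference time the learned velocity field is integrated from noise to produce an action chunk, conditioned on the vision-language backbone's features.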

For developers learning AI, Levine is unusually accessible: his CS285 lectures are the deep RL course that everyone outside Berkeley also takes, free on YouTube. If you want to actually understand policy gradients, Q-learning, model-based RL, and offline RL from someone who built half of it, this is the source.
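To give a taste of the material, here is the policy gradient CS285 starts from, compressed onto a 3-armed bandit. For clarity this computes the exact gradient in closed form; REINFORCE estimates the same quantity from sampled actions. Reward means and learning rate are invented for illustration:

```python
import numpy as np

arm_means = np.array([0.2, 0.5, 0.9])  # arm 2 pays best (made-up rewards)
logits = np.zeros(3)
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(logits)
    expected_r = probs @ arm_means
    # Gradient of E[r] w.r.t. softmax logits: pi(a) * (r(a) - E[r]).
    # REINFORCE approximates this with r * grad log pi from samples.
    logits += lr * probs * (arm_means - expected_r)

probs = softmax(logits)
print(int(np.argmax(probs)))  # 2 -- the policy concentrates on the best arm
```

Gradient ascent on expected reward is the whole idea; everything from actor-critic to PPO is variance reduction and trust-region machinery layered on top.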

Key Articles & Papers

Guided Policy Search (2013) — The algorithm that made it tractable to train deep neural network policies for high-dimensional control. Foundation for almost everything that came after.
End-to-End Training of Deep Visuomotor Policies (2015) — First convincing demonstration of pixels-to-torques learning on a real robot. Changed what people thought deep learning could do for control.
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation (2018) — Large-scale distributed RL for robotic grasping. The blueprint for data-driven manipulation at fleet scale.
Soft Actor-Critic (2018) — Maximum-entropy off-policy RL algorithm that became the default workhorse for continuous control.
Offline Reinforcement Learning: Tutorial, Review, and Perspectives (2020) — The reference text on learning policies from logged data without online interaction. Required reading if you care about RL on real systems.
How to Train Your Robot with Deep Reinforcement Learning: Lessons We Have Learned (2021) — A pragmatic field report on what actually works when you point deep RL at physical hardware.
π0: A Vision-Language-Action Flow Model for General Robot Control (2024) — Physical Intelligence's first generalist robot policy. VLM backbone plus diffusion action expert, trained on cross-embodiment data.
π0.5: A VLA with Open-World Generalization (2025) — Extends π0 to mobile manipulation in entirely unseen environments. The argument that robot foundation models are starting to work.
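To give a feel for one item on this list: Soft Actor-Critic trains its critic toward an entropy-regularized Bellman target, using the minimum of two Q-networks (the paper's clipped double-Q trick) minus a temperature-weighted log-probability. A minimal numeric sketch, with all inputs made-up scalars rather than values from any real environment:

```python
def sac_target(reward, done, gamma, alpha, q1_next, q2_next, logp_next):
    """Soft Bellman target: r + gamma * (min(Q1, Q2) - alpha * log pi(a'|s')).

    The -alpha * log pi term is the maximum-entropy part: it pays the
    policy for staying stochastic, which helps exploration and robustness.
    """
    soft_value = min(q1_next, q2_next) - alpha * logp_next
    return reward + gamma * (1.0 - done) * soft_value

# Made-up transition: reward 1.0, non-terminal, next action has
# log pi = -1.5 under the current policy.
y = sac_target(reward=1.0, done=0.0, gamma=0.99, alpha=0.2,
               q1_next=5.0, q2_next=4.8, logp_next=-1.5)
print(round(y, 3))  # 6.049
```

Note the entropy bonus raises the target above the plain TD value (0.2 * 1.5 = 0.3 extra before discounting), which is exactly the maximum-entropy objective at work.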

Videos

Sergey Levine - Building LLMs for the Physical World - [Invest Like the Best, EP.465]
Fully autonomous robots are much closer than you think – Sergey Levine
Sergey Levine explains the challenges of real world robotics
#108 – Sergey Levine: Robotics and Machine Learning
#331 Sergey Levine: The Robot Revolution Nobody Is Talking About
Robotics Startup Founder Sergey Levine is Building Robots for Your Home (and Work) | AI in Motion
Sergey Levine, UC Berkeley: The bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
π0: A Foundation Model for Robotics with Sergey Levine - #719
Autonomous Robots: Closer Than You Think | Sergey Levine on AI, Robotics, and the Future of Work

Related People

Pieter Abbeel (pioneer), Chelsea Finn (builder)
© 2026 PrometheusRoot