Sakana AI co-founder, researcher turned builder
David Ha
Profile
David Ha (known online as hardmaru) is the co-founder and CEO of Sakana AI, the Tokyo-based research lab that became Japan’s fastest AI unicorn. He took an unusual path to AI: he spent over eight years as a managing director at Goldman Sachs in Japan, co-heading fixed-income trading, before deciding markets were less interesting than neural networks. He joined Google Brain in 2016, stayed six and a half years, and ran the Brain team’s Japan office. Along the way he picked up a PhD in applied mathematics from the University of Tokyo.
Ha’s research taste is distinctive — he’s drawn to evolution, complex systems, and self-organization rather than the scale-is-all-you-need school. His 2018 paper World Models, co-authored with Jürgen Schmidhuber, showed an agent could learn to drive inside its own dream of an environment and transfer that policy back to the real one. It’s one of those papers that reads like a creative experiment, not a benchmark push, and it’s been hugely influential in reinforcement learning. His blog posts on neuroevolution and attention agents from the Google Brain years have a similar flavor: small, playful, elegant.
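The core World Models idea — learn a model of the environment from experience, train a policy entirely inside that learned model, then transfer it back — can be sketched in miniature. This is a toy illustration, not the paper’s VAE+RNN architecture: the environment, its 1-D dynamics, and the linear controller here are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D environment: the state drifts by the action plus noise;
# reward is highest near the origin.
def real_env_step(s, a):
    return s + a + rng.normal(0, 0.1), -abs(s)

# 1. Collect random-rollout experience from the real environment.
data, s = [], 0.0
for _ in range(500):
    a = rng.uniform(-1, 1)
    s2, _ = real_env_step(s, a)
    data.append((s, a, s2))
    s = s2

# 2. Fit a linear "world model" s' ~ w0*s + w1*a by least squares.
X = np.array([[s, a] for s, a, _ in data])
y = np.array([s2 for _, _, s2 in data])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# 3. Train a policy a = -k*s entirely inside the learned model ("the dream"):
# pick the gain k whose imagined rollouts stay closest to the origin.
def dream_return(k, steps=50):
    s, total = 2.0, 0.0
    for _ in range(steps):
        a = float(np.clip(-k * s, -1, 1))
        s = w[0] * s + w[1] * a   # imagined transition, no real env calls
        total += -abs(s)
    return total

best_k = max(np.linspace(0, 2, 41), key=dream_return)

# 4. Transfer the dream-trained policy back to the real environment.
s, real_total = 2.0, 0.0
for _ in range(50):
    a = float(np.clip(-best_k * s, -1, 1))
    s, r = real_env_step(s, a)
    real_total += r
```

The point of the structure is step 3: once the model is fit, policy search never touches the real environment, which is what made the "learning inside its own dream" framing apt.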
In 2022 he left Google for Stability AI as head of research, then resigned in June 2023 during the company’s chaotic period under Emad Mostaque. Within months he had co-founded Sakana with Llion Jones (one of the co-authors of Attention Is All You Need) and Ren Ito. The pitch: nature-inspired AI, built in Japan, outside the Bay Area monoculture. Sakana has shipped Evolutionary Model Merging (using evolutionary algorithms to combine open-source models into new ones), The AI Scientist (an end-to-end autonomous research pipeline), and the Continuous Thought Machine. It’s now valued north of $2 billion and designated a “national AI champion” by the Japanese government.
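Evolutionary model merging, in its simplest form, means searching over how to mix the weights of existing models rather than training new ones. A minimal sketch of that idea, with invented stand-ins: two tiny linear "parent models" and a basic evolution strategy over per-parameter mixing coefficients (Sakana’s actual recipes operate on real LLM checkpoints and also merge in data-flow space).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "parent models": weight vectors for the same linear task
# y = x @ w, each good at a different half of it.
w_a = np.array([1.0, 0.0])
w_b = np.array([0.0, 1.0])
w_true = np.array([1.0, 1.0])   # behavior the merged model should recover

X = rng.normal(size=(64, 2))
y = X @ w_true

def fitness(alpha):
    # Per-parameter interpolation: merged = alpha*w_a + (1-alpha)*w_b,
    # scored as negative mean-squared error on the task data.
    merged = alpha * w_a + (1 - alpha) * w_b
    return -np.mean((X @ merged - y) ** 2)

# Simple (mu, lambda) evolution strategy over the mixing coefficients --
# the knob that evolutionary merging tunes instead of gradient descent.
pop = rng.normal(0.5, 0.5, size=(32, 2))
for gen in range(50):
    scores = np.array([fitness(a) for a in pop])
    elite = pop[np.argsort(scores)[-8:]]              # keep the best 8
    parent = elite.mean(axis=0)                       # recombine
    pop = parent + rng.normal(0, 0.1, size=(32, 2))   # mutate

best = max(pop, key=fitness)
merged = best * w_a + (1 - best) * w_b
```

No gradients flow through the parents here — only their combination is optimized, which is why merging is cheap relative to pretraining.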
For developers learning AI, Ha matters because he’s a working counter-example to the idea that frontier AI can only come from giant US labs burning billions on pretraining. Sakana ships papers that are readable, weird, and occasionally beautiful — a reminder that there’s still research taste involved in this field, not just scale.
Key Articles & Papers
World Models
Evolutionary Optimization of Model Merging Recipes
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
The AI Scientist-v2, Published in Nature
Continuous Thought Machines
Attention Agent: Neuroevolution of Self-Interpretable Agents
Otoro.net — machine learning minimalism
Controversies
Sakana’s AI Scientist has drawn sharp criticism. An independent 2025 evaluation found that 42% of its experiments failed due to coding errors, many “novel” ideas were actually well-established concepts the system failed to recognize, and some generated papers contained hallucinated numerical results. Critics — including Yale anthropologist Lisa Messeri and Princeton psychologist M.J. Crockett — argue that framing LLM pipelines as “autonomous researchers” risks narrowing science to what current AI can already do. A more balanced read: the tool produces rushed-undergraduate-quality papers for about $15 each, which is either impressive or alarming depending on where you sit. Ha has generally engaged with the criticism in public rather than dismissing it.