PrometheusRoot

Recognition

TIME 100 AI 2025

Sakana AI co-founder, researcher turned builder

David Ha

CEO & Co-Founder — Sakana AI

Profile

David Ha (known online as hardmaru) is the co-founder and CEO of Sakana AI, the Tokyo-based research lab that became Japan’s fastest AI unicorn. He took an unusual path to AI: more than eight years as a managing director at Goldman Sachs in Japan, co-heading fixed-income trading, before he decided markets were less interesting than neural networks. He joined Google Brain in 2016, stayed six and a half years, and ran the Brain team’s Japan office. Along the way he picked up a PhD in applied mathematics from the University of Tokyo.

Ha’s research taste is distinctive — he’s drawn to evolution, complex systems, and self-organization rather than the scale-is-all-you-need school. His 2018 paper World Models, co-authored with Jürgen Schmidhuber, showed an agent could learn to drive inside its own dream of an environment and transfer that policy back to the real one. It’s one of those papers that reads like a creative experiment, not a benchmark push, and it’s been hugely influential in reinforcement learning. His blog posts on neuroevolution and attention agents from the Google Brain years have a similar flavor: small, playful, elegant.

In 2022 he left Google for Stability AI as head of research, then resigned in June 2023 during the company’s chaotic period under Emad Mostaque. Within months he had co-founded Sakana with Llion Jones (one of the co-authors of Attention Is All You Need) and Ren Ito. The pitch: nature-inspired AI, built in Japan, outside the Bay Area monoculture. Sakana has shipped Evolutionary Model Merging (using evolutionary algorithms to combine open-source models into new ones), The AI Scientist (an end-to-end autonomous research pipeline), and the Continuous Thought Machine. It’s now valued north of $2 billion and designated a “national AI champion” by the Japanese government.
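The core idea of Evolutionary Model Merging can be sketched in miniature. The toy below is an illustrative assumption, not Sakana’s actual recipe: real merges operate on open-weight LLM tensors and score candidates on real benchmarks, whereas here two tiny “models” are dictionaries of weight vectors, the “benchmark” is distance to a made-up target, and an evolutionary loop searches for per-layer interpolation coefficients.

```python
import random

# Toy stand-ins: each "model" is a dict of per-layer weight vectors.
# (In the real technique these would be full LLM weight tensors.)
model_a = {"layer1": [0.9, 0.1], "layer2": [0.2, 0.8]}
model_b = {"layer1": [0.1, 0.9], "layer2": [0.7, 0.3]}
# Hypothetical "ideal" weights standing in for benchmark evaluation.
target = {"layer1": [0.5, 0.5], "layer2": [0.45, 0.55]}

def merge(alpha):
    """Interpolate each layer: alpha[l] * A + (1 - alpha[l]) * B."""
    return {l: [a * alpha[l] + b * (1 - alpha[l])
                for a, b in zip(model_a[l], model_b[l])]
            for l in model_a}

def fitness(alpha):
    """Negative squared distance to the target (higher is better)."""
    m = merge(alpha)
    return -sum((x - y) ** 2 for l in m for x, y in zip(m[l], target[l]))

def evolve(generations=200, pop_size=20, sigma=0.1, seed=0):
    """Simple (mu + lambda)-style evolution over per-layer merge coefficients."""
    rng = random.Random(seed)
    pop = [{l: rng.random() for l in model_a} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]  # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            p = rng.choice(parents)
            # Gaussian mutation, clipped so coefficients stay in [0, 1].
            children.append({l: min(1.0, max(0.0, v + rng.gauss(0, sigma)))
                             for l, v in p.items()})
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive each generation, the best recipe found never regresses; with this toy fitness surface the search settles close to the optimal interpolation (alpha near 0.5 per layer). The real system replaces the toy fitness with actual benchmark scores and also evolves layer-wise recombinations, not just linear mixes.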

For developers learning AI, Ha matters because he’s a working counter-example to the idea that frontier AI can only come from giant US labs burning billions on pretraining. Sakana ships papers that are readable, weird, and occasionally beautiful — a reminder that there’s still research taste involved in this field, not just scale.

Key Articles & Papers

World Models (2018) — The paper that made Ha's name: agents learning to act inside their own learned dream of an environment. Co-authored with Jürgen Schmidhuber.

Evolutionary Optimization of Model Merging Recipes (2024) — Sakana's flagship technique: evolutionary algorithms that discover how to merge existing open-source models into more capable ones, no pretraining required.

The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (2024) — End-to-end autonomous research pipeline: ideation, coding, experiments, writeup. Controversial, but a genuine attempt at AI-driven science.

The AI Scientist-v2, published in Nature (2026) — The follow-up effort, in which an AI-generated paper survived real peer review at a major venue.

Continuous Thought Machines (2025) — A new neural architecture from Sakana using neuron-level timing and synchronization, closer to biological computation than standard transformers.

Attention Agent: Neuroevolution of Self-Interpretable Agents (2020) — Tiny agents with hard attention, evolved rather than gradient-trained. Classic Ha: small, weird, and insight-dense.

Otoro.net — machine learning minimalism (2015–) — Ha's personal blog from the Google Brain era. Beautiful writeups on generative art, neuroevolution, and attention. Worth reading end to end.

Controversies

Sakana’s AI Scientist has drawn sharp criticism. An independent 2025 evaluation found that 42% of its experiments failed due to coding errors, many “novel” ideas were actually well-established concepts the system failed to recognize, and some generated papers contained hallucinated numerical results. Critics — including Yale anthropologist Lisa Messeri and Princeton psychologist M.J. Crockett — argue that framing LLM pipelines as “autonomous researchers” risks narrowing science to what current AI can already do. A more balanced read: the tool produces rushed-undergraduate-quality papers for about $15 each, which is either impressive or alarming depending on where you sit. Ha has generally engaged with the criticism in public rather than dismissing it.

© 2026 PrometheusRoot