Shield AI co-founder, Navy SEAL turned defense AI builder
Brandon Tseng
Profile
Brandon Tseng co-founded Shield AI in 2015 with his brother Ryan Tseng and engineer Andrew Reiter, after seven years in the Navy, six of them as a SEAL officer deploying to Afghanistan and the Pacific. The origin story is well-worn by now: a 2012 mission in Uruzgan province where his unit took casualties clearing a building they couldn’t see inside. He wanted a better tool. He ended up building a company. He now serves as President, having handed the CEO role to Gary Steele in December 2025, but he remains the company’s public face and policy voice.
Shield AI’s core bet is that the hard problem in modern conflict isn’t hardware, it’s autonomy. Their answer is Hivemind, an AI pilot stack designed to fly aircraft in GPS-denied, comms-denied environments — the kind of electronic-warfare soup that grounds remote-piloted drones in Ukraine. Hivemind flies Shield’s own V-BAT vertical-takeoff fixed-wing drone (acquired via Martin UAV), and they’re now pushing it onto a jet-powered X-BAT stealth “loyal wingman” concept revealed in October 2025. The company closed a $240M Series F in March 2025 at $5.3B, extended to $5.6B, and is raising a Series G at a $12.7B valuation with projected 2026 revenue north of $540M. Not a demo company.
For developers, Shield AI is one of the more interesting case studies in applied AI outside the consumer LLM bubble. Hivemind is essentially a reinforcement-learning-plus-classical-autonomy pipeline running on edge compute inside a flying thing that can’t phone home. It’s also the clearest example of what the phrase “AI pilot” actually buys you in production: course correction under jamming, cooperative behavior across a swarm, and mission completion when the human link drops. Shield is now trying to productize this as Hivemind Enterprise — a developer platform for third parties to build their own autonomy stacks on top.
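At its core, the “mission completion when the human link drops” behavior described above is a fallback policy: follow the operator while the link holds, otherwise keep executing the pre-briefed onboard plan. A minimal sketch of that idea, using hypothetical names (`LinkState`, `next_action`) that are illustrative only and not Shield AI’s actual API:

```python
from enum import Enum, auto

class LinkState(Enum):
    NOMINAL = auto()   # operator link healthy
    JAMMED = auto()    # link degraded by electronic warfare
    LOST = auto()      # link gone entirely

def next_action(link, operator_cmd, onboard_plan):
    """Prefer the operator's command; fall back to the onboard
    mission plan whenever the link is degraded or absent."""
    if link is LinkState.NOMINAL and operator_cmd is not None:
        return operator_cmd
    # comms-denied: keep flying the pre-briefed mission autonomously
    return onboard_plan
```

The real stack layers learned policies and classical planners under this kind of arbitration, but the human-in-the-loop-with-autonomous-fallback shape is the same.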
Brandon is loudest on two things: that the U.S. procurement system is too slow to match adversaries iterating weekly, and that fully autonomous lethal weapons are a line nobody in the room actually wants to cross. The second claim is where he draws the most heat, and fairly so (see below). He’s a useful figure to watch because he sits at the intersection developers learning AI tend to ignore — defense, hardware, edge deployment — where the engineering constraints are real and the stakes are not hypothetical.
Key Articles & Papers
Testimony to the House Armed Services Committee on Technology & Innovation
Statement for U.S. Senate AI Insight Forum on National Security
Shield AI's founder on death, drones in Ukraine, and the AI weapon 'no one wants'
The Revolution of AI-Enabled Autonomous Piloting with Shield AI's Brandon Tseng
Shield AI raises $240M at $5.3B valuation to scale Hivemind Enterprise
Shield AI unveils X-BAT: an AI-piloted VTOL stealth fighter
Shield AI projecting $540M in 2026 revenue as valuation more than doubles to $12.7B
Controversies
Tseng is vocal that fully autonomous lethal weapons are a line “no one wants” crossed, and positions Shield AI as building the human-in-the-loop alternative. In February 2026 DroneXL pointed out the tension: Shield AI announced a partnership with South Korean defense contractor LIG Nex1 to equip V-BAT with guided missiles. Whether that counts as a contradiction depends on how strictly you read “autonomous” — Shield’s position is that arming a human-directed platform is categorically different from delegating kill decisions to the model. It’s a real debate worth tracking, not a gotcha, but the next few years of defense-AI ethics will turn on exactly where that line gets drawn.
Spotify Podcasts