When Foundation CEO Mike LeBlanc made the rounds last month pitching autonomous combat drones, he led with the argument that tends to land best in polite company: these machines will save American lives. Pilots won’t be shot down. Soldiers won’t bleed out in forward positions. The robots go to battle, and the humans come home. It is a reasonable argument. It is also the kind of argument that has historically been used to justify things we later regret, because it stops the conversation precisely where the conversation actually needs to start.

As highlighted in a recent Time article, the real problem with autonomous weapons systems isn’t that they might malfunction. Nor is it that a drone might misidentify a target, though that is, to put it gently, a valid concern. The problem is what happens to war itself when you remove the human body from the equation and, with it, the last few friction points that make governments hesitate before starting one. Yet, according to LeBlanc, who recently sent two humanoid robot soldiers to the Ukrainian front line, “it’s a complete robot war” right now in Ukraine, with unmanned ground vehicles (UGVs) or ground-based robotic systems used so heavily that “the robot is the primary fighter and the humans are in support.” Hmm. Haven’t heard that on the news.

Wars are politically expensive partly because they are physically expensive. Soldiers die, families grieve, and, sadly, American flags come home neatly folded. That unbearable cost has always functioned as a brake, imperfect and unevenly applied as it is, on the decision to go to war. But here come autonomous weapons. They don’t just reduce casualties; they reduce the political cost of choosing violence in the first place. When the machines bear the risk, the threshold for deploying them drops significantly. That is not a glitch in the logic. That is the logic. Noting that the aim is for its Phantom MK-1 robot to wield “any kind of weapon that a human can,” Foundation’s LeBlanc, a former major in the US Marine Corps and a Harvard Business School grad, clarified:

“We think there’s a moral imperative to put these robots into war instead of soldiers.”

Then there is the accountability question, which the current legal architecture simply isn’t built to answer. Under international humanitarian law, someone is supposed to be responsible when a civilian dies in a strike: a commander, a soldier, a state. But when an algorithm makes the targeting decision in a fraction of a second, responsibility diffuses into nothing. It is spread across a software team, a procurement office, a training dataset, and a chain of command that was technically “in the loop” but only in the way a driver is in the loop when their car’s autopilot runs a red light. Meaning no one is culpable. No one is tried. The families of the dead are left with, essentially, a software liability disclaimer.

And let’s not forget the training data, a point that gets far too little attention. AI weapons systems learn to distinguish a combatant from a civilian and a threat from a non-threat by ingesting historical conflict data. That data was generated by humans who were themselves operating within flawed, biased systems. In other words, if the humans who produced the data had systemic prejudices baked into their decision-making, the AI learns those same patterns. Turning the task over to a machine does not remove bias and error. The opposite is true: the machine makes them worse, because AI can apply those patterns to far more decisions than any human ever could, and faster than any human oversight could realistically catch or correct.

What an AI military system also brings to humanity is an arms race that makes every previous arms race look orderly by comparison. That is not an exaggeration. The United States is spending massive sums in the AI arms race, and authoritarian regimes, including China and Russia, are pouring money into the same technology. The logic of mutual deterrence, which at least produced decades of anxious stability during the Cold War, doesn’t translate neatly to autonomous systems, because the calculus of who strikes first changes when the striking takes milliseconds and the machines involved were never asked whether they were willing to die for anything. Think about that for a moment. The pressure to strike pre-emptively becomes overwhelming, and the window for diplomacy closes before the diplomats have even begun talking.

Frightening as that scenario is, it is worth paying attention to who is funding this new arms race. Why? Because it is a useful reminder that the people shaping this rapidly evolving technology are not abstract. They have names, like Eric Trump, a newly appointed chief strategic adviser and investor at Foundation, and they stand to profit handsomely from a world in which starting conflicts is cheaper while stopping them is much harder.

Still, none of this means LeBlanc, or anyone else for that matter, is lying when he talks about saving American lives. As a 14-year Marine Corps veteran with multiple tours of Iraq and Afghanistan, LeBlanc undoubtedly means it. The engineers building these systems presumably believe in what they’re doing. No one wants to hear of American service members dying on the battlefield. And that’s often how it works: everyone inside the project has a sensible local justification, and nobody is standing at the top of the hill watching where the whole thing is moving. In this case, where it is moving feels more terrifying than we can yet imagine.

What we are doing right now, without anything resembling a serious public debate on the rise of AI soldiers, is removing human judgment from decisions about who lives and who dies. And we are doing it incrementally, with good branding, at a pace designed to ensure the norms lag so far behind the technology that catching up becomes functionally impossible. Like the dangerous mRNA technology slipped in during the COVID-19 pandemic, rapid technological change that impacts all of society has become the new norm. And, again, it is unsettling. Why?

Because there are no international treaties governing autonomous weapons systems. There is no agreed-upon definition of what “meaningful human control” even means in this context. There is a lot of defense contracting (Foundation already has research contracts worth a combined $24 million with the US Army, Navy, and Air Force), a lot of investor enthusiasm, and a pivotal word that keeps coming up in the marketing materials: decisive. The AI machines, we are told, will be decisive. And they will be. Still, the bigger question nobody seems to be asking out loud is how we assign accountability for what is lost when decisiveness no longer requires a human being to live with the actions of an AI soldier.


Tracy Beanz & Michelle Edwards

Tracy Beanz is an investigative journalist, Editor-in-Chief of UncoverDC, and host of the daily With Beanz podcast. She gained recognition for her in-depth coverage of the COVID-19 crisis, breaking major stories on the virus’s origin, timeline, and the bureaucratic corruption surrounding early treatment and the mRNA vaccine rollout. Tracy is also widely known for reporting on Murthy v. Missouri (formerly Missouri v. Biden), a landmark free speech case challenging government-imposed censorship of doctors and others who presented alternative viewpoints during the pandemic.