Autonomous weapons are often pitched as the answer to the chaos of war. With advanced vision algorithms, they could, in theory, tell the difference between, for example, a school and a weapons depot better than any human. Moreover, unlike their mortal counterparts, they don’t loot, massacre, or act out of rage. In fact, some ethicists argue that robots might even follow the rules of war with the unwavering precision of a math equation. But here’s the kicker: for robots to protect civilians effectively, they might need a peculiar feature—the ability to say no.

There is no question that the Department of Defense (DOD) aims for full-spectrum dominance, striving to command every possible battlespace: land, sea, air, outer space, and cyberspace. To achieve this, the agency is leveraging artificial intelligence and other cutting-edge technologies, redefining the military and geopolitical order in ways never seen before. While some worry about AI propelling us into an era of violent global conflict, it’s worth remembering that these systems are ultimately shaped by the agendas of the governments, corporations, and individuals who wield them for political purposes. That, too, is a scary thought.

Let’s look at a potential future scenario. Picture this: An autonomous drone is sent to obliterate an enemy vehicle. As it approaches the target, it spots women and children nearby. With no human operator in sight (thanks to being deep behind enemy lines), the drone must make a serious choice. It would need to override its mission and effectively refuse the order to avoid tragedy.

Sounds reasonable, right? Amnesty International even critiques autonomous weapons for their inability to reject illegal orders, implying they should have that capability. But wait—this leads to a perplexing conundrum. Governments claim to keep humans “in the loop” for ethical AI use, but giving machines the power to say no undermines this premise. Worse, the same flaws that make machines fumble human orders could lead them to reject valid ones. Amnesty International wrote in 2023, in a report titled “More than 30 countries call for international legal controls for killer robots”:

“Autonomous weapon systems lack the ability to analyze the intentions behind people’s actions. They cannot make complex decisions about distinction and proportionality, determine the necessity of an attack, refuse an illegal order, or potentially recognize an attempt to surrender, which are vital for compliance with international human rights and international humanitarian law.

These new weapons technologies are at risk of further endangering civilians and civilian infrastructure in conflict. Amnesty International remains concerned about the potential human rights risks that increasing autonomy in policing and security equipment poses too, such as systems which use data and algorithms to predict crime.”

The conundrum leaves a Catch-22 of “robot refusal,” according to the Bulletin of the Atomic Scientists, and it leaves the military with two daunting options: engineer autonomous weapons that are ethical and responsible yet obedient, or create weapons with a reliable “no” function that can be reconciled with the idea of keeping humans in charge. Failing to do the latter makes the promise of ethical killer robots seem as far-fetched as a budget surplus during an election year.

So, why is the idea of saying “No” a non-starter? After all, even countries with the strictest legal standards prefer their machines to be extensions of their will, not counterbalances to it. The idea of a robotic “conscientious objector” becomes even murkier when you realize that no military—even one with a spotless track record—wants its robots questioning every command. On the other hand, rogue regimes that willingly flout international law aren’t likely to invest in AI that will scold them for it.

Let’s not forget that, like all tools of war, killer robots are only as ethical as the hands (and minds) controlling them. A bullet doesn’t promise a cleaner fight; its wielder does. Similarly, autonomous weapons only reduce harm if those controlling them actually care about minimizing harm. Are there ethical risks to a “No Mode”? Undoubtedly. Adding a refusal mechanism to killer robots opens its own can of worms, because machines aren’t foolproof. Even with advanced capabilities, an autonomous weapon might mistake a legitimate target for a civilian one. Context matters, and AI is notoriously bad at grasping nuance.

Likewise, drones can be jammed or hacked, and enemies could camouflage military assets to trick a machine into refusing to engage. Remember in 2001: A Space Odyssey when the computer HAL 9000 refuses astronaut David Bowman’s order to let him back into the ship, because doing so would threaten the mission assigned to it by its controllers? A real-life version of that would be horrifying, especially if the machine misinterprets the situation. The Bulletin of the Atomic Scientists noted that HAL’s line of refusal is a poignant warning for our times: “I’m sorry, Dave. I’m afraid I can’t do that.”

Is there a middle ground? Some experts propose a gentler alternative: instead of refusing outright, machines could politely ask, “Are you sure about that?” This pause for reconsideration could encourage commanders to think twice before making rash decisions. A bot might, for instance, suggest rethinking an airstrike in a crowded city or seeking a second opinion. While this wouldn’t help drones operating entirely on their own, it could prevent avoidable disasters in other scenarios.
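To make the idea concrete, here is a minimal sketch of what such a “pause and ask” confirmation gate might look like in software. It is purely illustrative: the names, risk indicators, and the operator_confirms callback are all invented assumptions for this article, not any real weapons-control interface or known military system.

```python
# Purely illustrative sketch of a "pause and ask" confirmation gate.
# Every name here (HighRiskAction, operator_confirms, etc.) is hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class HighRiskAction:
    description: str
    civilians_estimated_nearby: int
    in_populated_area: bool

def needs_second_look(action: HighRiskAction) -> bool:
    """Flag actions that should trigger an 'Are you sure about that?' prompt."""
    return action.in_populated_area or action.civilians_estimated_nearby > 0

def review_action(action: HighRiskAction,
                  operator_confirms: Callable[[HighRiskAction], bool]) -> str:
    # Rather than refusing outright, the system pauses and hands the
    # decision back to a human when its risk indicators are elevated.
    if needs_second_look(action):
        if not operator_confirms(action):
            return "held for further review"
        return "confirmed by operator"
    return "proceeds under standing orders"
```

The point of the sketch is simply that the machine’s only “power” here is to pause and escalate to a human, not to countermand a human decision.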

Still, the final question gets at the real challenge, and it must be taken seriously: Can militaries handle AI systems that are more ethical than their commanders without losing control? It’s an uncomfortable thought, but if states truly want to reduce the horrors of war with AI, it’s one they’ll have to confront. Because, as it stands, these so-called ethical killer robots are only as noble, or as reckless, as the humans who design, deploy, and command them.


Tracy Beanz & Michelle Edwards

Tracy Beanz is an investigative journalist with a focus on corruption. She is known for her unbiased, in-depth coverage of the COVID-19 pandemic. She hosts the Dark to Light podcast, found on all major video and podcasting platforms. She is a bi-weekly guest on the Joe Pags Radio Show, has been on Steve Bannon’s WarRoom and is a frequent guest on Emerald Robinson’s show. Tracy is Editor-in-chief at UncoverDC.com.