Somewhere right now, a drone is circling a battlefield, not waiting for a pilot’s joystick, but for a line of code to say: “fire.” Supporters call this progress. Critics call it a moral cliff. In this episode, we step right up to that edge and ask: who’s actually in control?
More and more, that fateful line of code is being written into systems that don’t just *assist* humans, but can *act* without asking permission. Militaries now train algorithms to sift satellite feeds, flag “suspicious” movements, and even match patterns of behaviour to likely threats. In some operations, the first “eyes” on a potential target are no longer human at all. And once you trust software to point at what matters, the temptation grows to let it press the trigger, too. Think of a busy kitchen where the chef begins by letting a smart oven handle the timing—then slowly lets it choose the temperature, the menu, even which orders to prioritise. At what point has the chef become a spectator? In armed conflict, that shift isn’t just technical; it reshapes responsibility, risk, and what we count as a “human decision” in war.
Today’s reality is more fragmented and messy than a simple “humans in vs. humans out” switch. AI now filters sensor data, suggests strike windows, predicts equipment failures, and hunts for cyber intrusions long before any shot is fired. Some systems just colour a map; others, like loitering munitions and sentry guns, blur into weapons that wait, watch, and move on their own. Legal reviews, rules of engagement, and battlefield training are scrambling to catch up. The real question is shifting from *whether* to use AI in conflict, to *how far* we quietly let it set the tempo and terms of combat.
A striking feature of the current debate is how different communities talk past each other. Engineers ask, “Can we make this reliable?” Lawyers ask, “Can this comply with international humanitarian law?” Ethicists ask, “Should this exist at all?” Soldiers, meanwhile, are asking a more visceral question: “Will this keep my people alive?” The friction between those questions is where the ethics of automated conflict really lives.
The legal frame sounds simple: anyone carrying out an attack must distinguish combatants from civilians, keep force proportionate, and take “precautions in attack.” With AI-heavy systems, each of those duties gets smeared across a chain of people and code. If a loitering munition misidentifies a target because it was trained on biased data, who failed in their obligation: the commander who trusted it, the state that approved it, or the developer who tuned the model?
Supporters of greater autonomy point to potential *moral* advantages. Machines don’t panic, seek revenge, or get tired. An autonomous weapons system (AWS) that never fires outside its coded constraints might, in principle, cause fewer unlawful deaths than a frightened conscript. Some argue there could even be a “duty to deploy” such systems if they demonstrably reduce overall harm. Which brings us to the core of the “meaningful human control” debate: how much judgment must a human personally exercise for a decision to count as *theirs*, rather than the machine’s?
Critics respond that judgment in war isn’t just about correctly labelling pixels or radio signals. It involves interpreting surrender gestures, cultural cues, and context that current systems cannot reliably parse. They also fear an “accountability gap,” where every actor can plausibly say: “I followed procedure; the system failed.” That prospect undercuts not only legal responsibility, but the moral practice of owning up to wrongful harm.
There’s also a strategic concern: once some actors field faster, cheaper, partially autonomous systems, others feel pressure to match them or risk disadvantage. In deterrence theory, even small shifts in reaction time can change crisis dynamics. A state that believes its opponent’s defences are run at machine speed may feel compelled to automate more of its own responses, raising the risk that misclassification or spoofed signals could trigger unintended escalation. Whether we treat AI as a stabiliser or an accelerant of conflict will depend less on any single system, and more on the arms race logic we collectively choose to resist—or indulge.
Consider three snapshots. First: along the Korean DMZ, the Samsung SGR‑A1 sentry gun reportedly does more than point a camera; it can track a moving figure at two kilometres, issue a challenge, and, if configured, fire unless a human intervenes. Second: loitering munitions like the IAI Harop circle for hours, homing in on emissions that match a preset profile. Their operators choose the area and rules, but the final moment of descent can be triggered by pattern matches no person watches in real time. Third: software like Project Maven turns torrents of video into ranked “items of interest,” shrinking analysis time and subtly reshaping what commanders even see as options. Across these examples, the hard question isn’t simply whether a person is “in the loop,” but how their role quietly shifts: from decider, to supervisor, to someone who mostly approves what the system has already framed as the obvious, timely move. An AWS deciding when to fire is like an automated trading bot moving millions in milliseconds: formally overseen, practically driving the tempo.
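To make that drift from decider to approver concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration (the veto window, the confidence scores, the function names); it is not modelled on any real system. The point is structural: the software ranks and proposes, and unless the human actively objects within a short window, the proposal simply stands.

```python
import time

# Toy illustration (hypothetical, not modelled on any real system):
# the software frames the choice, and a human's silence counts as consent.

VETO_WINDOW_SECONDS = 3.0  # invented value, purely for illustration

def propose_action(candidates):
    """Rank candidate tracks by model confidence and propose the top one."""
    return max(candidates, key=lambda c: c["confidence"])

def supervised_decision(proposal, veto_pressed, window=VETO_WINDOW_SECONDS):
    """Wait out the veto window; if no one objects, the proposal stands."""
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        if veto_pressed():                        # operator actively objects
            return {"outcome": "hold", "reason": "human veto"}
        time.sleep(0.1)
    return {"outcome": "proceed", "proposal": proposal,
            "reason": "veto window lapsed"}       # silence becomes consent

if __name__ == "__main__":
    candidates = [
        {"id": "track-17", "confidence": 0.62},
        {"id": "track-04", "confidence": 0.91},
    ]
    proposal = propose_action(candidates)
    # An overloaded supervisor who never presses veto:
    print(supervised_decision(proposal, veto_pressed=lambda: False, window=0.5))
```

Notice that nothing in the sketch forbids human judgment; it just makes silence count as consent, which is exactly how supervision shades into spectating.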
Over the next decade, battlefields could start to resemble high‑frequency trading floors, where software probes for tiny advantages and reacts before humans can blink. States might demand “ethical black boxes” in every system, logging each micro‑choice for later investigation. Commanders could consult risk dashboards that forecast civilian harm the way a weather app forecasts rain. Yet as these tools spread to militias and private actors, the real struggle may be agreeing who’s allowed to own such capabilities.
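If such “ethical black boxes” ever became a requirement, what would a single record even contain? Below is a purely hypothetical sketch in Python (every field name, version string, and value is invented for illustration, not drawn from any real standard or system) of a minimal audit entry investigators might want: what the model saw, what it recommended, and what the human did, or didn’t do, in the time available.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of one "ethical black box" record, so investigators
# could later reconstruct who (or what) shaped a decision. All field names
# are invented for illustration and not drawn from any real standard.

@dataclass
class DecisionRecord:
    timestamp: str             # when the recommendation was produced
    model_version: str         # exact software that made the call
    input_digest: str          # fingerprint of the sensor data it saw
    recommendation: str        # what the system proposed
    confidence: float          # the system's own stated certainty
    human_response: str        # "approved", "vetoed", or "no response"
    response_latency_s: float  # how long the human actually took

def make_record(sensor_bytes: bytes, recommendation: str, confidence: float,
                human_response: str, latency: float) -> DecisionRecord:
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="classifier-v2.3.1",  # placeholder version string
        input_digest=hashlib.sha256(sensor_bytes).hexdigest(),
        recommendation=recommendation,
        confidence=confidence,
        human_response=human_response,
        response_latency_s=latency,
    )

if __name__ == "__main__":
    record = make_record(b"raw sensor frame", "flag for review", 0.87,
                         "no response", 2.9)
    print(json.dumps(asdict(record), indent=2))
```

Even this skeleton hints at the hard part: the log can prove what the system recommended and how long the human had to respond, but not whether that window was ever realistic.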
In the end, the question isn’t only which weapons we build, but which habits we build around them. Rules, verification regimes, and even “AI ceasefire” norms could matter as much as code, like kitchen hygiene quietly preventing disasters. Your challenge this week: study one real arms-control treaty and ask how it would need to stretch to cover software.

