8 October 2025

‘Swarms of Killer Robots’: Why AI is Terrifying the American Military

Calder McHugh

Artificial intelligence technology is poised to transform national security. In the United States, experts and policymakers are already experimenting with large language models that can aid in strategic decision-making in conflicts and autonomous weapons systems (or, as they are more commonly called, “killer robots”) that can make real-time decisions about what to target and whether to use lethal force.

But these new technologies also pose enormous risks. The Pentagon is filled with some of the country’s most sensitive information. Putting that information in the hands of AI tools makes it more vulnerable, both to foreign hackers and to malicious inside actors who want to leak information, as AI can comb through and summarize massive amounts of information better than any human. A misaligned AI agent can also quickly lead to decision-making that unnecessarily escalates conflict.

“These are really powerful tools. There are a lot of questions, I think, about the security of the models themselves,” Mieke Eoyang, the deputy assistant secretary of Defense for cyber policy during the Joe Biden administration, told POLITICO Magazine in a wide-ranging interview about these concerns.

In our conversation, Eoyang also pointed to expert fears about AI-induced psychosis, the idea that long conversations with a poorly calibrated large language model could spiral into ill-advised escalation of conflicts. She also discussed a somewhat countervailing concern: many of the guardrails on public LLMs like ChatGPT or Claude, which discourage violence, are in fact poorly suited to a military that must be prepared to take lethal action.

Eoyang still sees a need to think quickly about how to deploy these tools, or, in the parlance of Silicon Valley, to "go fast" without "breaking things," as she wrote in a recent opinion piece. How can the Pentagon innovate and minimize risk at the same time? The first experiments hold some clues.

This interview has been edited for length and clarity.

Why specifically are current AI tools poorly suited for military use?
