SYDNEY J. FREEDBERG JR.
WASHINGTON — Imagine a militarized version of ChatGPT, trained on secret intelligence. Instead of painstakingly piecing together scattered database entries, intercepted transmissions and news reports, an analyst types a quick query in plain English and gets back, in seconds, a concise summary — a prediction of hostile action, for example, or a profile of a terrorist.

But is that output true? With today’s technology, you can’t count on it at all.
That’s the potential and the peril of “generative” AI, which can create entirely new text, code or images rather than just categorizing and highlighting existing ones. Agencies like the CIA and State Department have already expressed interest. But for now, at least, generative AI has a fatal flaw: It makes stuff up.
“I’m excited about the potential,” said Lt. Gen. (ret.) Jack Shanahan, founding director of the Pentagon’s Joint Artificial Intelligence Center (JAIC) from 2018 to 2020. “I play around with Bing AI, I use ChatGPT pretty regularly — [but] there is no intelligence analyst right now that would use these systems in any way other than with a hefty grain of salt.”
“This idea of hallucinations is a major problem,” he told Breaking Defense, using the term of art for AI answers with no foundation in reality. “It is a showstopper for intelligence.”
Shanahan’s successor at JAIC, recently retired Marine Corps Lt. Gen. Michael Groen, agreed. “We can experiment with it, [but] practically it’s still years away,” he said.
Instead, Shanahan and Groen told Breaking Defense that, at this point, the Pentagon should swiftly start experimenting with generative AI — with abundant caution and careful training for would-be users — and with an eye to seriously using the systems when and if the hallucination problem can be fixed.
Who Am I?