Shaun Waterman
NATIONAL HARBOR, Md.—Now that the Air Force is starting to deploy artificial intelligence operationally, service leaders are grappling with AI’s limitations—not just what it can and cannot do, but also the extensive data, technical, and human infrastructure it needs to work.
That was the takeaway from industry experts and senior officers at AFA’s Air, Space & Cyber Conference on Sept. 23.
At a recent experiment staged by the Advanced Battle Management System Cross-Functional Team, for example, vendors and Air Force coders used AI to do the work of “match effectors”—deciding which platforms and weapons systems should be used against a particular target and generating Courses of Action, or COAs, to achieve a military objective.
An AI algorithm was able to generate a COA in 10 seconds, compared to 16 minutes for a human, but “they weren’t necessarily completely viable COAs,” said Maj. Gen. Robert Claude, Space Force representative to the ABMS Cross-Functional Team.
“While [the AI] was much more timely and there were more COAs generated,” some did not take all necessary factors into account, Claude told reporters; one, for instance, proposed using infrared-guided weapons when the weather was cloudy.
“We’re getting faster results and we’re getting more results, but there’s still going to have to be a human in the loop for the foreseeable future to make sure that, yes, it’s a viable COA or no, we need just a little bit more of this to make the COA viable,” he explained.
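To make the human-in-the-loop idea concrete, the sketch below shows a hypothetical post-generation viability check of the kind a reviewer might apply before approving a machine-generated COA. The COA structure, seeker types, and cloud-cover threshold are invented for illustration; they are not drawn from the ABMS experiment itself.

# Illustrative sketch only: a hypothetical viability screen for AI-generated
# courses of action. All fields and thresholds are assumptions, not details
# from the ABMS Cross-Functional Team experiment described above.
from dataclasses import dataclass

@dataclass
class COA:
    name: str
    weapon_seeker: str     # e.g. "infrared", "radar", "gps"
    cloud_cover_pct: int   # forecast cloud cover over the target, 0-100

def flag_for_human_review(coa: COA) -> list[str]:
    """Return reasons this COA needs human review before it is deemed viable."""
    issues = []
    # Infrared seekers degrade when the target is obscured by clouds --
    # the kind of factor the AI-generated COAs reportedly missed.
    if coa.weapon_seeker == "infrared" and coa.cloud_cover_pct > 60:
        issues.append("infrared-guided weapon proposed under heavy cloud cover")
    return issues

# The machine-generated COA arrives in seconds, but a reviewer still checks it.
candidate = COA(name="COA-7", weapon_seeker="infrared", cloud_cover_pct=85)
for reason in flag_for_human_review(candidate):
    print(f"{candidate.name}: needs human review - {reason}")

In practice such checks would be far more involved, which is why Claude said a human will remain in the loop for the foreseeable future rather than relying on automated screening alone.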
The tendency of generative AI to “hallucinate,” or invent answers, is well understood, but other forms of AI have problems too, explained David Ware, a partner at consulting firm McKinsey, who moderated a panel on data and AI for decision superiority.
“Generative AI uniquely has a hallucination problem, but all AI models have problems with accuracy and bias,” he told Air & Space Forces Magazine in a brief interview following the panel.