Sydney J. Freedberg Jr.
A 149th Intelligence Squadron airman conducts training in a computer training lab at Mather Field, California, Dec. 2, 2023. The December 2023 UTA focused on airmen's readiness, embracing the unit's stated mission to "Organize, train, and equip Cyber-ISR leaders to provide intelligence in support of federal and state mission priorities."

WASHINGTON — "The United States is in a race to achieve global dominance in artificial intelligence," begins the introduction to America's AI Action Plan, released this morning by the White House. The prime adversary in that race, other passages make clear, is China.
Overall, the Action Plan unambiguously frames AI as a winner-take-all competition between great powers. "Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits," the introduction continues. "Just like we won the space race, it is imperative that the United States and its allies win this race."

While most of the specific items in the Action Plan relate to domestic R&D, the economy, or international trade, it also includes a long to-do list for the Department of Defense. Of the roughly 19 "recommended policy actions" that explicitly cite DoD (the Action Plan is not an Executive Order and only "recommends" actions rather than directing them), five relate to the Department's own internal workings. The others all give the Pentagon a prominent or even leading role in setting standards and coordinating efforts across the executive branch.
Particularly interesting is a call for DARPA, the Pentagon's dedicated unit for high-risk, high-payoff R&D, to lead an interagency "technology development program" addressing a fundamental problem with cutting-edge AI: Even the people who build it don't really know how it works. That is especially true of generative AI such as large language models, which can summarize masses of data and even generate surprising insights, but which also frequently "hallucinate," outputting falsehoods in unpredictable ways.
"Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output," the Action Plan notes. "This lack of predictability … can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake." (Arguably, it makes GenAI a tricky tool to trust even when the stakes are lower, as a recent Marine Corps field manual warns.)