Joseph F. Dunford
In the U.S. military and intelligence communities, we can’t cut corners. We must equip our highly trained people with the most advanced weapons and the most powerful technology available—because lives, missions, and the defense of freedom depend on it.
So why would we shortchange our troops and analysts by undermining artificial intelligence (AI), the very technology that's quickly becoming one of the most decisive capabilities of the 21st century?
Today, AI is beginning to deliver meaningful operational gains for the Department of Defense (DoD) and the Intelligence Community (IC). AI is being used to simulate battlefield conditions in training, process vast amounts of information in support of decision making and intelligence, enhance cyber defense, enable real-time combat system updates, and field autonomous weapons systems.
If we underinvest in American AI or place heavy-handed restrictions on its development, we don’t just risk falling behind in innovation. We risk falling behind on the front lines.
Unfortunately, that risk is growing. Policymakers of both parties have proposed more than 1,000 AI rules at the state level, many of which, while well-intentioned, could slow the development of powerful, American-made AI models and hand our adversaries a lasting advantage.
Meanwhile, misguided efforts to rewrite longstanding fair-use copyright principles, or to overly restrict the data AI systems can be trained on, may sound like minor regulatory tweaks. In reality, they strike at the heart of how these systems function. And their consequences for national security could be severe.
Why does training data matter so much? Because large language models (LLMs) don't "reason" like humans; they learn statistical patterns from the data they've seen. The broader and more representative the training data, the more accurately they can interpret complex scenarios, spot emerging threats, or generate mission-relevant insights.
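To make that concrete, here is a deliberately tiny sketch in Python: a bigram model that serves as a crude stand-in for an LLM (the training strings are invented for illustration). It can only produce word sequences it has actually seen, so shrinking its training text directly shrinks what it can say.

```python
# Toy illustration (a bigram model, not a real LLM): the model can only
# reproduce patterns present in its training data, so narrower data
# means weaker, more repetitive output.
import random
from collections import defaultdict

def train_bigram(text):
    """Record every word-to-next-word transition seen in the training text."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8):
    """Walk the learned transitions; stop when a word has no learned successor."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # the model never saw this word followed by anything
            break
        out.append(random.choice(options))
    return " ".join(out)

broad  = "the radar tracks the target and the analyst flags the threat"
narrow = "the radar tracks the target"

random.seed(0)
print(generate(train_bigram(broad),  "the"))   # richer continuations
print(generate(train_bigram(narrow), "the"))   # cycles through its few seen patterns
```

Real LLMs learn vastly richer structure than this, but the dependence is the same in kind: restrict what the model can train on, and you restrict what it can recognize and produce.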