12 May 2025

Five Questions: Jim Mitre on Artificial General Intelligence and National Security


What do you see as the most plausible scenario for how AI develops over the next five years?

To be honest, I don't know. What we hear from a lot of the technologists working at the forefront of AI is that we might be on the threshold of some significantly more capable model, which they refer to as artificial general intelligence. This is plausible. It may happen—and because it would be of such high consequence if it does, it's prudent to think through what that would mean.

There are people in the tech world who are worried about how capable these models are becoming and sounding the alarm for the U.S. government to grapple with the implications. But they're a little out of their depth once they start weighing in on what that means for national security. On the other hand, there are a lot of people in the national security community who aren't up to speed on where this technology might be going. We wanted to just level-set everybody, to say, 'Look, from our perspective, AGI presents five hard problems for U.S. national security. Any sensible strategy needs to think through the implications and not over-optimize for any one.'

What would be an example of that?

There have been calls for the U.S. government to launch a Manhattan Project–like effort to achieve artificial general intelligence. And if you're focused on ensuring the U.S. has the lead in this technology, that makes perfect sense. But that might spur the Chinese to race us there, which would aggravate global instability. Some people have also called for a moratorium on developing these technologies until we're certain we can control them. That takes care of one problem—a rogue AI getting out of the box. But then you risk enabling China or some other country to race ahead and maybe even weaponize this technology.
