
15 May 2021

Tough Conditions and Contested Communication Are Forcing the US Military To Reinvent AI

BY PATRICK TUCKER

The era of artificial intelligence presents new opportunities for elite troops like the Army Rangers or Navy SEALs, but those opportunities are circumscribed by some hard limits: for example, the power and connectivity of computers behind enemy lines, or the span of human attention in dangerous, stressful environments.

U.S. Special Operations Command, or SOCOM, is working with the Defense Advanced Research Projects Agency, or DARPA, on new projects and experiments to bring artificial intelligence to operators working in the sorts of environments where the computing power and data to run commercial AI applications aren’t present. Lisa Sanders, SOCOM’s director of science and technology for special operations forces, acquisition, technology, and logistics, told Defense One that in many cases that means re-inventing artificial intelligence from the ground up and developing completely new insights into how humans use it.

Much of the artificial intelligence that regular consumers use every day works by connecting the device to large cloud computing capabilities elsewhere. Perhaps the most prominent are digital assistants such as Siri and Alexa that derive their power from natural language processing, a fast-growing subset of AI that applies machine learning to spoken language. But there are hundreds of other AI tools that consumers use without even realizing it. When the map on your phone suggests re-routing to avoid a traffic jam, that’s AI at work. Most of the recommendation engines you come across on streaming video or music services can be considered artificial intelligence with narrow application. But most developers in this burgeoning field rely on being able to reach back through a network to huge databases and powerful cloud computing centers.

“The commercial world is used to being able to walk into a restaurant anywhere in the world, take a picture of the menu and hit ‘translate.’ But that presumes that you have access to a common set of readily available information about that language and ready access back to the cloud, because that’s not really processed in your handheld phone,” said Sanders.

That kind of connectivity is often lacking where U.S. forces operate, but AI could still make a big difference in achieving missions. So U.S. SOCOM is developing an entirely new understanding of not just how to expand artificial intelligence, but how to shrink it, determining which of the problems operators face could be solved with a small amount of artificial intelligence. That’s a fundamentally different challenge than the one the commercial world faces as it develops AI for consumer uses.
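
The article doesn’t describe SOCOM’s methods, but one standard way to “shrink” a model for edge hardware is post-training quantization, which trades a little accuracy for a much smaller footprint. The sketch below uses PyTorch’s dynamic quantization on a placeholder network; it illustrates the general technique, not any SOCOM system.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a cloud-scale model; the layers
# and sizes here are arbitrary, chosen only for illustration.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Post-training dynamic quantization: store the Linear weights as 8-bit
# integers instead of 32-bit floats, roughly quartering their size.
small = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Same interface, smaller footprint -- more suitable for constrained devices.
x = torch.randn(1, 512)
print(small(x).shape)  # torch.Size([1, 10])
```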

Through a broad effort launched in 2019 to create what Sanders calls hyper-enabled operators, SOCOM has been looking to define what areas to focus on. “What are use cases where I can create some things that AI at the edge can process? Things like being able to tell a direction, a distance,” she said.

One of the challenges that arose early in SOCOM’s conversation with operators was translation, which hindered attempts to train Syrian opposition forces against ISIS. The U.S. is still a decade away from giving soldiers a handheld translator that works on obscure dialects and doesn’t need cloud connectivity to deliver fluency, Sanders said. So SOCOM is breaking the problem into smaller pieces, to create software tools that are useful now. That means anticipating what sort of communication is absolutely necessary.

“We’re doing a six-month effort where we are doing a representative language (not a high-density language)... to figure out how big that thesaurus needs to be. How much flexibility does it need to have to be operationally relevant? That’s an example where processing at the edge is going to limit us. Those analytics need power to run,” she said.
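
To make that concrete, here is a minimal sketch of what a bounded, offline phrase tool of the sort Sanders describes might look like. The phrase set, its contents, and the lookup approach are hypothetical illustrations, not SOCOM’s design; the point is that a fixed, mission-relevant vocabulary can run entirely on a handheld device.

```python
# Hypothetical sketch: an offline, fixed-vocabulary phrase translator.
# A real mission "thesaurus" would be built from operationally necessary
# phrases; placeholders stand in for the target-language entries.

MISSION_PHRASES = {
    "stop": "<target-language phrase>",
    "hands up": "<target-language phrase>",
    "we are here to help": "<target-language phrase>",
}

def translate(phrase: str) -> str | None:
    """Look up a phrase in the on-device table; no cloud round trip.

    Returns None when the phrase falls outside the limited vocabulary,
    which is exactly the tradeoff the six-month effort is sizing.
    """
    return MISSION_PHRASES.get(phrase.lower().strip())

for spoken in ("Stop", "open the door"):
    result = translate(spoken)
    print(spoken, "->", result or "[outside on-device vocabulary]")
```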

DARPA is tackling a similar set of challenges through several programs. The key, DARPA program manager John Waterston said, is to “give the computer the problems that are just beyond the scale of the human operator.”

One of his programs, called Phorcys, looks to integrate military tools like drones and sensors on commercial vessels. “There’s this multi-objective optimization that has to be done. To kind of say, what is the shipping traffic in the area? The surface picture? What is the mission you want to get done? The weather? The availability of communications and all these questions.” A big part of the program is figuring out what needs to be done by the computer and what needs to be done by a human, as well as what doesn’t actually need to happen at all. That’s not the sort of design choice programmers face in Silicon Valley, where off-platform cloud capabilities are always available.
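
Waterston doesn’t detail Phorcys’s optimization, but a toy weighted-scoring sketch shows the shape of the problem: each candidate plan is scored against the competing factors he lists. The factors, weights, and numbers below are entirely hypothetical.

```python
from dataclasses import dataclass

# Toy multi-objective scoring; all factors and weights are hypothetical.
@dataclass
class Plan:
    name: str
    mission_value: float   # how well the plan serves the mission (0..1)
    traffic_risk: float    # expected interference with shipping (0..1)
    weather_risk: float    # exposure to forecast weather (0..1)
    comms_quality: float   # expected availability of communications (0..1)

def score(p: Plan) -> float:
    # Reward mission value and communications; penalize the risks.
    return p.mission_value + 0.5 * p.comms_quality \
           - 0.7 * p.traffic_risk - 0.4 * p.weather_risk

plans = [
    Plan("route A", mission_value=0.9, traffic_risk=0.6,
         weather_risk=0.2, comms_quality=0.8),
    Plan("route B", mission_value=0.7, traffic_risk=0.1,
         weather_risk=0.1, comms_quality=0.9),
]
print("chosen:", max(plans, key=score).name)  # route B
```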

Another program, called Ocean of Things, is intended to give operators a much better intelligence picture of what’s happening on the seas through a wide distribution of floating sensors. But while the sensor network may be large, the amount of data the sensors send has to be prioritized, said Waterston, and small enough to fit into a tweet. “We’re using Iridium Short Burst Data, which is 240 bytes [similar to a tweet]. You have to have an edge device to extract the data and encapsulate it in 240 bytes.”
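
For a sense of how tight that budget is, here is a sketch of packing one detection report into a fixed binary layout that fits well under 240 bytes. The field layout is a hypothetical illustration, not the actual Ocean of Things message format.

```python
import struct

MAX_PAYLOAD = 240  # bytes per Iridium Short Burst Data message, per the quote

def encode_report(sensor_id: int, lat: float, lon: float,
                  object_class: int, confidence: float) -> bytes:
    # Hypothetical layout: uint16 sensor id, two float64 coordinates,
    # uint8 class code, float32 confidence (little-endian).
    payload = struct.pack("<HddBf", sensor_id, lat, lon,
                          object_class, confidence)
    assert len(payload) <= MAX_PAYLOAD, "report exceeds the message budget"
    return payload

msg = encode_report(42, 36.85, -75.98, 2, 0.91)
print(len(msg), "bytes")  # 23 bytes, well inside the 240-byte budget
```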

One way they’ve been able to do that is by training an AI with pictures, so it can recognize and then ignore objects rather than bug the operator with every new entity that shows up on the seascape. “We’ve trained a neural net ashore in the cloud with a bunch of relevant naval pictures. We take that neural net and only report the output if it’s a person, ship, building, not a bird. Then we can extract that data and give it back to the operator ashore. If you do that processing right, you don’t overwhelm the tactical data links,” he said.
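
In code, that edge-side filter reduces to a few lines: run the model, check the class, and only then spend link budget. The `classify` stub below stands in for the shore-trained neural net, which the article does not describe; the relevant classes come from Waterston’s quote.

```python
RELEVANT_CLASSES = {"person", "ship", "building"}  # per the quote; birds are dropped

def classify(image):
    """Stub for the shore-trained neural net running on the edge device.

    A real system would run on-device inference here and return a
    (label, confidence) pair; this fixed output is for illustration.
    """
    return ("bird", 0.87)

def report_if_relevant(image, send):
    label, confidence = classify(image)
    if label in RELEVANT_CLASSES:
        # Only operationally relevant detections consume the tactical data link.
        send({"class": label, "confidence": confidence})

report_if_relevant(image=None, send=print)  # a bird: nothing is sent
```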

Tim Chung, a DARPA program manager working to create highly autonomous subterranean robots through the SubT Challenge, described the difficulty of finding “actionable situational intelligence” and of making sure both the human and the robot know what that means, since it’s hard to predict what a robot might encounter in, say, a network of underground tunnels or a collapsed building.

“It’s not just good enough to know [that] there’s a left turn, a drop, a corridor. What you really want are refined coordinates to where that survivor is located,” Chung said. “‘Actionable’ is something that must be defined both by the robot and by the human supervisor in the loop, and so these robots must balance how much perception they carry with how reliant they are on communications.” Chung spoke as part of a recorded Defense One session on the future of battlefield AI that will air on Thursday.

But it’s not just bandwidth that’s constrained in these environments. Human attention is also a scarce commodity. That’s why SOCOM is working with operators to better understand when they have attention to spare for incoming machine communication, Sanders said.

“If I am training a partner nation, the amount of information I can hold without becoming overwhelmed might be different than deployed in a covert location for three days and I know that there are bad guys right around the corner that are going to shoot me. That tradeoff of cognitive human-machine burden is very fungible. It changes depending on the situation and the person,” she said. “We are gathering real-life information from our warfighters and developing great advocacy with them…It’s an ongoing experimentation.”
