28 September 2020

Inside the Army’s Fearless, Messy, Networked Warfare Experiment

BY PATRICK TUCKER

YUMA PROVING GROUND, Arizona—In the 105-degree heat of the southern Arizona desert, the Army has linked together experimental drones, super guns, ground robots and satellites in a massive test of its future warfare plans. 

On Wednesday, the service mounted the first demonstration of Project Convergence, bringing in some 34 fresh-out-of-the-lab technologies. The goal: to show that these weapons and tools, linked and led by artificial intelligence, can allow humans to find a target, designate it as such, and strike it from the air, from kilometers away, using any available weapon, and in a fraction of the time it takes to execute that kill today. It was an ambitious test that revealed how far Army leaders have come toward their goal of networked warfare across the domains of air, land, space and cyberspace. It also provided a vivid picture of how much further the Army has to go.

The scenarios involved different phases of a land invasion. In the first phase, dubbed “Penetrate,” satellites in low Earth orbit detected enemy anti-air missile ground launchers. That info was passed to a ground processing station called the Tactical Intelligence Targeting Access Node, or TITAN, more than a thousand miles away at Joint Base Lewis-McChord in Washington state. The TITAN operator sent a target-data message to Yuma, where a fire command was processed and sent to the Extended Range Cannon Artillery, or ERCA, the Army’s new 70-km super gun. Next, a scout helicopter serving as a surrogate for the Future Attack Reconnaissance Aircraft, or FARA, located the command-and-control node of the enemy air defenses, a wheeled amphibious armored personnel carrier, using an onboard object-detection AI dubbed Dead Center. An Air Launched Effects drone, or ALE, launched from the helicopter, provided a floating mesh network beyond 50 km. An autonomously flying Grey Eagle drone swooped in at 300 feet, far below its normal operating floor of about 10,000 feet, and hit the target with a Hellfire missile.
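
That first hop, from orbital detection to a fire command, is essentially a message-passing pipeline. The Python sketch below is purely illustrative: every class, field, and function name is an assumption made for the example, not the Army’s actual TITAN or ERCA software. It only shows the shape of the hand-off from a satellite detection to a tasked shooter.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only. None of these names or fields reflect the Army's
# real TITAN or ERCA message formats; they exist to show the hand-off.

@dataclass
class Detection:
    sensor: str          # e.g., a low-Earth-orbit imaging satellite
    target_type: str     # what the sensor believes it is seeing
    lat: float
    lon: float
    observed_at: datetime

@dataclass
class FireCommand:
    shooter: str         # which weapon gets tasked
    lat: float
    lon: float
    issued_at: datetime

def process_at_titan(detection: Detection) -> Optional[FireCommand]:
    """Stand-in for the ground-processing hop: screen the detection and,
    if it is a valid target, turn it into a fire command for a shooter."""
    if detection.target_type != "anti-air launcher":
        return None  # not a target this particular chain cares about
    return FireCommand(
        shooter="ERCA",
        lat=detection.lat,
        lon=detection.lon,
        issued_at=datetime.now(timezone.utc),
    )

if __name__ == "__main__":
    det = Detection(
        sensor="LEO satellite",
        target_type="anti-air launcher",
        lat=32.84, lon=-114.39,  # notional coordinates near Yuma
        observed_at=datetime.now(timezone.utc),
    )
    cmd = process_at_titan(det)
    if cmd:
        delay = (cmd.issued_at - det.observed_at).total_seconds()
        print(f"Tasking {cmd.shooter} against target at "
              f"({cmd.lat:.2f}, {cmd.lon:.2f}); hand-off took {delay:.4f} s")
```

In the demonstration, that hand-off spanned more than a thousand miles, from the TITAN station in Washington state to the gun at Yuma.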

With some of the key targets out of the way, the second phase, dubbed “Disintegrate,” sought to dissolve the remainder of the adversary’s anti-aircraft capabilities. Helicopters serving as surrogates for Future Vertical Lift aircraft and an ALE looked for other targets, passing their sensor data back through the mesh network. An artificial intelligence called FIRESTORM, short for FIRES Synchronization to Optimize Responses in MDO, took in the data, mapped the battlefield, and generated recommendations for which weapon should hit which target. The ERCA gun fired a round, hitting a multiple launch rocket system some 56 kilometers away. One Grey Eagle transmitted targeting data based on visual information, not GPS or laser designation, to another one, which attempted to hit the target with a GBU-69 glide bomb. But it’s unclear whether the munition was actually released, as the communication link was briefly lost. (The target was not destroyed.)
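
The article does not describe how FIRESTORM actually chooses its pairings, so what follows is only a hedged sketch of the general problem it is solving: given a set of detected targets and a set of available shooters, recommend which weapon should engage which target. The greedy nearest-in-range rule and every name in the snippet are assumptions made for illustration, not the system’s method.

```python
from dataclasses import dataclass
from math import hypot
from typing import List, Optional, Tuple

# Purely illustrative: FIRESTORM's internals are not public. This only shows
# the general shape of a weapon-target pairing recommendation.

@dataclass
class Target:
    name: str
    x_km: float
    y_km: float

@dataclass
class Weapon:
    name: str
    x_km: float
    y_km: float
    range_km: float
    available: bool = True

def recommend(targets: List[Target], weapons: List[Weapon]) -> List[Tuple[str, str]]:
    """Greedy pairing: for each target, pick the nearest available weapon
    that can range it. In the exercise, humans still designated targets and
    approved the strikes."""
    pairings: List[Tuple[str, str]] = []
    for tgt in targets:
        best: Optional[Weapon] = None
        best_dist = float("inf")
        for wpn in weapons:
            if not wpn.available:
                continue
            dist = hypot(tgt.x_km - wpn.x_km, tgt.y_km - wpn.y_km)
            if dist <= wpn.range_km and dist < best_dist:
                best, best_dist = wpn, dist
        if best is not None:
            best.available = False  # one shooter per target in this toy model
            pairings.append((tgt.name, best.name))
    return pairings

if __name__ == "__main__":
    targets = [Target("rocket launcher", 56.0, 0.0), Target("APC", 30.0, 12.0)]
    weapons = [
        Weapon("ERCA", 0.0, 0.0, range_km=70.0),
        Weapon("Grey Eagle", 25.0, 10.0, range_km=30.0),
    ]
    for target_name, weapon_name in recommend(targets, weapons):
        print(f"Recommend {weapon_name} against {target_name}")
```

As the Army describes it, the AI generates the recommendation; humans still designate the target and make the call to strike.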

In the third and final phase, “Exploit,” manned and unmanned ground vehicles began to move into the area. Operators used Aided Target Recognition Software, or AiDTR, to find new targets like armored transport trucks. FIRESTORM tasked Next-Generation Combat Vehicles, or NGCVs (also played by surrogates in the exercise), to hit the targets. Another small drone called a Tarot, also equipped with AiDTR, launched and detected enemy infantry fighting vehicles. FIRESTORM issued orders to suppress the enemy with mortars until they could be hit directly.

Operators in the NGCVs ordered unmanned ground vehicles to launch even smaller helicopter recon drones. As new enemies showed up, FIRESTORM sent recommendations about which weapon to employ.

The action was at times difficult to follow. Multiple things seemed to happen at once, in part by design. Some of the technologies displayed, such as the Grey Eagle’s low-altitude autonomous flying and the coordination among some of the drones, would have made for impressive demonstrations by themselves.

“We started six weeks ago with a lot more technology than we demonstrated today. It was a very deliberate process [of determining] things that would be ready [and] things that would not be ready,” said Gen. Mike Murray, the head of Army Futures Command.

In some cases, the Army overshot its goal, literally. In several instances, video of the target, taken after effects were launched, showed it still standing.

“Aided target recognition, it’s brittle,” said Brig. Gen. Ross Coffman, director of the Next Generation Combat Vehicle Cross Functional Team and one of the main organizers of Project Convergence. “We need more work, more sets, to continue to train and solidify that and do it on the move with rough terrain and stability systems. The air-to-air coordination and air-to-ground, that worked extremely well. The mapping worked very well. I’m very pleased. But we all have our eyes wide open,” Coffman said. “This is a first step. We now, no kidding, can look ourselves in the eye and say we know exactly where we’re starting.”

Murray acknowledged that there were “things that didn’t work perfectly. We missed a couple of targets.” But, he said, “That really doesn’t matter. All the things we wanted to work in terms of the ability to see, decide and act first [are] going to be the key thing to winning on the future battlefield. The key elements of that, to my mind, worked perfectly today.”

Improving on Wednesday’s performance will require more people writing and analyzing code and data in real time, closer to the action, and not just for the experiment. One of the biggest changes the Army envisions for the way it fights is bringing a new type of soldier, trained in software development, data science and AI, to work and rework algorithms on, or very near, the front lines.

Lt. Gen. Charles Flynn, the Army’s deputy chief of staff, said the service needs “code writers at the edge” of battle because “the software, the algorithms change…The enemy is going to change things too. Their systems are going to change, so we have to have code-writers forward to be responsive to commanders to say, ‘Hey, that algorithm needs to change because it’s not moving the data fast enough.’ And if we own the code, like we own software, we can make those adjustments forward.” 

Today’s process for the processing, exploitation, and dissemination of data, by contrast, sounds something like, “‘Hey, call back to this guy in Atlanta and have it changed, I need it in 12 hours,’” said Flynn.

That’s not suitable for the accelerating pace of warfare, he said. “They can’t wait three hours to get that target into the targeting cycle to get it approved and go through some laborious process. It’s going to have to change instantaneously so we can stay ahead of the decisions adversaries are making.”
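
A minimal sketch of what Flynn is describing might look like the snippet below: behavior that matters to the targeting cycle, here a hypothetical detection-confidence cutoff, lives in a small local config that a forward-deployed coder can edit and have picked up immediately, instead of routing the change through a distant development team. The file name, threshold mechanism, and every other detail are assumptions for illustration; nothing here is actual Army software.

```python
import json
from dataclasses import dataclass
from pathlib import Path
from typing import List

# Illustrative only; not Army software. A tunable detection-confidence cutoff
# lives in a local, human-editable file so a coder at the edge can change it
# without routing the request back to a distant development team.

CONFIG_PATH = Path("edge_config.json")  # hypothetical local config file

@dataclass
class Detection:
    label: str
    confidence: float

def load_threshold(default: float = 0.85) -> float:
    """Read the current confidence cutoff, falling back to a default if the
    file is missing. The next pass of the loop picks up any edit immediately."""
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text()).get("min_confidence", default)
    return default

def filter_detections(detections: List[Detection]) -> List[Detection]:
    threshold = load_threshold()
    return [d for d in detections if d.confidence >= threshold]

if __name__ == "__main__":
    raw = [Detection("infantry fighting vehicle", 0.91), Detection("truck", 0.62)]
    for det in filter_detections(raw):
        print(f"Pass to targeting cycle: {det.label} ({det.confidence:.2f})")
```

The point, in Flynn’s telling, is ownership: if the Army owns the code, an adjustment like this can be made forward, in minutes, rather than in hours.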

The Army is already setting out to train these soldier coders. Next January, some 25 personnel will begin practicing software development and data science closer to combat. Murray said there is high interest in joining the nascent training effort. “Cohort two starts in June with 30 slots. We’ve already had 15,000 people, uniformed soldiers, from major to specialist, express an interest in this,” he said.

Army Secretary Ryan McCarthy compared the push to get software writers closer to the front lines to the move by many Wall Street firms in the early 2000s to locate their operations as close as physically possible to the stock exchange, in order to process trades milliseconds faster than competitors.

“They were literally fighting over office space in southern Manhattan to get closer to the exchange…We’re no different,” McCarthy said. “We have to get closer to the edge because the speed of finding a target and sending it to something that can process it, the speed of calling a fire mission, a medevac mission, that’s what we’re after.”

The Army experiment is just a portion of a broader, military-wide push to integrate air, space, cyberspace, land and sea warfare. The Air Force has already run three of its own “on ramp” demonstrations of its growing Advanced Battle Management System, the service’s answer to the joint, all-domain warfare call. In contrast to the Army’s inaugural experiment this week, the Air Force’s efforts are mostly shielded from public view: reporters are informed after the fact of the successes each attempt yields, and, for the most part, don’t learn of the failures.

The Army, conversely, took a risk in inviting reporters out to the desert to watch robots miss targets. But there’s a big difference between missing and failing. And there’s a difference between failing in an experiment and failing in a real-life fight. The former is a natural part of the process of discovery; the latter is failure in the truest sense of the word. Failure is inevitable, but, done correctly, also valuable. The idea, which has become accepted wisdom among tech innovators, is often expressed in that old Silicon Valley saw: fail cheap.

That advice is a lot easier to live by for a Stanford third-year with a couple million dollars in Series A funding than for the United States military, an organization that is much more accustomed to bragging on its successes and which sometimes wears its failures like a ghost does old chains.

Army leaders are just now learning to embrace the experimental mindset that McCarthy and Murray accept as necessary for innovation in information technology. The challenge is convincing the people beneath them to strive toward a different definition of success. “I’ve had some touchpoints with soldiers and they’re not used to failing,” said Murray. “It drives them crazy. You put these immature technologies in their hands and it doesn’t work exactly how you expect it to work, that’s part of the culture change.”

That patience amid failure will be especially necessary for integrating AI into real-world operations. 

The history of AI is full of great thinkers who fall roughly into two camps, the neats and the scruffies. The neats, students of theorem proving, long held that AI would come to resemble human cognitive processes sooner rather than later by way of elegant statistical models and formulas. The scruffies, conversely, were largely roboticists, working in a subfield of AI where progress happens only through experimentation in 3-D space and where best intentions collide with the realities of physics. For this reason, scruffy researchers usually contend that advances in AI are likely to be gradual and unpredictable, somewhat like evolution. Rodney Brooks, inventor of the Roomba robot vacuum and perhaps the most important roboticist living or dead, described the problem of innovation in robotics as a nearly Sisyphean slog, replete with disappointment, setbacks, dashed expectations, and misses. Brooks is fond of telling the story of how researchers in 1966, at the dawn of AI, thought that teaching robots to “see” patterns in physical space with camera data would be solved in a summer. As today’s poorly performing self-driving cars show, machine vision remains an unsolved problem.

The Army may never be able to fail cheap. But it can fail cheaper, and it can derive greater reward for the effort. The Convergence demonstration bore out the premise of the project: data can be sensed from a wide variety of sources, fused, passed through machine intelligence, and used to designate and then hit a target in a literal fraction of the time it takes the Army to execute those steps today, from 15 minutes down to less than one. The ability to do that will determine victory or defeat, life or death, in future conflicts.
