23 December 2023

Attack Robots, Terminators, Autonomous Weapons - Future of AI

KRIS OSBORN, WARRIOR MAVEN

In an interesting recent interview on CNN, former President Barack Obama was asked about the future of AI and the various philosophical, technological and ethical variables now dominating discussion as AI technology explodes and many consider its implications. While emphasizing that AI continues to bring paradigm-changing innovations to the world, he used succinct language to sum up what is perhaps the most significant complication or challenge when it comes to the application of AI: "Machines can't feel joy," he told CNN.

Obama said this in the context of describing how the advent of new applications of AI continues to change things rapidly, bringing seemingly limitless new promise while also introducing challenges and complexities. He was quick to praise the merits of AI in his discussion with CNN, but he also mentioned its challenges and limitations, given that uniquely human attributes such as emotion, devotion and other more subjective phenomena can't be approximated by machines. True enough, and while defense industry innovators and critical Pentagon institutions such as the Air Force Research Laboratory are making progress exploring ways AI can estimate, calculate or analyze more subjective phenomena, there are clearly many variables unique to human cognition, intuition, psychological nuance, ethics, consciousness and emotion which mathematically generated algorithms simply could not replicate or even begin to truly approximate accurately. This is why leading weapons developers are quick to explain that any optimal path forward involves a blending or combination captured by what could be called the Pentagon's favorite term: "manned-unmanned teaming."

This, however, does not mean the merits and possibilities of AI should be underestimated; senior researchers with the Army Research Laboratory have explained that "we are at the tip of the iceberg" in terms of what AI can truly accomplish. This is why the Pentagon is measuring the rapid success and promise of AI in the context of non-lethal defensive force. The combination of human decision-making faculties with the speed and analytical power of AI-enabled, high-speed computing is already creating paradigm-changing innovations. Imagine how many lives a defensive AI-enabled weapons system could save. AI is also already massively shortening the sensor-to-shooter timeline in key modern warfare experiments such as the Army's Project Convergence.

These complexities are the main reason there continue to be so many technological efforts to improve the "reliability" of AI-generated analysis, so that, through machine learning and real-time analytics, machines can determine context and accurately process new material that might not be part of their databases. This, as described to Warrior by former Air Force Research Laboratory Commander Maj. Gen. Heather Pringle, is the cutting-edge new frontier of AI.
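To make the idea concrete, here is a minimal, purely illustrative sketch of one way such a "reliability" check might work in software: flagging inputs that fall outside a system's training data so a human can review them rather than letting the machine classify them silently. The class name, threshold and distance heuristic are hypothetical and do not describe any actual Pentagon system.

```python
# Illustrative sketch only: one simple way an AI pipeline might flag inputs
# that look unlike anything in its training data, rather than silently
# classifying them. All names and thresholds here are hypothetical.
import numpy as np

class NoveltyAwareClassifier:
    def __init__(self, model, train_features: np.ndarray, max_distance: float):
        self.model = model                      # any fitted classifier with .predict()
        self.centroid = train_features.mean(axis=0)
        self.max_distance = max_distance        # tuned on held-out training data

    def classify(self, features: np.ndarray) -> dict:
        # Distance from the training-data centroid is a crude novelty signal.
        distance = float(np.linalg.norm(features - self.centroid))
        if distance > self.max_distance:
            # Input falls outside the database: defer to a human analyst.
            return {"label": None, "needs_human_review": True, "novelty": distance}
        return {"label": self.model.predict(features.reshape(1, -1))[0],
                "needs_human_review": False, "novelty": distance}
```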

“AI today is very database intensive. But what can and should it be in the future? How can we graduate it from just being a database to something that can leverage concepts and relationships, or emotions and predictive analyses? And so there's so much more that we as humans can do that AI cannot. How do we get to that?” Maj. Gen. Heather Pringle, former Commanding General of the Air Force Research Lab, told Warrior in an interview earlier this year.

"Out of the Loop AI" Can Save Lives

Should something like a swarm of mini-drone explosives close in for an attack, or a salvo of incoming hypersonic missiles approach at five times the speed of sound or more, human decision-makers simply might not be able to respond quickly enough. In fact, military commanders may not get any chance to counterattack or determine the best course of defensive action.

Not only would there not be time for a human decision-maker to weigh the threat variables, but weapons operators themselves may simply be too overwhelmed to detect, track, engage or fire upon high-speed, simultaneous attacks even should they receive orders. There simply is not time.

Human in the Loop

The advent and rapid maturation of Artificial Intelligence for military technology, weapons and high-speed computing have many asking a pressing and pertinent question … just how soon until there is a “TERMINATOR”-type armed robot able to autonomously find, track and destroy targets without needing any human intervention?

The answer is that, in certain respects, that technology is already here … however, there are a host of complex conceptual, technological, philosophical and policy variables to consider. Tele-operated armed robots, meaning weapons systems remotely controlled by a human being without the machine making any decisions or determinations about lethal force, have existed and even been sent to war for many years now. This is fully aligned with current and long-standing Pentagon doctrine, which says there must always be a human in the loop when it comes to decisions regarding the use of lethal force. What about non-lethal force? This cutting-edge question is now very much on the Pentagon’s radar, given the rapid maturation of AI-empowered decision-making abilities, analytics and data organization.

Essentially, should an AI-enabled system which aggregates and analyzes otherwise disparate pools of incoming sensor data be trusted to accurately discern the difference between lethal and non-lethal force? Could AI-enabled interceptors be used for drone defense or as a method of instantly taking out incoming enemy rockets, drones, artillery or mortars?
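In rough terms, the doctrinal distinction could be expressed as a simple policy gate in software. The sketch below is illustrative only, with hypothetical names; it encodes the current rule that lethal force requires a human in the decision cycle, while leaving the non-lethal, defensive case as the open question the Pentagon is now weighing.

```python
# Illustrative sketch only: how the "human in the loop" rule might be expressed
# as a policy gate in an engagement pipeline. Names and classes are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class Effect(Enum):
    NON_LETHAL = auto()   # e.g., jamming or other non-kinetic countermeasures
    LETHAL = auto()       # any kinetic effect that could cause death if it misfires

@dataclass
class EngagementRequest:
    track_id: str
    recommended_countermeasure: str
    effect: Effect

def authorize(request: EngagementRequest, human_approval: bool | None = None) -> bool:
    """Return True if the engagement may proceed."""
    if request.effect is Effect.LETHAL:
        # Current doctrine: a human must be in the decision cycle for lethal force.
        return human_approval is True
    # Non-lethal, defensive effects are the open policy question; this sketch
    # simply assumes they are permitted to proceed autonomously.
    return True
```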

“Right now we don’t have the authority to have a human out of the loop,” Col. Marc E. Pelini, the division chief for capabilities and requirements within the Joint Counter-Unmanned Aircraft Systems Office, said during a 2021 teleconference, according to a Pentagon report published last year. “Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement.”

However, is the combination of high-speed, AI-enabled computing and sensor-to-shooter connectivity, coupled with the speed and scope of emerging threats, beginning to change this equation? Perhaps there may indeed be some tactical circumstances wherein it is both ethical and extremely advantageous to deploy autonomous systems able to track and intercept approaching threats in seconds, if not milliseconds.

Speaking in the Pentagon report, Pelini explained that there is now an emerging area of discussion pertaining to the extent to which AI might enable “in-the-loop” or “out-of-the-loop” human decision making, particularly in light of threats such as drone swarms.

The level of precision and analytical fidelity AI now makes possible is, at least to some extent, inspiring the Pentagon to consider the question. Advanced algorithms, provided they are loaded with the requisite data and enabled by machine learning and the analytics necessary to make discernments, are now able to process, interpret and successfully analyze massive and varied amounts of data.

Complex algorithms can simultaneously analyze a host of otherwise disconnected variables, such as the shape, speed and contours of an enemy object as well as its thermal and acoustic signatures. In addition, algorithms can now also assess these interwoven variables in relation to the surrounding environment, geographical conditions, weather, terrain and data regarding historical instances where certain threats were engaged with specific shooters, interceptors or countermeasures. AI-enabled machines are increasingly able to analyze these factors collectively and determine which response might be optimal or best suited for a particular threat scenario.
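As a rough illustration of what that kind of collective analysis might look like in code, the sketch below fuses a few track attributes and ranks candidate countermeasures by engagement range and historical success. The feature names, weights and thresholds are invented for illustration and do not describe any fielded system.

```python
# Illustrative sketch only: scoring candidate countermeasures against a fused
# threat picture. Feature names, weights and options are hypothetical.
from dataclasses import dataclass

@dataclass
class ThreatTrack:
    speed_mps: float            # kinematics from radar
    thermal_signature: float    # from infrared sensors
    acoustic_signature: float
    range_km: float

@dataclass
class Countermeasure:
    name: str
    max_engagement_range_km: float
    effective_vs_fast_targets: bool
    historical_success_rate: float   # drawn from past engagement data

def rank_countermeasures(track: ThreatTrack,
                         options: list[Countermeasure]) -> list[tuple[str, float]]:
    scored = []
    for cm in options:
        if track.range_km > cm.max_engagement_range_km:
            continue  # out of range, not a candidate
        score = cm.historical_success_rate
        if track.speed_mps > 1700 and not cm.effective_vs_fast_targets:
            score *= 0.2  # roughly Mach 5+: penalize slower-reacting options
        scored.append((cm.name, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Under current doctrine, a ranked list like this would still go to a human operator before any lethal interceptor is fired.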

Can AI-enabled machines make these determinations in milliseconds in a way that could massively save lives in war? That is the possibility now being evaluated, in a conceptual and technological sense, by a group of thinkers, weapons developers and futurists exploring what’s called an “out of the loop” possibility for weapons and autonomy. This is quite interesting, as it poses the question of whether an AI-enabled or autonomous weapons system should be able to fire, shoot or employ force in a “non-lethal” circumstance.

Of course there is not a current effort to “change” the Pentagon’s doctrine but rather an exploration, because the time window to defend forces and deploy countermeasures can become exponentially shorter in a way that could save lives should US forces come under attack. Technologically speaking, the ability is, at least to some degree, already here, yet that does not resolve certain ethical, tactical and doctrinal questions which accompany this kind of contingency.

One of the country’s leading experts on AI and cybersecurity, formerly a senior Pentagon specialist, says these are complex, nuanced and extremely difficult questions.

“I think this is as much a philosophical question as it is a technological one. From a technology perspective, we absolutely can get there. From a philosophical point of view, and how much do we trust the underlying machines, I think that can still be an open discussion. So a defensive system that is intercepting drones, if it has a surface-to-air missile component, that's still lethal force if it misfires or misidentifies. I think we have to be very cautious in how we deem defensive systems non-lethal systems if there is the possibility that it misfires, as a defensive system could still lead to death. When it comes to defensive applications, there's a strong case to be made there … but I think we really need to look at what action is being applied to those defensive systems before we go too far out of the loop,” Ross Rustici, former East Asia Cyber Security Lead at the Department of Defense, told Warrior in an interview.

Rustici further elaborated that in the case of “jamming” or some other kind of non-kinetic countermeasure which would not injure or harm people if it misfired or malfunctioned, it is indeed much more efficient to use AI and computer automation. However, in the case of lethal force, there are still many reliability questions when it comes to fully “trusting” AI-enabled machines to make determinations.

"The things that I want to see going forward is having some more built in error handling so that when a sensor is degraded, when there are questions of the reliability of information, you have that visibility as a human to make the decision. Right now there is a risk of having data which is corrupted, undermined or just incomplete being fed to a person who is used to overly relying on those systems. Errors can still be introduced into the system that way. I think that it's very correct to try to keep that human-machine interface separated and have the human be a little skeptical of the technology to make sure mistakes don't happen to the best of our ability," Rustici explained.
