A Revolution in Military Affairs versus “Evolution”: When Machines are Smart Enough!

by Tom Keeley
Journal Article | January 8, 2016
There is a general perception that the Operational Environment (OE) will "evolve" as technology evolves at an incremental rate: smaller, cheaper, faster. The "evolution" term is used throughout the Mad Scientist call to action. But there have been revolutions, significant paradigm shifts, that have transformed the military in the past:
The transition from clubs to bows and arrows allowed engagement at a distance from the target.
The transition from bows to guns brought more accuracy and more power.
The emergence of radar from visual sighting allowed early detection.
Mobile communication extended the reach of command.
Airplanes allowed engagement from above.
Ballistic missiles delivered more destructive power, allowed engagement from even further away, and kept the missile user out of harm's way.
Remotely piloted drones allow the delivery of ordnance into rapidly changing target areas while still keeping warfighters out of harm's way.
The Internet of Things (IoT) suggests the potential for more connected devices to share information more rapidly.
So one could hypothesize a war room where information comes from everywhere, allowing battlespace commanders to allocate resources to achieve their goals while sitting in the comfort of their bunkers, potentially far from the battle zone. A futurist of the past might have suggested pursuing better and better bows and arrows.

This paper suggests that this is an obsolete picture of the future. The previous scenario includes humans-in-the-loop. There is still a perception that humans need to be making all the critical decisions: When should force be applied? How much force should be applied? How much collateral damage is acceptable? … There is a perception that only humans can effectively handle this level of complexity. The fog of war is perceived to be too complex for any machine to handle. A human has approximately 100 billion neurons (brain cells) and 1,000 trillion synaptic connections. We are far from packaging that level of computing into a chip. Right?

There are two ways to approach the future:
Look at where you are today and consider how to invest your money to create a solution for tomorrow (evolution).
Pick a point in the future, identify the hurdles that must be overcome to get there, and jump over those hurdles, past the evolutionary models (revolution).


So, the future capability that will be explained later in this document is that machines will have the ability to remove humans from the real-time decision-making loop. This will greatly speed up decisions, both offensive and defensive. And these are not just point decisions (how, what, when, where, and why); they are adaptive command-and-control decisions: how much, how much over time, how much and where over time. This is an adaptive, analog distribution of force over space and time.

Before we get to the “new” approach, let’s look at a picture of the battlespace as it could be delivered today.

Today’s Commercial-Off-The-Shelf Technology

The hobbyist community has effectively commoditized the drone. While there will be enhancements to power systems that allow longer flight, remotely piloted vehicles can already be purchased by anyone. With a small upgrade to the drone controller, anyone can use the open-source Mission Planner[i] software to plan and execute missions for individual drones based on Google Earth / GPS data. These are not fully adaptive, goal-seeking systems, but if all goes well they can perform their human-defined mission, moving from point to point and performing selected actions. Another example: self-driving car developers have learned how to package more real-time observation skills into their ground systems. Even as presently developed, such vehicles could be used as weapons.
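To make the "point to point, not goal seeking" distinction concrete, here is a minimal sketch of what such a pre-planned mission reduces to. This is a hypothetical illustration in plain Python, not the Mission Planner file format or API; the names and coordinates are made up.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float        # latitude, decimal degrees
    lon: float        # longitude, decimal degrees
    alt_m: float      # altitude above ground, meters
    action: str = ""  # optional action to perform at this point

# A fixed, human-defined route: the mission is a list, not a goal.
mission = [
    Waypoint(52.5200, 13.4050, 50.0),
    Waypoint(52.5210, 13.4060, 50.0, action="capture_image"),
    Waypoint(52.5200, 13.4050, 0.0, action="land"),
]

def execute(mission):
    """Fly each waypoint in order; nothing here adapts if conditions change."""
    for wp in mission:
        print(f"fly to ({wp.lat}, {wp.lon}) at {wp.alt_m} m")
        if wp.action:
            print(f"  perform: {wp.action}")

execute(mission)
```

If anything in the environment changes mid-flight, this kind of mission has no mechanism to respond; that gap is what the rest of this paper addresses.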

Another View of Human Intelligence

There may be a perception that because the human brain is so complex, something equally complex is required for a machine to accomplish the same tasks. Some researchers are pursuing the goal of creating a machine that can fully emulate a human; however, this is not necessarily required for machines of war. A human is an example of a highly adaptive machine that can be challenged with an almost infinite variety of goals. It can use almost any tool. It can conceive of, and build, new tools. But if we look at the creation of war "systems," they do not have to be that capable. Even if you looked at individual humans and assigned them only a selective set of tasks, what they do is greatly simplified. And if you give up procreation responsibilities, things get even simpler. In fact, one might suggest that during working hours a human is really limited in what he or she can do with the tools and information available. No single human is responsible for all positions and all knowledge in order to make all tactical and strategic decisions. A pilot flying an aircraft has only a few controls at his or her disposal. The pilot has to decide where to go and what to do, but the options are really limited.

The Conventional Technology Options and Issues (or: Why isn't everything automated today?)

Now that we have explained why the problem is not so big, it is important to understand why everything has not already been automated. Perhaps it is the technology that has been applied.

First there is conventional IF-THEN-ELSE logic. Whatever programming language you might choose, this approach works. If you ask programmers whether they can write a program to solve a problem, the likely answer is "yes," provided you can explain the problem and the solution; it is simply a matter of time and money. Yet even for simple systems we are asking machines to interpret complex information sets in order to pursue goals on their own. That brings us to the mathematicians who use predicate calculus, which provides a formal way of defining functional relationships between information items. But the domain expert is probably not a mathematician or a programmer, so now we have a cost and schedule issue; neither schedules nor pocketbooks are infinite. In addition, the domain expert is not likely to be able to explain the problem (and the desired solution) in a manner that is easily understood by the software engineer or the mathematician. Every time a concept is transferred from one individual to the next, something is usually lost in translation. Again, this results in long development and debug cycles. This is likely why complex problems have not been addressed with conventional programming.
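A minimal sketch makes the brittleness visible. The thresholds, categories, and function below are entirely hypothetical; the point is only that every new sensor or edge case forces the programmer to anticipate another explicit branch.

```python
def respond(threat_range_km, threat_speed_mps, friendly_nearby):
    """Conventional IF-THEN-ELSE engagement logic (hypothetical thresholds)."""
    if friendly_nearby:
        return "hold"      # avoid collateral damage
    elif threat_range_km < 1 and threat_speed_mps > 100:
        return "engage"    # fast, close threat
    elif threat_range_km < 5:
        return "track"     # close, but not yet urgent
    else:
        return "observe"   # everything else falls through here
    # Adding weather, sensor confidence, fuel state, rules of engagement...
    # each new factor multiplies the branches someone must write and test.

print(respond(threat_range_km=0.8, threat_speed_mps=150, friendly_nearby=False))
```

Each branch is a discrete, binary decision; there is no notion of "how much," which is exactly the analog quality argued for later in this paper.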

Then there are the neural net designers. Using the human brain as a model, they expose a neural net to patterns and teach it value systems. The resulting system interpolates between what it was taught and what it sees. Unfortunately, teaching neural net systems takes a lot of time, and if they are not appropriately taught then, just like humans, they can make bad judgment calls. In addition, if you want to add new sensors (information sources) to the system, the neural net may have to be completely retaught. There are also researchers who want systems to learn on their own (just like humans), and there are people concerned that weapon systems might learn on their own how to switch sides (because those systems decided on their own that it was the right thing to do at the time). Another issue with neural nets is that they cannot easily explain why they did what they did. Perhaps we are not quite ready to turn human evolution over to a self-learning machine.
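The retraining problem can be shown with the simplest possible model, a single learned weight vector. This sketch is a deliberate oversimplification under hypothetical shapes; real networks are larger, but the shape mismatch is the same.

```python
import numpy as np

n_sensors = 4
weights = np.random.randn(n_sensors)  # stands in for weights learned from data

def score(sensor_readings):
    """Fuse sensor readings into one score using the learned weights."""
    return float(weights @ sensor_readings)

print(score(np.ones(4)))  # works with the sensors it was trained on

# Add a fifth sensor: the learned weights no longer match the input shape,
# so the prior training cannot simply be reused.
try:
    score(np.ones(5))
except ValueError as err:
    print("retraining required:", err)
```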

Threats and Opportunities and a Value System

Now back to our problem space. If we expect to automate the battlespace at all levels, what would this system look like? Whether it is offense- or defense-based, one is dealing with competing capabilities and competing goals. For any kind of automation, we are dealing with measurable entities (measurable information items). These items can be treated as threats or opportunities, and sometimes as both when applied to different parts of the problem space. All items are measurable.

Example: when humans perform tasks during scheduled working hours, they are constantly balancing opportunities and allocating the resources at their disposal in order to accomplish multiple simultaneous tasks. The human is inhaling and exhaling, eating, moving toward or away from obstacles, and performing tasks as appropriate; operating alone or with others; collaborating as needed and when asked. The human's value system (their needs) is used to prioritize tasks and allocate resources, and that value system changes and adapts to the situation. The human's history, biases, knowledge, and risk tolerance apply a weight (a value) to the different factors. What we have just described is an analog system; a minimal sketch of this kind of weighted balancing appears below.

Now, for our autonomous battlespace, we have goals and objectives (offensive and defensive). In the future machines may set their own goals and strategies; however, in the near term humans will likely retain control. So, in the interim, it will be up to humans to create policies that control the behavior of machines.
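Here is the promised sketch: measurable items are weighted by a human-authored value system, and resources are allocated in proportion to the resulting priorities rather than by a binary rule. All item names and weights are hypothetical.

```python
value_system = {            # human-authored weights (the "value system")
    "perimeter_breach": 0.9,
    "low_fuel":         0.6,
    "recon_target":     0.4,
}

observations = {            # current measurements, normalized to 0..1
    "perimeter_breach": 0.2,
    "low_fuel":         0.8,
    "recon_target":     0.5,
}

# Priority = weight * measurement; allocation is proportional, not on/off.
priorities = {k: value_system[k] * observations[k] for k in value_system}
total = sum(priorities.values())
allocation = {k: p / total for k, p in priorities.items()}

for task, share in sorted(allocation.items(), key=lambda kv: -kv[1]):
    print(f"{task}: {share:.0%} of available resources")
```

As the observations change, the allocation shifts continuously; nothing has to be reprogrammed, only re-measured.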

The Paradigm Shift

The paradigm shift described in this paper is that a new information model will be needed to both define and execute policy information. This model will keep humans in control (humans-on-the-loop) while keeping them out of the real-time loop. Policies will be created by humans who understand the capabilities of their machines and how those capabilities should be deployed. A hierarchy of battlespace management will be deployed, from individual units (devices) up through a loosely coupled chain of command. Policies will define how organizations of machines can come together and break apart to fulfill the broad objectives of the battlespace. Since the policies will be created by humans, the decisions and actions of units, teams, and battalions will be traceable to the policies, and then to the humans who created them; a sketch of that traceability follows. Organizations that control these systems (and systems of systems) will be able to monitor their competition and decide how to spend their money (offense and defense, small and large, short term and long term).
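One way to picture the traceability claim is as a data structure: every recorded action links to the policy it executed, and every policy to its accountable human author. The field names below are hypothetical illustrations, not a KEEL artifact.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    author: str    # the accountable human who wrote the policy
    rules: dict    # the weights / thresholds the unit executes

@dataclass
class ActionRecord:
    unit_id: str
    action: str
    policy: Policy  # each decision traces to a policy, then to its author

p = Policy("screen_sector_7", author="Cdr. A. Example",
           rules={"engage_threshold": 0.8})
log = [ActionRecord("uav-042", "track_contact", p)]

for rec in log:
    print(f"{rec.unit_id} did '{rec.action}' under policy "
          f"'{rec.policy.name}' authored by {rec.policy.author}")
```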

What if Accomplishing this is Easy?

If we stopped at this point, one could ask: "What is new? We have just continued our evolutionary work and automated more 'stuff.'" However, the technology we are describing exists now, and it is simple to use. It is platform- and architecture-independent (not tied to any specific hardware platform or software development environment), and it occupies a very small memory footprint, which means it can be implemented in the small hobbyist drones available today.

When we say simple, we mean that you can be productive in a week and working on complex policies the next. Even if you are not a policy expert (defining who to shoot and when, or choosing between one tactic and another, ...), it is easy to create the policy and test as you go. You don't need a team to support the process, so anyone who wants to build and test a policy describing how machines should pursue goals can start the process and see results in a very short time. This may not be impressive if you are competing with a brick, because the brick will not change its tactics or strategies. But in a conflict domain, the tactic that works one day will have to change the next to keep up. So the primary paradigm shift comes with ease of use; the secondary driver is that complex behaviors can be deployed on very low-cost platforms.

Knowledge Enhanced Electronic Logic (KEEL) Technology[ii]

The technology that allows humans to package policies controlling the behavior of our battlespace systems and devices is KEEL Technology. It was introduced to NATO in an offensive role in 2014[iii] and in a defensive role in 2015[iv]. KEEL Technology allows domain experts / subject matter experts (SMEs) to create and test policies and to auto-generate conventional code that can be handed off to software engineers for insertion into the target system or device. No "calculus" (in the conventional sense) is required. Several years ago a 13-year-old learned and used KEEL in only a few hours; more recently, a 15-year-old created adaptive policies for an Arduino (hobbyist) drone. KEEL Technology is supported by the KEEL Dynamic Graphical Language, which makes it easy to create and test policies and to "see the information fusion process" in action. You can "see the system think" through a process called language animation.

It's simple: if you know what a bar graph looks like, and you understand that a taller bar is more important than a shorter bar, then you are on your way to being able to create an adaptive KEEL-based policy that can run in a device, in a computer, or distributed across the cloud.
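The bar-graph intuition can be sketched in a few lines. To be clear, this is an analogy to the idea of continuously re-ranked, weighted factors, not KEEL itself; the factor names and values are hypothetical.

```python
# Each factor is a "bar" whose height is its current importance.
factors = {"threat": 0.7, "fuel_reserve": 0.3, "mission_progress": 0.5}

def tallest_bar(factors):
    """The tallest bar is the factor that drives behavior right now."""
    return max(factors, key=factors.get)

print(tallest_bar(factors))     # -> 'threat'
factors["fuel_reserve"] = 0.9   # the situation changes...
print(tallest_bar(factors))     # -> 'fuel_reserve': priority adapts
```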

Decisions and actions are of three types:
Go/No-go (do something, or refrain from doing something)
Select the best option
Allocate resources (do so much of some number of things)

More complex decisions are combinations of all of the above. These decisions can be distributed and shared across the hierarchy of systems and devices. The policies will define how and when to share information, and how to operate when information links are broken (just like policies for human organizations). A sketch of the three decision types follows.
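Here is the promised sketch of the three decision types. The inputs, thresholds, and names are hypothetical; only the shape of each decision comes from the list above.

```python
def go_no_go(score, threshold=0.5):
    """Type 1: act, or refrain, based on whether a fused score clears a bar."""
    return score >= threshold

def select_best(options):
    """Type 2: pick the single option with the highest score."""
    return max(options, key=options.get)

def allocate(demands, budget):
    """Type 3: split a finite resource in proportion to demand."""
    total = sum(demands.values())
    return {k: budget * v / total for k, v in demands.items()}

print(go_no_go(0.72))                                  # True
print(select_best({"route_a": 0.4, "route_b": 0.9}))   # route_b
print(allocate({"sector_1": 2.0, "sector_2": 1.0}, budget=30))
```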

Summary

Given that humans “in” the battlespace can be replaced by software applications and devices, how will the questions posed by the Megacities/Dense Urban Areas theme be addressed?

Situational understanding: Information will be collected and abstracted into measurable terms. Confidence will be determined and assigned by weighted factors in the system. The KEEL-based systems will be answering these questions throughout the hierarchy:
What does it all mean?
What should I {the element within the battlespace} do about it?

Human-created policies will define how the entities should (and are allowed to) adapt. They will decide whether they can operate on their own, whether they can ask for help, and when to switch objectives; a minimal sketch of such an adaptation policy appears below. All of this can be accomplished according to the human-created policies. This does not mean humans are without responsibility. Opposing forces will continually update their tactics and strategies (value systems), and new sensors will be introduced to provide better and better information. With KEEL it is easy to add new information items to a policy, but it is still work for humans. Plus there will be after-mission reviews: Is the value system correct? Did the system (or system of systems) perform as desired? Could it have performed better if the policy were adjusted? Was the system tricked? How can this be avoided in the future? How has the human population in the battlespace responded? Are the political, social, and economic impacts appropriately considered in the system policies? Humans are still in control.
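A hypothetical sketch of such an adaptation policy: the entity chooses among operating alone, asking for help, or switching objectives, using thresholds a human chose in advance. The inputs and cutoffs are illustrative only.

```python
def adapt(capability, threat_level, comms_up):
    """Human-authored adaptation rules for a single entity (hypothetical)."""
    if capability >= threat_level:
        return "operate alone"
    if comms_up:
        return "ask for help"
    return "switch to fallback objective"  # degrade gracefully when cut off

print(adapt(capability=0.8, threat_level=0.5, comms_up=True))
print(adapt(capability=0.3, threat_level=0.9, comms_up=False))
```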

Freedom of movement and protection: Mathematically explicit policies will be created and executed. The future will resemble a chess match, because adversaries will adjust their tactics and strategies (and acquire devices with different capabilities) to probe the weaknesses of the opposition. There will be a transition from training humans (starting over with every warfighter) to continually refining the operational policies.

Expeditionary operations: Policies of every type will be developed. They will be constantly upgraded and changed. It will be a war of information and disinformation.

Future training challenges: Training individuals in the use of KEEL will be easy. Much more emphasis will be placed on tactics, strategies, and information warfare (trickery and deceit), where the effort will be to convince the opposition to shoot itself in the foot. Some of this will be social warfare: teaching humans how to interact with the machines.

The platforms of conflict: By creating KEEL-based policies for autonomous systems, the systems and devices can operate independently; they can automatically decide when a group or team would be more or less effective and self-organize, automatically create a command hierarchy, and pursue goals based on mathematically explicit policies. They can use a value system understood across the entire spectrum of devices, so they can respond almost immediately to change anywhere in the battlespace. Unlike humans, these machines can have a self-value determined by their owners. Unlike present organizations that have to recruit suicide bombers, devices can be used (by the "system") to probe defenses; their destruction will be assumed and accepted as part of the overall human-controlled strategy. The result is that one ends up with platforms executing the "best" tactics and the "best" strategies as determined by the human chess players.

NOW (For Organizations That Can Accept New Ideas)

KEEL Technology is available now, not 20 years from now. It is possible to create these adaptive policies today. In the past, governments have had the luxury of fighting wars with individual humans, trusting that those humans would behave in a desired manner. When some humans fail, failure is accepted, because they were human. When machines fight the wars, it will not be acceptable to mass-produce bad behaviors. KEEL allows policies to be created and executed with mathematical precision, and those policies are 100% auditable. Organizations that understand the potential of KEEL Technology first will have an advantage over those "late to the table," just as an experienced chess player has an advantage over a novice. Granted, it will take time to automate the behaviors of the entire battlespace hierarchy. It will take far longer using conventional approaches.

The challenge to the US military is this: does it want to be a leader or a follower in using KEEL Technology in its autonomous systems? If a new technology is available that is easy to use, can change how conflicts are fought, and can determine who wins and who loses, then the balance of power can shift almost overnight. New platforms can be created and deployed in months rather than years, and can cost hundreds of dollars, not billions. An organization with what appears to be superior capabilities one day can be following a small terrorist cell or individual anarchists the next. Government "experts" who are paid for their knowledge may reject the idea that anything new can be invented that they do not know about, or that they have not invented themselves (cognitive dissonance).

KEEL is not "artificial intelligence." It is an enabling technology that makes it easy to package human judgment and reasoning skills (expertise) into machines, so the machines can behave as if the subject matter experts, operating with their rules of engagement, organizational structures, tactics, and strategies, were deployed in small, inexpensive, disposable devices.

Are you ready?

End Notes

[i] Mission Planner, http://planner.ardupilot.com/

[ii] Knowledge Enhanced Electronic Logic (KEEL) Technology; http://compsim.com/Papers2014/About_KEEL.pdf ; http://compsim.com/military/index.html

[iii] NATO MCDC Workshop, http://www.compsim.com/Papers2014/News%20Release%20-%20MCDC2014.pdf

[iv] NATO Berlin 2015, http://www.compsim.com/Papers2014/News%20Release%20-%20KEEL%20Technology%20for%20Counter-UAS11162015.pdf

