
2 January 2019

What the future of artificial intelligence means for cybersecurity

By: Justin Lynch  

Future wars may be decided by computers fighting each other in cyberspace, some national security experts argue. But others warn that artificial intelligence offers cyber defenses a false sense of security, because the technology’s behavior can be hard to perfect.

Two new papers attempt to narrow that expansive philosophical divide and give an insight into how AI will be used for cybersecurity in the future.

Artificial intelligence-powered machines are “likely to become primary cyber fighters on the future battlefield,” Alexander Kott, chief scientist at the Army Research Lab, wrote in a Dec. 18 edition of the Army’s Cyber Defense Review.

“Today’s reliance on human cyber defenders will be untenable in the future,” Kott wrote, citing the growing complexity of future fighting. Those conflicts will be complicated by an exponentially larger number of connected devices, robotic battles and other coming technological innovations.

Kott, whose research focuses on cybersecurity for future battlefields, suggested that current AI technology is not adequate. He argued that because of the fluid nature of fighting, the number of connected devices and deficiencies in mobile computing power, machine learning “must experience major advances to become relevant to the real battlefield.”

Kott’s criticisms of current artificial intelligence capabilities on the battlefield mirror complaints about the technology from the broader cybersecurity industry.

Most advanced threat-detection technologies use some form of AI to flag potential incidents, but experts have told Fifth Domain that, if those tools are not properly calibrated, the resulting flood of false positives can create “threat fatigue.” For example, IT and security organizations receive roughly 17,000 malware alerts per week, according to a study by the research firm Ponemon. Only 19 percent of those alerts are considered reliable, and only 4 percent are investigated, according to the research.
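As a rough illustration of that triage gap, the Ponemon figures quoted above translate into only a few hundred investigated alerts out of thousands received each week; the short calculation below is simply that arithmetic worked out, not data from the study itself.

```python
# Back-of-the-envelope arithmetic using the Ponemon figures cited above.
weekly_alerts = 17_000
reliable = weekly_alerts * 0.19      # ~3,230 alerts deemed reliable
investigated = weekly_alerts * 0.04  # ~680 alerts actually investigated

print(f"reliable: {reliable:.0f}, investigated: {investigated:.0f}")
```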

Another paper in the same Dec. 18 edition of the Cyber Defense Review argues that blending different approaches to AI could enhance cyber defenses.

Most artificial intelligence can be categorized as either “symbolic” or “non-symbolic.”

Symbolic AI refers to automated reasoning over logic that has been pre-programmed, such as decision trees.

Non-symbolic AI refers to programs that learn from data, allowing them to recognize threats and flag anomalies on their own.
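A minimal Python sketch of the distinction follows; the event fields and thresholds are hypothetical, and neither paper specifies an implementation. The first function encodes a hand-written rule, while the other two learn a statistical baseline from data and flag deviations from it.

```python
# Illustrative only; field names ("src_host_known", "hour", "bytes_out")
# and thresholds are assumptions, not drawn from either paper.

def symbolic_detector(event):
    """Symbolic AI: pre-programmed logic, akin to one branch of a decision tree."""
    # Flag logins from unknown hosts outside normal working hours.
    return (not event["src_host_known"]) and not (9 <= event["hour"] <= 17)

def fit_baseline(training_events):
    """Non-symbolic AI: learn what 'normal' outbound traffic looks like."""
    volumes = [e["bytes_out"] for e in training_events]
    mean = sum(volumes) / len(volumes)
    std = (sum((v - mean) ** 2 for v in volumes) / len(volumes)) ** 0.5
    return mean, std

def anomaly_detector(event, mean, std, z_threshold=3.0):
    """Flag events that deviate sharply from the learned baseline."""
    if std == 0:
        return False
    return abs(event["bytes_out"] - mean) / std > z_threshold
```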

Researchers at the AI research firm Soar Technology have used both approaches in a Navy project to boost cyber defenses.

“To realize the full potential of AI, we must integrate its various forms in order to offset the limitations of each,” wrote the paper’s authors, Scott Lathrop and Fernando Maymí, who both work for Soar Technology. “No one approach will be sufficient because each approach is optimized for one specific set of problems at the expense of others.”
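One way such integration could look in practice is sketched below, reusing the two detectors from the earlier example. The fusion rule here, which treats agreement between the two signals as a high-confidence alert and a single signal as lower-priority review, is an illustrative assumption, not a description of the Soar Technology system.

```python
# Hypothetical fusion of the symbolic and non-symbolic detectors sketched above.

def blended_decision(event, baseline_mean, baseline_std):
    rule_hit = symbolic_detector(event)                              # explicit, auditable logic
    stat_hit = anomaly_detector(event, baseline_mean, baseline_std)  # learned baseline

    # Each approach offsets the other's blind spots: the rule catches known
    # bad patterns the baseline misses, while the baseline catches novel
    # behavior no rule anticipated.
    if rule_hit and stat_hit:
        return "alert"    # high confidence, investigate now
    if rule_hit or stat_hit:
        return "review"   # lower priority, helps limit threat fatigue
    return "ignore"
```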

Researchers are already addressing these and other current limits of AI.

Examples include autonomous deception tactics, deep learning research that uses high-powered computers, and autonomously distributing networks to prevent distributed denial-of-service (DDoS) attacks.

In July, the Pentagon signed a five-year, $885 million contract with Booz Allen Hamilton, a defense contractor, for an AI project. Specific details of the project are not clear.

In addition, the Pentagon released its AI strategy in August. That document suggested that in the short term the Department of Defense should develop “a database that captures potential or known cybersecurity vulnerabilities of the various sensors” that are essential to machine learning.
