1 January 2018

How DARPA sparked dreams of self-healing networks

By: Adam Stone 

Competitors in DARPA’s Cyber Grand Challenge applied artificial intelligence to both attack and defend cyber resources. The agency is looking to AI as a way to tackle the ongoing cyber threat.

On an August day in 2016, computer security gurus traveled to Las Vegas to prove that artificial intelligence, or AI, could find and fix flaws in software at machine speeds.

There, teams battled for nearly 12 hours to see whose automated systems could best identify and patch vulnerabilities in software, with the top three prizes ranging from $750,000 to $2 million. Judges had winnowed a field of 100 teams to seven finalists for the Defense Advanced Research Projects Agency-hosted Cyber Grand Challenge (CGC) Final Event.

Nearly 18 months later, DARPA officials and winning contestants agree: AI is fast emerging as a way to give Department of Defense agencies the edge in the ongoing cat-and-mouse battle for cyber supremacy.

“Current computer security is just too complicated,” said David Brumley. As CEO of Pittsburgh startup ForAllSecure, he helped lead the winning team. “There are too many processes involved. Downloading a patch, deciding to install it, making sure it doesn’t crash things. We need to reduce the human timeline, to make these responses happen in minutes rather than in hours or days.”
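To make that concrete, here is a minimal sketch, in Python, of what collapsing those manual steps into a single automated pass might look like. The service, patch, and test function below are hypothetical stand-ins invented for illustration, not ForAllSecure or DoD tooling.

    def run_regression_tests(binary):
        # Stand-in check; a real pipeline would rerun the service's full test suite.
        return binary.get("healthy", False)

    def auto_apply(service, patch):
        staged = {**service, **patch}        # install the patch into a staged copy
        if run_regression_tests(staged):     # "make sure it doesn't crash things"
            return staged                    # promote in minutes rather than days
        return service                       # reject a patch that breaks the service

    service = {"version": "1.0", "healthy": True}
    patch = {"version": "1.1", "healthy": True}
    print(auto_apply(service, patch))        # {'version': '1.1', 'healthy': True}

The point of the sketch is the compression of the timeline: fetching, deciding, and verifying all happen inside one automated decision rather than across a chain of human approvals.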

In launching the cyber event, DARPA challenged researchers to envision cybersecurity that is enhanced and empowered by machine learning. It’s an evolution that some outside the defense community have long been urging.

“Trading in stocks is now dominated by algorithms and human floor traders are largely superfluous. Why is this not a likely future for cyber conflict also and, if so, what are the implications for U.S. Cyber Command staffing and projects and overall U.S. cyber defense?” said Jason Healey, a researcher at Columbia University’s School of International and Public Affairs, in testimony last spring before the House Armed Services Committee.

At DARPA’s cyber event, the winning teams applied AI as both an offensive and a defensive tool. They taught machines to quickly and accurately exploit vulnerabilities in the systems of other teams, and used machine intelligence to rapidly deploy appropriate patches to their own systems.

The likelihood of AI working in a cyber scenario was “a very open question” prior to the competition, said Dustin Fraze, a program manager in DARPA’s Information Innovation Office. “We didn’t know if it would work, but the results of the CGC show that we are on the path to where automation can in fact identify vulnerabilities and find remediations.”

If that is so, it could have profound operational implications for military cyber operators who today find themselves too often playing catch-up in the face of a rapidly evolving threat landscape.

In short, military leaders envision self-healing networks.

Fraze described how CGC competitors tackled a typical scenario, in which an operator detects a vulnerability in software and uses it to attack an opponent’s systems. The opponent deploys a patch and in the next round the aggressor identifies that patch and devises a counter-exploit.

“Within 20 minutes we saw discovery, a patch, mitigation of that patch and a mitigation of that mitigation,” Fraze said. In the field today, “such an interaction would take days or weeks or months, if it were done manually.”
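A toy simulation, sketched here in Python, illustrates that cadence. The exploit names and patches are labels invented for the example; nothing below reflects an actual CGC system, only the tempo of discovery, patch, and counter-move.

    blocked = set()
    exploit = "overflow-v1"

    for minute in (5, 10, 15, 20):
        if exploit in blocked:
            # Mitigation of the mitigation: the attacker adapts to the new patch.
            exploit += ".mut"
        print(f"t+{minute}m  attacker fires {exploit}")
        blocked.add(exploit)                 # defender's automation ships a fix
        print(f"t+{minute}m  defender patches {exploit}")

At machine speed the whole exchange fits inside those 20 minutes; done by hand, as Fraze notes, the same back-and-forth would stretch over days or weeks.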

This is not to suggest that AI is going to dramatically reshape military cyber tomorrow. More than a year after the DARPA challenge proved it could be done, researchers say significant progress is still needed before machine learning can be used operationally as a large-scale cyber asset.

Brumley cautions that any military use of AI for cyber will have to take into account the possibility that the enemy will take a counter-AI approach: If our computers can learn, the adversary might try to teach them the wrong things.

“If network security becomes about AI, then people will start to attack the AI. People will teach the AI bad things,” he said. “If you give AI control and it learns bad things, it can easily spiral out of control.”

In addition, there are still some areas where AI struggles to learn quickly. A computer can be taught to see something that is broken, for example, but it’s harder for a machine to recognize something that might break.

“If you have something like a logic error — something that is hard to quantify, where nothing has actually crashed — it can be difficult for an automated approach to see that,” Fraze said. “There are also behaviors that are acceptable in one application, that would represent enormous flaws in another application. It depends on the context, which is something humans intuitively understand and machines don’t.”
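A small, hypothetical example makes the point. The Python function below never crashes, so a crash-driven analysis sees nothing wrong, yet its behavior is a genuine flaw in an access-control context and perfectly acceptable in a plain text search.

    def is_authorized(user_roles, required_role):
        # Intended for a list of roles, but if a caller passes a single string,
        # the membership test silently becomes a substring check.
        return required_role in user_roles

    print(is_authorized(["viewer"], "admin"))        # False, as intended
    print(is_authorized("administrator", "admin"))   # True: no crash, wrong answer

Whether that second result is a bug depends entirely on what the function is guarding, which is the kind of context a human reviewer supplies intuitively and a machine does not.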

Despite the hurdles, researchers see promise in automating some of the routine work of cybersecurity. DARPA has no official program around AI in cyber right now, but with increasing acceptance of AI in the commercial world, researchers in DARPA and elsewhere continue to explore the possibilities.
