
Will artificial intelligence revolutionize cybersecurity?

MAY 4, 2016

With criminal hackers becoming more effective at breaking into computer systems, cybersecurity researchers, government agencies, and academics are looking to artificial intelligence to detect – and fight – cyberattacks.

Most people probably have no idea they encounter artificial intelligence technology at nearly every turn on the Internet. It's how retailers track shoppers' behavior and show them ads that attempt to match their tastes in clothing or electronics. 

While that's a relatively simple use of artificial intelligence, often known just as AI, researchers, entrepreneurs, and US government officials are investing heavily in moving far more advanced AI into health care for pursuits such as drug research, into automotive technology such as self-driving cars, and even into teaching computers how to track and defend themselves against hackers. 

In fact, within the past year, security startups, leading academics, government agencies, and some of the largest digital security firms in the country have invested heavily in AI technology for cybersecurity, believing that recent advancements in processing power could allow computers to outperform humans when it comes to many aspects of defending networks.

"Just imagine a world in which bots are out there looking for vulnerabilities and other bots or artificial intelligence is simultaneously poking holes, plugging holes, poking back," said Ryan Calo, a law professor and director of the Tech Policy Lab at the University of Washington, a think tank that examines cybersecurity and AI policy.

Those kinds of systems are already beginning to enter the marketplace. Last year, big data startup Splunk partnered with consulting firm Booz Allen Hamilton to offer artificial intelligence-powered services to help deter attacks. The cybersecurity firm Kaspersky Lab has patented technology aimed at eliminating false positives for machine learning algorithms.

This week, the White House announced it will host a series of summertime workshops to further explore the benefits of AI in the government and the private sector.

"AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery – adding to the challenge of predicting and controlling how complex technologies will behave," said Ed Felten, deputy US chief technology officer, in a statement announcing the initiative. 


Additionally, the Defense Advanced Research Projects Agency (DARPA), the Pentagon's research wing, recently announced plans to develop a program to use AI to uncover culprits – whether criminal gangs or nation-state hackers – behind cyberattacks. 

That's the kind of technology that can provide a leg up to security teams attempting to find attacks in reams of network traffic every day, said Steve MacLellan, chief executive officer of Blue Sky Management and Research, a firm that invests in cybersecurity startups.

"Humans are overwhelmed by data,” said Mr. MacLellan. "The promise of AI says, if I can teach the machine to dynamically adapt. If I’m getting these hundreds of different signals coming in, the machine learning part says ‘Hey, this one is more important than that one.'"

Indeed, the amount of data that cybersecurity professionals and researchers contend with can be overwhelming, and the amount of information on cyberattacks and malware is growing rapidly every day. 

"As a rule of thumb, AI benefits tremendously the more data that you have," said David Brumley, a computer science professor at Carnegie Mellon University and the cofounder of the cybersecurity startup ForAllSecure.

"We’re really in this nice time period where the amount of data we have and the sophistication of our algorithms give us much more accurate answers," he said.

Similarly, a startup that spun out of the Massachusetts Institute of Technology called PatternEx wants to harness the power of machines to fight off hackers. Its AI2 platform – unveiled in a paper last month – aims to combine big data technology with the expertise of human cybersecurity analysts in hopes of better understanding how to stop cyberattacks. 

Like other systems, AI2 combs networks for suspicious activity using a machine-learning algorithm that isn't supervised by humans. But since automated systems can only detect abnormalities – not attacks – Kalyan Veeramachaneni, the MIT researcher behind the project, designed the program so it doesn't generate an alert every time it spots something unusual, such as a security team running a routine penetration test.

Instead, AI2 only spits out 100 to 200 threats each day, giving human analysts the ability to label attacks by type, IP address, and similarity to known strains of malware, training the machine to get better at spotting attackers.
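That loop resembles what machine-learning researchers call active learning. The sketch below is an illustration of the pattern, not the actual AI2 implementation – the anomaly detector, classifier, and features are stand-ins chosen for the example: an unsupervised model scores a day's events, only the strangest couple hundred go to analysts, and their labels train a supervised model that sharpens the next day's ranking.

```python
# Rough sketch of the human-in-the-loop pattern described above -- NOT the
# actual AI2 code. An unsupervised detector surfaces a small daily batch of
# outliers, analysts label them, and a supervised model learns from those
# labels. Detector, classifier, and features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier

rng = np.random.default_rng(1)
todays_events = rng.random((10_000, 5))  # stand-in for one day of log features

# Step 1: unsupervised anomaly detection over everything seen today.
detector = IsolationForest(random_state=1).fit(todays_events)
anomaly_score = -detector.score_samples(todays_events)  # higher = stranger

# Step 2: hand only the top ~200 outliers to human analysts for labeling.
review_queue = np.argsort(anomaly_score)[::-1][:200]
analyst_labels = rng.integers(0, 2, size=review_queue.size)  # stand-in labels

# Step 3: fold the analysts' verdicts into a supervised model so that
# tomorrow's ranking reflects what they confirmed as real attacks.
classifier = GradientBoostingClassifier(random_state=1)
classifier.fit(todays_events[review_queue], analyst_labels)
```

Here the analyst labels are random stand-ins; in practice they would be the verdicts on the 100 to 200 events the analysts actually reviewed.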

PatternEx has already tested out the program using data from an unnamed e-commerce site, and plans to roll out the technology to a handful of Fortune 500 companies later this year.

But other companies drawing upon big data and AI to bolster cybersecurity aren't ready to cut the human out of the process entirely.

"The MIT system [AI2 ] is starting out with an unsupervised learning system," said Chris McCubbin, director of data science at Sqrrl, a cybersecurity startup. “There’s a lot of things that are unusual that the system’s not going to know about."

Still, many AI researchers and backers say that AI systems will eventually become smart enough to know the difference between an innocuous computer glitch and a malicious attack.

"As technology grows, you'll have smart houses, you'll have the Internet of things, you'll have all of these things are generating sensor data," said Blue Sky's MacLellan. "You need a platform that can consume that data."
