7 December 2021

Will Artificial Intelligence Help or Hurt Cyber Defense?

Dan Lohrmann

“The U.S. is struggling with a labor shortage that is hobbling its economic recovery, but companies are not sitting still as they work to keep production up and running. As these job vacancies increase, they are turning to automation to pick up any slack.

“Orders for new robots have reached an all-time high in 2021.”

“From fast food to farming, Covid-19 has accelerated the rise of the worker robots. This in turn will put more jobs at risk and make the need to reframe society ever more urgent.”

Nevertheless, the article from The Guardian also points out:

“There can be no doubt that the pandemic and the associated worker shortage are accelerating the drive toward deploying artificial intelligence, robotics and other forms of automation. In the UK, the trend is being further amplified as Brexit’s impact on the workforce becomes evident. However, the reality is that most of these technologies are unlikely to arrive in time to offer a solution to the immediate challenges faced by employers. …

“Over the course of a decade or more, however, the overall impact of artificial intelligence and robotics on the job market is likely to be significant and in some specific areas the technologies may lead to dramatic change within the next few years. And many workers will soon confront the reality that the encroachment of automation technology will not be limited to the often low-paying and less desirable occupations where worker shortages are currently concentrated. Indeed, many of the jobs that employers are struggling to fill may prove to be highly resistant to automation. At the same time, better-paying positions that workers definitely want to retain will be squarely in the sights as AI and robotics continue their relentless advance.”

Which brings us to the question of AI. The number of current (and future) jobs that robots can truly fill will depend on advances in AI, which many now group under the broader umbrella of “automation.”

For example, one recent article offers “43 Jobs That’ll Soon Be Lost to Automation”: “Workers have long feared losing jobs to newcomers, but the threat has changed in the digital age, with automated technologies posing a new form of competition. With 2.3 million already present in the global workforce, robots are now projected to supplant 20 million manufacturing jobs by 2030, including 1.5 million in the United States. The shock of a pandemic is expected to accelerate this shift, as industries turn to technology to alleviate financial losses. The jobs that follow are poised to become increasingly automated, including order-taker positions at your local McDonald's.”

Automation is reshaping more than the job market; it is also changing both sides of the cyber battle. Today’s hostile threat landscape has led organizations such as Microsoft to use AI as part of their internal and external cybersecurity strategy. “We’re seeing this incredible increase in the volume of attacks, from human-operated ransomware through all different kinds of zero-day attacks,” said Ann Johnson, corporate vice president of security, compliance and identity at Microsoft.

One of the most high-profile uses of AI this year occurred at the Olympic Games in Tokyo, when Darktrace AI identified a malicious Raspberry Pi Internet of Things (IoT) device that an intruder had planted in the office of a national sporting body directly involved in the Olympics. The solution detected the device port scanning nearby devices, blocked those connections, and supplied human analysts with insights into the scanning activity so they could investigate further.

“Darktrace was able to weed out that there was something new in the environment that was displaying interesting behavior,” Darktrace Global Chief Information Security Officer Mike Beck said. Beck noted a distinct change in behavior in the communication profiles inside that environment.
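
Darktrace’s detection logic is proprietary, but the behavioral signal described here (a new device suddenly fanning out to many ports on nearby hosts) is simple to illustrate. The sketch below uses an assumed flow-record format and a fixed threshold purely for illustration; a real system would learn per-device baselines rather than hard-code limits.

```python
from collections import Counter, defaultdict, deque

# Illustrative threshold: distinct (host, port) targets one source may touch
# within the window before being flagged. A real system learns baselines.
SCAN_TARGET_THRESHOLD = 100
WINDOW_SECONDS = 60

def find_port_scanners(flows):
    """Flag source IPs that contact unusually many distinct (host, port)
    targets inside a sliding time window -- a crude port-scan heuristic.

    `flows` is an iterable of (timestamp, src_ip, dst_ip, dst_port).
    """
    window = deque()                 # flow records still inside the window
    targets = defaultdict(Counter)   # src_ip -> Counter of (dst_ip, port)
    suspects = set()
    for ts, src, dst, port in sorted(flows):
        window.append((ts, src, dst, port))
        targets[src][(dst, port)] += 1
        # Evict records that have aged out of the window.
        while window and window[0][0] < ts - WINDOW_SECONDS:
            _, old_src, old_dst, old_port = window.popleft()
            counts = targets[old_src]
            counts[(old_dst, old_port)] -= 1
            if counts[(old_dst, old_port)] == 0:
                del counts[(old_dst, old_port)]
        if len(targets[src]) >= SCAN_TARGET_THRESHOLD:
            suspects.add(src)
    return suspects

# Example: one host probing 200 ports on a neighbor in a few seconds.
flows = [(i * 0.1, "10.0.0.99", "10.0.0.5", 1000 + i) for i in range(200)]
print(find_port_scanners(flows))  # -> {'10.0.0.99'}
```

In the Tokyo incident, the notable part was not the heuristic itself but that the anomaly was flagged without any predefined signature for the rogue device, and that analysts were handed the supporting context to investigate.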

THE DARK SIDE OF AI

Back in May, I wrote this piece asking, “AI Is Everywhere — Should We Be Excited or Concerned?”

I covered plenty of good, bad and ugly examples of AI in that blog, and I also previewed a talk by Bruce Schneier that he gave at the 2021 RSA Conference. Schneier believes that, initially, AI analysis will favor hackers. “When AIs are able to discover vulnerabilities in computer code, it will be a boon to hackers everywhere,” he said.

Schneier’s full keynote is worth watching in its entirety. And he is not the only expert sounding this alarm; one account of a recent industry event, quoting two experts on the attackers’ automation advantage, put it this way:

“The large part of the problem, as both experts see it, is that attackers are using A.I. and automation on a less complex but still very effective scale that allows them to exploit flaws in security systems. …

“‘The bad guys are crushing many of us in terms of automation,’ he said. ‘They’re getting much, much better at using intelligent systems and A.I. to do reconnaissance, which allows them to narrow down targets very effectively. They’re usually using A.I. to decompose software to figure out where vulnerabilities exist extraordinarily effectively.’

“When asked to offer advice at the conclusion of the event, Roese offered up a simple idea: ‘Don’t view A.I. in the security context as an added feature. You have to treat it as a core component of all things security, just like all things business process or all things application. Don’t compartmentalize it into a specialist team that, in isolation, deals with A.I. Develop and invest in the capability across the entire organization because it’s a tool, and if you don’t use it everywhere, you’re basically leaving something on the table.’”
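
Schneier’s prediction about AI-discovered vulnerabilities and Roese’s point about attackers “decomposing software” both extend something that already exists in primitive form: mechanically scanning code for weak points. As a deliberately crude, illustrative stand-in (nothing like the machine learning the speakers describe), here is a toy static check that flags classically dangerous C calls; the list of risky functions is an assumption for the sketch.

```python
import re
import sys

# Toy "vulnerability reconnaissance": an assumed list of C calls that are
# classic sources of memory-safety and injection bugs. Real tooling
# (fuzzers, symbolic execution, ML-assisted analysis) goes far deeper.
RISKY_CALLS = {
    "gets":    "unbounded read into a buffer",
    "strcpy":  "no bounds check on the destination",
    "sprintf": "can overflow the output buffer",
    "system":  "possible command injection",
}

def scan_c_source(path):
    """Return (line_number, function, reason) for each risky call found."""
    findings = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for func, reason in RISKY_CALLS.items():
                if re.search(rf"\b{func}\s*\(", line):
                    findings.append((lineno, func, reason))
    return findings

if __name__ == "__main__":
    for lineno, func, reason in scan_c_source(sys.argv[1]):
        print(f"line {lineno}: {func}() -- {reason}")
```

Fuzzers, symbolic execution and, increasingly, ML-assisted tools already do this job far more thoroughly; the concern both speakers raise is what happens when that capability becomes dramatically better and cheaper.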

The Council on Foreign Relations recently wrote about AI code generation and cybersecurity, arguing that AI will revolutionize the way we write computer programs and that the U.S. government and industry need to invest in AI as a cybersecurity tool.

“With software becoming more secure and adept at defending against malware, the cyberattack threat environment has shifted towards phishing. But unlike in the past, where these attacks were predominantly email-driven, hackers are now focused on multiple channels such as mobile devices, apps, and web pages. Since phishing is a human problem that exploits emotions and deals with the psychology of fear and uncertainty, conventional computing methods are not sufficient to defend against them. One of the biggest problems? The browser.”
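
The browser is a natural choke point because it is where so many of those channels converge. As a toy illustration only, here is the kind of heuristic URL risk score a browser extension might compute before a page loads. The features and weights are assumptions made for this sketch; real anti-phishing defenses rely on trained models and URL-reputation feeds, not hand-written rules.

```python
import re
from urllib.parse import urlparse

# Illustrative keyword list and weights -- assumptions for this sketch,
# not how production anti-phishing systems actually score pages.
SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "secure", "account"}

def phishing_risk_score(url):
    """Return a crude risk score for a URL; higher means more suspicious."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 3        # raw IP address instead of a registered domain
    if host.count(".") >= 4:
        score += 2        # deeply nested subdomains hide the real domain
    if "-" in host:
        score += 1        # hyphens are common in look-alike domains
    if parsed.scheme != "https":
        score += 2        # no TLS on a page asking for credentials
    path = (parsed.path + "?" + parsed.query).lower()
    score += sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in path)
    if len(url) > 100:
        score += 1        # very long URLs push the real host out of view
    return score

# A look-alike credential page scores well above an ordinary URL.
print(phishing_risk_score("http://paypal-secure-login.example.com/account/update"))  # 5
print(phishing_risk_score("https://www.example.com/docs"))                           # 0
```

Rules like these are trivial for attackers to study and sidestep, which is exactly why the article argues that conventional computing methods are not sufficient against a problem rooted in human psychology.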

FINAL THOUGHTS

As I keep coming back to this topic of robots, AI, jobs, the future and cybersecurity, I ponder which of today’s solutions will become tomorrow’s problems. What are we creating now that we will later regret? It’s a very difficult topic to get your arms around, and one that I believe we need to keep re-examining. One recent survey of technology experts captured these concerns well:

“A large number of respondents argued that geopolitical and economic competition are the main drivers for AI developers, while moral concerns take a back seat. A share of these experts said creators of AI tools work in groups that have little or no incentive to design systems that address ethical concerns.

"Some respondents noted that, even if workable ethics requirements might be established, they could not be applied or governed because most AI design is proprietary, hidden and complex. How can harmful AI 'outcomes' be diagnosed and addressed if the basis for AI “decisions” cannot be discerned? Some of these experts also note that existing AI systems and databases are often used to build new AI applications. That means the biases and ethically troubling aspects of current systems are being designed into the new systems. They say diagnosing and unwinding the pre-existing problems may be difficult if not impossible to achieve.”
