
13 May 2023

Human Error Drives Most Cyber Incidents. Could AI Help?

Tomas Chamorro-Premuzic


The cost of cybercrime is expected to reach $10 trillion this year, surpassing the GDP of every country in the world except the U.S. and China. Furthermore, the figure is estimated to rise to nearly $24 trillion over the next four years.

Although sophisticated hackers and AI-fueled cyberattacks tend to hijack the headlines, one thing is clear: The biggest threat is human error, which accounts for over 80% of incidents. This is despite the exponential increase in organizational cyber training over the past decade, and despite heightened awareness and risk mitigation across businesses and industries.

Could AI come to the rescue? That is, might artificial intelligence be the tool that helps businesses keep human negligence in check? And if so, what are the pros and cons of relying on machine intelligence to de-risk human behavior?

Unsurprisingly, there is currently a great deal of interest in AI-driven cybersecurity, with estimates suggesting that the market for AI-cybersecurity tools will grow from just $4 billion in 2017 to nearly $35 billion by 2025. These tools typically apply machine learning, deep learning, and natural language processing to detect and counter malicious activity, including cyber-anomalies, fraud, and intrusions. Most of them focus on exposing pattern changes in data ecosystems, such as enterprise cloud, platform, and data warehouse assets, with a level of sensitivity and granularity that typically escapes human observers.
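To make the "pattern change" idea concrete, here is a minimal sketch of anomaly detection using an off-the-shelf isolation forest. The event features and data are hypothetical assumptions for illustration only, not drawn from any particular product mentioned above.

```python
# Minimal sketch: an unsupervised model flags events whose features deviate
# from the bulk of historical activity. Feature set and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-event features: [login_hour, megabytes_transferred, failed_attempts]
normal_events = np.column_stack([
    rng.normal(10, 2, 1000),   # logins clustered around working hours
    rng.normal(50, 15, 1000),  # typical transfer volumes
    rng.poisson(0.2, 1000),    # rare failed attempts
])
new_events = np.array([
    [3, 900, 6],               # 3 a.m. login, huge transfer, many failures
    [11, 55, 0],               # ordinary-looking event for comparison
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# -1 = flagged as anomalous, 1 = consistent with the learned baseline
print(detector.predict(new_events))
```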

For example, supervised machine-learning algorithms can classify malicious email attacks with 98% accuracy, spotting "look-alike" features learned from human-labeled examples, while deep-learning recognition of network intrusions has achieved 99.9% accuracy. As for natural language processing, it has shown high reliability and accuracy in detecting phishing activity and malware by extracting keywords from email domains and message bodies where human intuition generally fails.
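As a rough illustration of the supervised approach, a text classifier can be trained on labeled emails and then score new messages. The pipeline and the handful of made-up examples below are assumptions for demonstration; they do not reflect the systems behind the accuracy figures cited above.

```python
# Toy sketch of supervised phishing classification: TF-IDF features over email
# text plus a linear classifier. Real systems train on large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (hypothetical labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Please verify your password to restore account access"]))
```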

As scholars have noted, though, relying on AI to protect businesses from cyberattacks is a "double-edged sword." Most notably, research shows that simply injecting 8% of "poisonous" or erroneous training data can decrease AI's accuracy by a whopping 75%, which is not dissimilar to how users corrupt conversational user interfaces or large language models by injecting sexist preferences or racist language into the training data. As ChatGPT often says, "as a language model, I'm only as accurate as the information I get," which creates a perennial cat-and-mouse game in which AI must unlearn as fast and as frequently as it learns. In fact, AI's reliability and accuracy in preventing past attacks is often a weak predictor of its ability to stop future ones.
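The poisoning idea can be sketched in a few lines: flip the labels of a small fraction of training examples and compare model accuracy before and after. The synthetic data and simple model below are illustrative assumptions, so the size of the drop will not match the research figures cited above.

```python
# Rough illustration of training-data poisoning: flip the labels of a small
# fraction of training examples and compare test accuracy before and after.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 8% of the training labels by flipping them (an attacker's tampering).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.08 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```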

Furthermore, trust in AI tends to result in people delegating undesirable tasks to AI without understanding or supervision, particularly when the AI is not explainable (which, paradoxically, often coexists with the highest level of accuracy). Over-trust in AI is well-documented, particularly when people are under time pressure, and often leads to a diffusion of responsibility in humans, which increases their careless and reckless behavior. As a result, instead of improving the much-needed collaboration between human and machine intelligence, the unintended consequence is that the latter ends up diluting the former.

As I argue in my latest book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, there is a general tendency to welcome advances in AI as an excuse for our own intellectual stagnation. Cybersecurity is no exception: we are happy to let new technology protect us from our own careless or reckless behavior and let ourselves "off the hook," since we can transfer the blame from human error to AI error. To be sure, this is not a happy outcome for businesses, so the need to educate, alert, train, and manage human behavior remains as important as ever, if not more so.

Importantly, organizations must continue their efforts to increase employee awareness of the constantly changing landscape of risks, which will only grow in complexity and uncertainty as AI is adopted on both the attacking and defending ends. While it may never be possible to completely extinguish risks or eliminate threats, the most important aspect of trust is not whether we trust AI or humans, but whether we trust one business, brand, or platform over another. This calls not for an either-or choice between relying on human or artificial intelligence to keep businesses safe from attacks, but for a culture that leverages both technological innovation and human expertise in the hopes of being less vulnerable than others.

Ultimately, this is a matter of leadership: having not just the right technical expertise or competence, but also the right safety profile at the top of the organization, and particularly on boards. As studies have shown for decades, organizations led by conscientious, risk-aware, and ethical leaders are significantly more likely to foster a culture and climate of safety for their employees, in which risks remain possible but become less probable. To be sure, such companies can be expected to leverage AI to keep their organizations safe, but it is their ability to also educate workers and improve human habits that will make them less vulnerable to attacks and negligence. As Samuel Johnson rightly noted, long before cybersecurity became a concern, "the chains of habit are too weak to be felt until they are too strong to be broken."
