2 March 2024

WEF Is Waging War on Misinformation and Cyber Insecurity

Katrina Thompson

What is the greatest cyber risk in the world right now? Ransomware? Business Email Compromise? Maybe AI? Well, the last one is pretty close. According to the World Economic Forum, misinformation and disinformation are the most severe global risks of the next two years.

In its Global Risks Report 2024, the WEF posited that the post-pandemic world is at a "turning point," with the two threats capable of everything from undermining mental health to eroding human rights. Also of concern is the coalescing of AI power and technology, which is lowering the barrier to entry for would-be attackers.

Misinformation: What You Don't Know Can Hurt You

"It ain't what you don't know that gets you into trouble," humorist Mark Twain is credited for saying. "It's what you know for sure that just ain't so." The proliferation of convincing generative AI deepfakes has made this statement more relevant than ever.

AI and the Option of Truth

The WEF highlights this trend: "No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called 'synthetic' content, from sophisticated voice cloning to counterfeit websites." The fact that what we see may no longer reflect reality has some stunning implications, per the report.

Rampant misinformation and disinformation are poised to cause:
  • A radical disruption in elections
  • Political mistrust leading to polarization
  • Repression of human rights as authorities seek to crack down on information abuse – and perhaps step too far

The WEF warns that with bad actors leveraging easy-to-use AI tools to create fake content, the truths that guide our societies may be obscured and replaced by convincing propaganda that advances private agendas. "New classes of crimes will also proliferate, such as non-consensual deepfake pornography or stock market manipulation," it notes, and elections may be skewed by disinformation to the point where the democratic process itself is eventually eroded. It seems that AI-powered phishing attempts are only the start of AI's malicious potential.

What is being done?

Governments around the world are starting to turn their attention to AI legislation and to those who manufacture and illegally disseminate deepfakes online. The White House has released a Blueprint for an AI Bill of Rights, several US states are pursuing AI legislation of their own, and the EU has released its flagship AI Act. But is it enough?

The WEF argues that however encouraging these global gains may be, "the speed and effectiveness of regulation is unlikely to match the pace of [misinformation and disinformation] development."

Additionally, some of the measures already in place may be insufficient. The report states that research into deepfake detection technology is "radically underfunded" compared to the scope of the problem, and that even when synthetic content is labeled as such, the labels are ineffective, appearing either as a warning or in digital fine print that nobody sees. Encouragingly, China now requires AI-generated content to be watermarked.
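
To see why the report treats such labels as "digital fine print," consider a minimal sketch of metadata-based labeling. This assumes Python with the Pillow imaging library, and the label field names are invented for illustration; it is not how any particular platform implements its labels. The label rides along as optional metadata and vanishes the moment the file is re-saved without it:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # A stand-in image; imagine it came out of a generative model.
    img = Image.new("RGB", (64, 64), color="gray")

    # Attach a provenance label as PNG text chunks (field names invented
    # for illustration) and save the "labeled" synthetic image.
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", "example-model")
    img.save("labeled.png", pnginfo=meta)

    print(Image.open("labeled.png").text)
    # {'ai-generated': 'true', 'generator': 'example-model'}

    # A plain re-save does not carry the text chunks over, so the label
    # silently disappears.
    Image.open("labeled.png").save("stripped.png")
    print(Image.open("stripped.png").text)  # {}

Because the label lives outside the image data itself, stripping it requires no special tooling, which is part of why metadata-only labeling is considered weak on its own.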

Lastly, even when such warnings are heeded, the emotional impact of a falsified deepfake may be strong enough that the damage is already done. "For example," the WEF posits, "an AI-generated campaign video could influence voters and fuel protests, or in more extreme scenarios, lead to violence or radicalization, even if it carries a warning by the platform on which it is shared that it is fabricated content." If we thought the Cambridge Analytica misinformation scandal rocked the world, we may only be half-braced for what is to come. "This is your disinformation campaign; this is your disinformation campaign on AI."

Data is king. Let's make a king.

In today's digital climate, data is power. When cybercriminals can't steal enough of it to make a difference, they manufacture it instead. AI's sheer capacity to produce information may quickly outstrip the amount of data humans have produced so far, tilting the balance and making truth "up for grabs" by the loudest and most convincing voice.

As bystanders look to legislation to crack down on abuses, outsized power is inadvertently transferred to governments to act as the source of truth. So much hangs in the balance, and there are no clear answers. One thing is certain: we need to be wary, cross-check information, and stay aware that behind everything we see, there could be something else. A critical-thinking mindset may be the best defense.

More Digital Threats on the Horizon

While the reliability of digital information received a lot of reporting real estate, two other security trends in the report are worth noting.

First, too much AI power might soon be in the hands of too few. Noted the WEF, "The extensive deployment of a small set of AI foundation models, including in finance and the public sector, or overreliance on a single cloud provider, could give rise to systemic cyber vulnerabilities, paralyzing critical infrastructure." It’s important not to place too many eggs in one basket, no matter how powerful that basket claims to be. The more a company impacts the market and the global economy, the more fail-safes there should be.

And the problem isn’t only that companies are creating a single point of failure; it’s also the risk that AI influence is being consolidated and monopolized, exacerbating already-present disinformation concerns and potentially perpetuating the cultural biases of those dominating the AI landscape, in this case the Global North.

Weathering the Cyber Landscape

When queried about the current risk landscape, stakeholders ranked general cyberattacks fifth on the list, right behind extreme weather, misinformation, political polarization, and the cost-of-living crisis. Keep in mind that respondents were not drawn from a pool of security professionals or IT experts; they represent a cross-section of individuals across “academia, business, government, the international community and civil society.” As the world becomes increasingly (and almost ubiquitously) digital, cyber threats become top of mind for everyone. What was once the obscure profession of a few is now the pressing concern of many.

While the predictions seem dire, Saadia Zahidi, Managing Director at the World Economic Forum, states that there is still hope. “The future is not fixed,” she concludes, hoping that the 2024 Global Risks Report will serve as a “vital call to action for open and constructive dialogue among leaders” so that problems can be understood and mitigated before they reach the scale that many are forecasting today.
