13 March 2023

Terrorists Will Use Artificial Intelligence, Too

Sam Hunter & Joel Elson

As AI technology has exploded into public view, it has raised complex questions about the future of education, the employment landscape for arts and media, and even the nature of sentience. These are all important conversations. But, as terrorism researchers with a particular focus on new and emerging threats, we find ourselves asking a different, darker question:

How will extremists use AI to hurt people?

Stated bluntly, AI will allow malign actors to develop plans and ideas that were more challenging, or even impossible, before such technology became widely accessible. In the coming years, we believe that understanding the scope of the threat and developing solutions will be critical. As we’ve explored in previous work on cognition and creativity, expertise comprises two components: knowledge (i.e., possessing information) and the organization of that knowledge. The internet has given just about anyone access to knowledge. AI, however, organizes that information in a useful way and delivers it in a user-friendly form.

Witness, for example, schoolteachers requesting lesson plans complete with discussion questions, exercises, quizzes, and worksheets. Colloquially, we can think of AI as a pocket expert. On any topic. At any given moment. Indefinitely. Without fatigue. There are four key reasons why tools like these can be dangerous in the hands of terrorists.

Lowered Bar to Entry

With a pocket expert available to offer deep, organized information on any topic instantaneously, many of the typical barriers to malign acts fall away. An extremist group no longer needs, for example, to recruit and incentivize a chemical engineer to join its cause. Instead, information about dangerous compounds can be summarized and synthesized into accessible, digestible, bite-sized chunks, perfect for an extremist with no prior knowledge of chemistry.

Reduced Cost

Experts are expensive. AI, in all its various forms, took significant resources to develop but is now cheap to use. Broad access requires affordability, and AI is affordable and will remain so. In many cases, use of AI is free, with tools packaged into existing software. In the case of ChatGPT, a nominal fee lets users with entry-level programming knowledge tap directly into the technology through its API.
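
To give a sense of how low that bar sits, here is a minimal sketch of the kind of programmatic access we mean, using OpenAI's Python client as it existed in early 2023. The model name, placeholder key, and prompt are illustrative assumptions on our part, not a recipe from any particular source:

```python
# A minimal sketch of programmatic access to a chat model, using the
# openai Python client (v0.x, current at the time of writing).
# The model name and prompt are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; obtained for a nominal monthly fee

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a subject-matter expert."},
        {"role": "user", "content": "Summarize the key steps in writing a lesson plan."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

A dozen lines of boilerplate stand between a curious novice and a tireless pocket expert; the same dozen lines would serve a malign actor equally well.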

Diverse and Integrated Expertise

Human collaboration is challenging. When experts collaborate successfully, they must communicate, coordinate, and integrate, a process that isn’t always straightforward and usually involves stumbles along the way. The ability to develop and synthesize information from diverse sources on diverse topics is a key, and concerning, feature of this emerging technology. ChatGPT isn’t just a pocket expert. It’s a pocket of experts who can be taught to work well together.
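
One way to picture this pocket of experts is simple persona chaining: ask the same model to answer as several different specialists, then ask it to reconcile their answers. The sketch below is our own illustration on a benign planning task, reusing the hypothetical client setup from the earlier example; the personas and the task are invented:

```python
# A sketch of persona chaining: several "experts" drawn from one model,
# then a synthesis step. Personas and the task are illustrative choices.
import openai

openai.api_key = "sk-..."  # placeholder

def ask(persona: str, question: str) -> str:
    """Query the model while it plays a single expert role."""
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return reply["choices"][0]["message"]["content"]

task = "How should a small town prepare for flood season?"
experts = ("a hydrologist", "a logistics planner", "a communications officer")
opinions = [ask(persona, task) for persona in experts]

# The model also plays the integrator, merging its own "experts."
summary = ask("a project manager", "Combine these views into one plan:\n\n" + "\n\n".join(opinions))
print(summary)
```

No recruiting, no scheduling, no interpersonal friction: the coordination costs that make human expert teams hard to assemble simply disappear.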

Iteration and Simulation

Staring into an asteroid field, Han Solo famously said, “never tell me the odds.” Malign actors want the odds, and AI can supply them. A sometimes-unsung feature of AI is its capacity for iteration and simulation: an AI can examine several scenarios and recommend the path with the greatest likelihood of success. The result is a set of new malevolent plans that are more likely to succeed in their destructive aims.
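
The pattern is easy to state in code. Below is a toy Monte Carlo sketch of “examine several scenarios, pick the likeliest to succeed”; the options and their probabilities are invented purely for illustration:

```python
# A toy illustration of the iterate-and-simulate pattern: estimate each
# option's success rate by repeated sampling, then pick the best one.
# The options and probabilities here are invented examples.
import random

def simulate(success_prob: float, trials: int = 10_000) -> float:
    """Estimate how often an option succeeds across simulated trials."""
    successes = sum(random.random() < success_prob for _ in range(trials))
    return successes / trials

# Hypothetical options, each with an unknown "true" success rate.
options = {"plan_a": 0.42, "plan_b": 0.57, "plan_c": 0.31}

estimates = {name: simulate(p) for name, p in options.items()}
best = max(estimates, key=estimates.get)
print(f"Best option: {best} (estimated odds: {estimates[best]:.1%})")
```

What once required an analyst and a spreadsheet now takes seconds, and an AI assistant can both write this code and interpret its output for a user who could not.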

So, what can we do? A few things:

Regulate

The first path forward is an apolitical acknowledgement that regulation is a necessity. Although imperfect, ChatGPT, for example, does have safeguards in place to block requests like “how do I hurt the most people?” Careful, thoughtful policy around building such safeguards is critical, particularly in the short term, as workarounds for these protections currently exist.

Historically, there are examples of successfully reining in new tools and technologies. Dynamite was initially unregulated and used for malicious ends before lawmakers carefully crafted policy limiting its use. Cryptocurrency is a more modern example: the U.K. has passed a law making it easier for law enforcement to seize cryptocurrency linked to terrorism, and the U.S. is considering laws of its own. Regulating AI may prove more difficult, as variants will likely retreat to the darker corners of the web. Yet raising the bar to entry can meaningfully reduce the widespread use of AI for malign purposes.

Red-Team

The second avenue is red teaming (i.e., generating ideas from the perspective of an adversary) on a large, systematic, and scientific scale. Research indicates that malevolent creativity is largely driven by context. Simply put, many of us have the capacity to think in malevolently creative ways and will do so if the situation demands it. The implication is that we should put large teams to the task of thinking like malign actors and using the same tools, in the same ways, against those who seek to cause harm. If the broader homeland security enterprise is willing to think like the adversary, it is possible to out-create malign actors.

Reclaim

It is important to remember that the root problem is the set of factors inspiring hate and targeted violence, not the tools themselves. It will be incumbent upon those tasked with protecting the public from terrorism and targeted violence to lean into AI in their own efforts. Malign actors will have no qualms about asking AI how to harm others most effectively. Our national security frontline needs to be as adept with this new technology as the bad guys it is trying to stop.

Just as classroom teachers are exploring ChatGPT as a bulwark against plagiarism, the intelligence community and law enforcement need to expand their cyber education to this front. Our knowledge of AI will almost assuredly grow in the coming years, and new tools and technologies will emerge. Along with them, new issues will certainly present themselves. It is critical to be forward-thinking in our approach – embracing today’s technology to anticipate and mitigate tomorrow’s threats.
