21 April 2024

Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses

TODD C. HELMUS, BILVA CHANDRA

The advent of generative artificial intelligence (AI) and the growth of broader AI capabilities have raised many questions about the technology's impact on the body politic and on the role of information and trust in a functioning democracy. Even before the 2022 generative AI breakthrough, when OpenAI released ChatGPT and DALL-E, the United States and other nations suffered from what RAND researchers have referred to as truth decay, or the decline of the role of facts and analysis in public life.1 Generative AI, however, opens new avenues for both malicious use and unwitting misuse of information, with sweeping implications for election cycles, the empowerment of dangerous nonstate actors, and the spread of misinformation and disinformation that could undermine electoral processes.

The purpose of this paper is to provide policymakers and scholars with a brief, high-level review of the potential threats that generative AI might pose to a trustworthy information ecosystem.3 We offer a summary of policy initiatives that could mitigate these threats. We avoid offering specific recommendations for policy initiatives, but we summarize the strengths and limitations of the major proposals. We conducted this review by examining a variety of published resources on generative AI threats to the information space and papers that highlight potential policy options.4 For a more comprehensive take on policy initiatives, we encourage readers to seek additional sources.