Elina Treyger, Joseph Matveyenko, Lynsay Ayer
Reports of artificial intelligence–induced psychosis (AIP) suggest that large language models (LLMs) and future artificial general intelligence (AGI) systems might be capable of inducing or amplifying delusions or psychotic episodes in human users. To date, AIP has been discussed primarily as a public or mental health concern.
In this report, the authors examine the scope of this phenomenon and whether and how LLMs—and, eventually, AGI—could create significant national security threats. Can this capability be weaponized to induce psychosis at scale or in target groups? What kind of damage might that cause? The authors assess which targets might be most vulnerable, the potential scope of harm, and how adversaries might exploit this capability against key individuals, groups, or populations.