Matthew Allen
A research team linked to the University of Zurich covertly tested the ability of artificial intelligence (AI) to manipulate public opinion with misinformation on a subreddit.
For several months, the researchers stretched the ethical boundaries of observing social behaviour beyond breaking point. They used Large Language Models (LLMs) to invent opinions on a variety of subjects – from owning dangerous dogs to rising housing costs, the Middle East and diversity initiatives.
The AI bots, hiding behind fictitious personas, churned out debating points on the subreddit r/changemyview. Members of the group then argued for or against the AI-composed opinions, unaware they were part of a research project until the researchers came clean at its completion.
The revelation provoked a storm of criticism from Reddit users, the research community and the international media.
At first, the researchers, who have withheld their identities for fear of reprisals, defended their actions: the “high societal importance of this topic” made it “crucial to conduct a study of this kind, even if it meant disobeying the rules” of the subreddit, which include a ban on AI bots.
They later issued a “full and deeply felt apology”, saying that “the reactions of the community of disappointment and frustration have made us regret the discomfort that the study may have caused.”
‘Bad science is bad ethics’