17 July 2025

AI is polluting truth in journalism. Here’s how to disrupt the misinformation feedback loop.

Susan D’Agostino 

As a journalist, I’ve spent years reporting on artificial intelligence. I’ve traveled to four continents to interview headline-making AI luminaries and unsung researchers doing vital work, as well as ethicists, engineers, and everyday people who have been helped or harmed by these systems. Along the way, I’ve been a journalism fellow focused on AI at Columbia, Oxford’s Reuters Institute, and the Mila-Quebec AI Institute.

And still, I find myself unsettled. Headlines about AI in journalism swing between clickbait panic and sober alarm. They can feel speculative, even sci-fi—but also urgent and intimate:

“Your phone buzzes with a news alert. But what if AI wrote it—and it’s not true?” an editor at The Guardian wrote.

“It looked like a reliable news site. It was an AI chop shop,” two reporters at the New York Times wrote.

“News sites are getting crushed by Google’s new AI tools,” two reporters at the Wall Street Journal wrote.

Misinformation is hardly a modern invention, but with AI as an amplifier, it now spreads faster, adapts smarter, and arguably hits harder than before. This surge comes as independent journalism—the traditional counterweight to falsehood—faces economic decline, shrinking newsrooms, and eroding public trust.

Every time a falsehood is shared in outrage or belief, it signals demand, and the information marketplace may respond with even more invented nonsense. On the supply side, misinformation and disinformation—leveraged by bad actors to widen societal and political divides—have emerged as “the most severe global risk” in the years ahead, according to the 2024 World Economic Forum Global Risks Report. You may or may not agree with that assessment. But most can agree on this: AI is disrupting the world’s information ecosystem.
