Sarah Cook
In early August, two professors from Vanderbilt University published an essay outlining a trove of Chinese documents linked to the private firm GoLaxy. The documents revealed a sophisticated and troubling use of artificial intelligence (AI) not only to generate misleading content for target audiences – such as in Hong Kong and Taiwan – but also to extract information about U.S. lawmakers, creating profiles that might be used for future espionage or influence campaigns. The article received significant coverage, and rightfully so.
Yet, those findings represent only the tip of the iceberg in an emerging phenomenon. A series of reports, incidents, and takedowns over the summer – spanning OpenAI, Meta, and Graphika – shed further light on the latest uses of AI by China-linked actors focused on foreign propaganda and disinformation. Notably, generative AI tools are now employed not only for content production but also for operational purposes like data collection and drafting internal reports to the party-state apparatus. This evolution marks a new frontier in Beijing’s information warfare tactics, offering insights into what a more AI-dominated future could yield and why urgent attention is needed from social media platforms, software developers, and democratic governments.
A close review of these reports reveals five key dimensions:
1. Using AI for Content Generation
While prior China-linked disinformation campaigns had deployed AI tools to generate false personas or deepfakes, these latest disclosures point to a more concerted effort to leverage such tools to create entire fake news websites that distribute Beijing-aligned narratives simultaneously in multiple languages. Graphika’s “Falsos Amigos” report, published last month, identified a network of 11 fake websites, established between late December 2024 and March 2025, that used AI-generated pictures as logos or cover images to enhance credibility.