18 March 2023

Deepfakes and Disinformation Pose a Growing Threat in Asia

Dymples Leong

Recent news of a pro-China influence campaign's use of computer-generated avatars has once again turned a spotlight on deepfakes. The campaign was first observed by the social media research firm Graphika in late 2022. Videos of lifelike AI avatars, portraying news anchors on the fictional Wolf News outlet, promoted the interests of the Chinese Communist Party, commenting on topics such as gun violence in the United States and the importance of China-U.S. cooperation for the global economy's recovery from COVID-19.

Previous investigations by Graphika and other researchers had found AI-generated fake faces and out-of-context edited videos being used to mislead by fabricating an individual's words or actions. What distinguished this campaign from earlier AI-generated media was the involvement of a state-aligned information operation using deepfake technologies to generate entirely fake personas.

This case brings concerns about deepfakes to the forefront of public discussion and raises serious questions: What is the impact of deepfakes and disinformation, and what is the significance of the deepening commercialization of deepfake technology?

Deepfakes are generated using a form of artificial intelligence known as deep learning. Deep learning algorithms can swap one person's face in a video or image for another's. Deepfakes rose to public prominence via face-swapped photos of celebrities online, and deepfake videos of Tom Cruise on TikTok in 2021 captivated the public with a hyperrealistic persona of the actor.
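
For readers who want a concrete picture of how this swapping works, the sketch below shows the shared-encoder, dual-decoder autoencoder idea behind early open-source face-swap tools: one encoder learns a common "face space," and one decoder per identity reconstructs faces, so encoding person A and decoding with person B's decoder produces a swap. This is a minimal illustration, not any product's actual architecture; every layer size and name here is an assumption.

```python
# Minimal sketch of the classic face-swap autoencoder idea: a shared
# encoder learns a common latent "face space"; one decoder per identity
# reconstructs faces. Swapping = encode person A, decode with person
# B's decoder. All dimensions and layers are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training would reconstruct each person through the shared encoder;
# at inference, a face of A decoded with B's decoder yields the swap.
face_a = torch.rand(1, 3, 64, 64)        # stand-in for a real photo of A
swapped = decoder_b(encoder(face_a))     # A's pose/expression, B's identity
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

In practice these models are trained on thousands of aligned face crops per identity, which is what makes the resulting swaps look photorealistic.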

Deepfakes are now ubiquitous on social media, with AI features and applications enabling users to generate avatars of themselves or to create an entirely new persona. Generative Adversarial Networks (GANs) add the ability to create novel faces that appear nowhere in the algorithm's training data and correspond to no existing person. The popularity of such technologies has led to a boom in apps and features offering AI avatars. Advances in AI have made real and deepfake images ever harder to tell apart, further blurring the line between fact and fiction.
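
To illustrate the GAN mechanism in code, the toy sketch below pairs a generator, which maps random noise to an image, against a discriminator trained to tell real images from generated ones; as the two networks compete, the generator learns to produce faces that exist nowhere in the training data. All sizes, learning rates, and the tiny loop length are illustrative assumptions, not a production recipe.

```python
# Toy sketch of a Generative Adversarial Network (GAN): a generator
# maps random noise to images of no real person, while a discriminator
# learns to flag fakes. Sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

LATENT = 100  # size of the random noise vector

generator = nn.Sequential(          # noise -> flattened 64x64 RGB image
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> probability it is real
    nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a batch of real face photos, scaled to [-1, 1].
real_batch = torch.rand(16, 3 * 64 * 64) * 2 - 1

for step in range(3):  # a real run would loop over a large dataset
    # 1) Train the discriminator on real vs. generated images.
    fake = generator(torch.randn(16, LATENT)).detach()
    d_loss = loss(discriminator(real_batch), torch.ones(16, 1)) + \
             loss(discriminator(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(16, LATENT))
    g_loss = loss(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In real systems both networks are deep convolutional models trained for days on large face datasets, which is what yields the photorealistic "this person does not exist" faces used in the influence campaigns described below.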

This prevalence has led to concerns about privacy erosion and the potential for abuse and exploitation. Generative AI technology has been used to create deepfake pornography, which accounted for a whopping 96 percent of all deepfake videos online in 2019. Ordinary individuals have also been targeted by fake pornography campaigns, and services offering to alter photos of women into fake nude images have proliferated. One such instance involved the generative AI app Lensa, which was criticized for allowing its system to create fully nude images from users' profile headshots. Any woman can be a victim of this synthetic stripping and have her “nude” images shared across multiple platforms online.

The use of deepfake technology in influence operations and disinformation campaigns is not new, and multiple instances of coordinated inauthentic behavior involving deepfakes have been identified. In 2019, Facebook removed a coordinated inauthentic network of more than 900 accounts, pages, and groups, managed mainly out of Vietnam. The network was linked to the Epoch Media Group, a far-right U.S.-based media group known to engage in misinformation tactics. Most of the accounts used AI-generated fake profile photos to masquerade as Americans in Facebook groups, and hyperrealistic AI-generated photos of fake journalists and consultants were combined with false identities to place articles in conservative publications online.

The potential impacts of deepfakes have raised alarm bells. The Federal Bureau of Investigation (FBI) warned in its March 2021 threat assessment that malicious actors could leverage synthetic content for cyber and foreign influence operations. The assessment singled out influence operations using AI-generated synthetic profile images as a specific concern, since they potentially allow operators to mask their identities behind deepfake-generated personas while spreading disinformation online.

Deepfakes in Asia have already been used for political purposes. A notable instance was during the 2020 Delhi Legislative Assembly elections in India, when manipulated videos of Delhi Bharatiya Janata Party (BJP) President Manoj Tiwari were distributed across 5,700 WhatsApp groups in Delhi and the surrounding areas, reaching around 15 million people. AI-generated faces have also been used in coordinated influence campaigns allegedly originating from Asia; one instance involved a cluster of 14 inauthentic Twitter accounts using AI-generated faces as profile pictures to promote Chinese 5G capabilities in Belgium. Using AI-generated faces removes the need for pilfered profile pictures to disguise fake accounts, avoiding detection by traditional investigative techniques such as reverse image search.

In the past, pro-China influence campaigns have had limited success. In the Wolf News example, the AI-generated avatars were of low quality, the anchors' narration was jerky, and the English scripts were riddled with grammatical errors. The campaign's reach was also minimal, even though fake persona accounts were used to amplify the videos online. But however limited its impact, the campaign raises the issue of the increasing commercialization of deepfake services.

The rise of deepfakes has seen an increase in the number of companies producing advanced, high-quality deepfakes. Such companies' services are predominantly used for entertainment and training purposes, for instance creating customer-facing and human resources videos. But the availability of commercial deepfake-generation services provides readily accessible resources for influence operation actors, whether state-backed or not, raising concerns about the moderation of deepfake technology services.

The commercial application of synthetic media will continue to grow, given the popularity of such applications on social media and the wider internet. High-quality deepfakes made for disinformation or political purposes could appear more frequently in the future. Such commercialized deepfakes could become key tools in the disinformation toolbox of propagandists worldwide, as countries with advanced AI capabilities and access to large troves of data gain advantages in information warfare.

There could also be an increase in companies specializing in deepfake generation for hire, providing clients (state-backed or not) who wish to mount disinformation campaigns with a whole suite of tools. Deepfakes could be layered into propaganda campaigns to improve their effectiveness, potentially giving rise to a cottage industry of deepfake generation as a service.

The merger of GPT – a deep learning language model trained on large amounts of text – with deepfakes raises further concerns as technological advances outrun regulation. By combining multiple techniques, such tools can amplify the work of malign actors, producing even more convincing media artifacts for hostile purposes and further destabilizing societal trust.

When asked about the pro-China videos created with his company's services, the CEO of Synthesia emphasized the need for regulators to set more definitive rules on how AI tools can be used. The growing commercialization of synthetic media apps and services, combined with a lack of regulatory clarity on the use of AI tools, enables disinformation to proliferate online. Regulators have begun to set parameters for how deepfakes should be governed: the EU's Artificial Intelligence Act – dubbed the world's first AI law and likely to be passed in 2023 – would require creators to disclose deepfake content.

In Asia, China prohibits deepfakes deemed harmful to national security or society. New Chinese regulations require deepfakes to be clearly labelled as synthetically generated or edited, and the identities of individuals appearing in a deepfake must be disclosed to the company. Similar regulatory efforts are underway in the United States, United Kingdom, South Korea, and Taiwan.

A strong focus on public education could also help. People tend to overestimate their ability to distinguish authentic from inauthentic visual content and are generally poor at guarding against visual deception. Existing media literacy and fact-checking initiatives can continue to raise public awareness of both shallowfakes and deepfakes while underscoring the importance of visual literacy.

The public must be trained not only in media literacy skills but in a comprehensive information literacy skillset that emphasizes visual literacy. That can build and deepen public awareness and understanding of deepfakes, and public engagement can help inoculate people against the potential harms of advancing deepfake AI technology.
