
14 March 2023

China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?

Asha Hemrajani

On January 10, after a year-long public comment period, the Cyberspace Administration of China (CAC) rolled out new legislation to regulate providers of deepfake content. While certain Western state and national governments have already introduced legislation in this space, the new Chinese rules are far more comprehensive and are described by the CAC as a mechanism to preserve social stability. They specifically prohibit the production of deepfakes without user consent and require clear identification that content has been generated using artificial intelligence (AI).

Will this legislation be effective, and should other Asian countries follow suit?

What Are Deepfakes?

A deepfake is a piece of synthetic or manipulated media created using deep learning, typically with generative AI models. There are a variety of deepfake techniques, but the most commonly seen example is the deepfake video, in which one person's face is swapped with another's. These videos are typically made with autoencoders – paired encoder-decoder neural networks – trained to produce realistic-looking but fake content.
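To make the mechanics concrete, here is a minimal, illustrative PyTorch sketch of that architecture: one shared encoder and two identity-specific decoders. All layer sizes and names are assumptions for illustration, not the implementation of any particular tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder design behind
# classic face-swap deepfakes. Layer sizes are illustrative, not tuned.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, two identity-specific decoders. Each pair is trained to
# reconstruct its own identity; at inference time, encoding a frame of person A
# and decoding it with B's decoder renders B's face with A's pose and
# expression -- the swap.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # B's face, A's expression
```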

Several factors have fostered the development and uptake of deepfakes globally. The necessary technology – specifically, the AI algorithms and models, datasets, and computing power needed to create deepfakes – is readily available today. In addition, it is relatively simple to create deepfake videos using easily available apps and platforms that offer deepfake generation as a service. A quick survey of commercially available apps (free and paid) shows multiple ways to combine images or videos of known personalities with the creator's own content to produce media that, at first glance, appears genuine. Freelance experts are available to help create even higher-quality deepfakes that evade detection. The advent of 5G networks, which support larger bandwidths, has further eased the generation and dissemination of deepfakes.

Deepfake technology will only improve in quality and accessibility. This improvement is especially worrisome because deepfakes can be used for multiple nefarious or outright criminal purposes, such as social engineering, automated disinformation attacks, identity theft, financial fraud, scams and hoaxes, fake celebrity pornography, and election manipulation.

Deepfake Legislation in the U.S. and Europe

Several U.S. states have passed legislation banning deepfakes, but only in relation to elections or pornography. Deepfakes in other contexts are not covered – for example, the production of deepfakes for social engineering purposes is not currently banned.

The legislation also applies only within the states that passed it, not nationwide. If the creator of the deepfake content is outside the relevant state's jurisdiction, the legislation is inapplicable, leaving victims of pornographic deepfakes – who may have been humiliated or manipulated – with no opportunity for recourse.

In the United States, there have been voices of dissent, such as the Electronic Frontier Foundation (EFF), which is concerned about the implications of stricter deepfake legislation for freedom of speech and expression. The EFF argues that existing laws covering libel, fraud, and fake news should be sufficient to address abuses committed with the help of deepfakes.

Over in the European Union, the European Commission has proposed the AI Act, which includes deepfakes in its scope in an initial attempt to regulate them. The act has yet to be passed, partly because of apprehension about the proposed approach – in particular, the difficulty of defining deepfakes and deciding which aspects of them should be regulated. The AI Act is currently being discussed by the European Parliament and the Council of the European Union.

In fact, legislation covering deepfakes is far from widely adopted, as many countries are still grappling with how best to regulate them, if at all.

Regulation of Deep Synthesis Technology in China

China’s new regulations, called Deep Synthesis Provisions, govern deep synthesis (or deepfake) technology and services, including text, images, audio, and video produced using AI-based models. These new regulations are hardly surprising given China’s long history of attempting to retain strict control over the internet.

The CAC describes the regulations as necessary because deep synthesis technology “has been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honor, and to counterfeit others’ identities.” The CAC goes on to explain that “committing fraud, etc., affects the order of communication and social order, damages the legitimate rights and interests of the people, and endangers national security and social stability.”

Two categories of entities must abide by the provisions: the platform providers that offer content generation services and the end-users of those services. Under the new regulations, any content created using an AI system must be clearly labeled with a watermark, i.e., text or an image visually superimposed on the video indicating that the content has been edited. Content generation service providers must also undertake not to unlawfully process personal information and must comply with other rules, such as the evaluation and verification of the AI algorithms deployed, the authentication of users (so that the creators of videos can be verified), and the establishment of feedback mechanisms for content consumers.
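The labeling obligation itself is technically straightforward. As a rough illustration, the sketch below overlays a visible notice on an image using the Pillow library; the notice text, banner, and placement are illustrative assumptions, since the provisions mandate a clear label without prescribing an exact format.

```python
# Minimal sketch of a visible "AI-generated" label of the kind the provisions
# require. The notice text, banner, and placement are illustrative
# assumptions; the regulations mandate a clear label but do not prescribe
# this exact format.
from PIL import Image, ImageDraw

def label_ai_content(path_in: str, path_out: str,
                     notice: str = "AI-generated content") -> None:
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Solid banner along the bottom edge so the notice stays legible
    # regardless of the underlying image.
    banner_h = max(24, img.height // 20)
    draw.rectangle([(0, img.height - banner_h), (img.width, img.height)],
                   fill=(0, 0, 0))
    draw.text((10, img.height - banner_h + 4), notice, fill=(255, 255, 255))
    img.save(path_out)

# Hypothetical file names, for illustration only.
label_ai_content("generated_frame.png", "generated_frame_labeled.png")
```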

Should Other Countries in Asia Consider Similar Regulations?

Is current legislation in Asian countries sufficient to tackle the potential threats from AI-created fake content? Singapore, for instance, already has the Protection from Online Falsehoods and Manipulation Act (POFMA), whose primary function is to counter false statements of fact communicated in Singapore via the internet. This legislation would deter the use of deepfake videos in election manipulation, for instance. Several Asian countries already have personal data protection legislation, such as Singapore’s Personal Data Protection Act (PDPA), which governs the collection, use, and disclosure of personal data in order to prevent its misuse.

There are also criminal laws against the impersonation of public servants and cheating by personation. These laws can help to deter the use of deepfakes by criminals intending to commit fraud, for instance. Similarly, China's Cybersecurity Law and the Detailed Implementation Rules for the Counter-espionage Law contain provisions to combat the publication or dissemination of deepfakes.

These pieces of legislation cannot, however, counter the production of manipulated content; they only kick in once the deepfake has been distributed on the internet.

Despite the misgivings about deepfake legislation being potentially misused to hamper freedom of speech, there seems to be a genuine need to identify and label deep-learning-generated content as such. This would ensure that consumers of such content are made aware that it has been artificially created, rather than being fooled into believing it is real. It would allow for the safe use of deepfakes in comedy and satire, for instance, while hopefully avoiding fraud, rumor-mongering, and disinformation.

In light of this, what are some possible alternatives and options?

The most obvious but harshest option would be to ban deepfakes entirely. However, that would be neither feasible nor wise. Countries like Singapore have porous data borders, so deepfakes could flow in freely from abroad, making such a ban difficult to enforce. In addition, there are multiple use cases globally (including in China) where deepfakes can be put to positive use (such as helping the physically challenged), and banning them outright would foreclose numerous opportunities.

Another option is the deployment of deepfake detectors. Intel, for instance, has created a detector, FakeCatcher, which it claims can analyze videos to determine whether they are genuine or fake.

As social media platforms are the most common way to spread deepfake content, the responsibility could be shifted to social media platforms to deploy detectors like Intel’s to identify deepfakes and mark them as such. However, the challenge is that individual social media platforms are likely to use their own standards for detecting, naming, and categorizing deepfakes, which could result in inconsistency and confusion.
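As a rough sketch of what such platform-side screening could look like, the snippet below wraps a hypothetical frame-level classifier (the `score_frame` callable is a stand-in, since FakeCatcher itself is proprietary) and attaches a standardized label once the average score crosses a threshold. The field names and the 0.5 threshold are illustrative assumptions; agreeing on a shared label schema of this kind across platforms is precisely what would avoid the inconsistency described above.

```python
# Generic sketch of platform-side deepfake screening. `score_frame` stands in
# for any frame-level detector that returns a "fake" probability in [0, 1];
# all names and the threshold are illustrative assumptions.
from typing import Callable, Iterable

def flag_video(frames: Iterable, score_frame: Callable[[object], float],
               threshold: float = 0.5) -> dict:
    """Score every frame with the supplied detector and attach a label."""
    scores = [score_frame(f) for f in frames]
    mean = sum(scores) / max(len(scores), 1)
    flagged = mean >= threshold
    return {
        "suspected_deepfake": flagged,
        "confidence": mean,
        # A label schema shared across platforms would avoid the
        # inconsistent naming problem described above.
        "label": "manipulated-media" if flagged else "unflagged",
    }
```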

In addition, researchers have shown that these detectors can sometimes be deceived. One study illustrated how detectors could be defeated by inserting adversarial inputs into every video frame. These adversarial examples “are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake.”
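One common way to craft such adversarial inputs is the Fast Gradient Sign Method (FGSM). The sketch below applies it to a single frame against a hypothetical differentiable detector; it illustrates the general technique, not the cited study's exact attack.

```python
# FGSM sketch: nudge each pixel by at most epsilon in the direction that
# lowers the detector's "fake" score. `model` is a stand-in for any
# differentiable detector; epsilon is an illustrative assumption.
import torch

def fgsm_frame(model: torch.nn.Module, frame: torch.Tensor,
               epsilon: float = 2 / 255) -> torch.Tensor:
    frame = frame.detach().clone().requires_grad_(True)
    score = model(frame)                    # detector's "fake" score
    score.backward(torch.ones_like(score))  # gradient of score w.r.t. pixels
    # Step *against* the gradient so the "fake" score drops, then keep the
    # result a valid image.
    adv = frame - epsilon * frame.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```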

The shortcomings of all the above approaches are evident. Hence, to tackle a potential flood of deepfake videos, with its possibly disastrous consequences for society, a broad-based yet cohesive approach will need to be developed. It would be useful to monitor the actual implementation and effectiveness of China's new legislation and the developments around the proposed EU AI Act. A more collaborative deepfake monitoring framework among social media platforms and government agencies could also be fostered. Whatever legislation is enacted, developing public awareness of the dangers of deepfakes should begin soon.

Given that AI technologies will continue to evolve at a lightning pace, policy will always be playing catch-up. Asian countries could consider starting with a scalable policy encompassing both social and technology-based solutions, with a view to evolving toward legislation that transcends the technology and focuses on the big picture of AI-generated content, both positive and negative.
