19 April 2023

Picking the Rose, Leaving the Thorn: Why China’s AI Regulations Are Worth Careful Examination

Johanna Costigan

China’s artificial intelligence (AI) regulations reflect Xi Jinping’s attempt at authoritarian and patriarchal social control — but also offer meaningful protections against a precarious technology. As the U.S.-China “tech war” rages on, the United States should formulate its own AI laws. Otherwise, it risks ceding some regulatory leadership to the People’s Republic of China (PRC).

Executive Summary

The Chinese government in recent years has proven adept at formulating and implementing AI regulations — which the United States notably has not. AI regulations should be integrated into U.S. thinking about technology competition in the 21st century.

This paper examines two of China’s pioneering regulatory documents: one that addresses deepfakes (the Provisions on the Administration of Deep Synthesis Internet Information Services) and one that addresses algorithmic management (the Provisions on the Management of Algorithmic Recommendations in Internet Information Services). Both invoke adherence to the “correct political direction” as devised by the Chinese Communist Party (CCP) leadership — but both also outline prudent and enforceable AI regulations.

Among Xi Jinping’s many troublingly repressive tendencies is his attempt to achieve patriarchal control over the moral development of Chinese citizens — and, more specifically, China’s technology companies and the people who run them. China has many practical reasons to regulate AI. But one reason the government has implemented the precise language present in its regulations — centered on ensuring that AI enhances, not threatens, CCP rule — is Xi Jinping’s obsession with making sure no alternate center of power develops, in part by ensuring that a culture of misogyny thrives.

However, even though China’s laws are embedded in an authoritarian and patriarchal governance structure, many of the provisions, such as preventing the circulation of deepfakes, are not inherently authoritarian. Aspects of China’s regulations could therefore potentially be tested and applied in liberal democratic societies.

So far, there is little evidence that the U.S. government has a strong appetite for developing concrete AI regulations. Among U.S. firms, the rush to compete (the “AI arms race”) is likely to further stifle already stalled efforts to agree on AI ethics and standards.

The Chinese government, free from the burden of compromise, can more swiftly and secretly resolve its internal differences and regulate AI in ways that reflect Xi Jinping’s conception of right and wrong.

This paper argues that as the United States engages in an accelerating technology competition with China, it should compete just as vigorously on effectively regulating AI.

No Longer Hypothetical: The Growing Global Stakes of AI Regulation

In a testament to the ongoing rapid and unregulated development of AI, a group of more than 1,000 businesspeople, researchers, writers, and others late last month released an open letter calling for a “pause” in work on AI models more advanced than GPT-4. Given the time it will take to determine and agree on AI development protocols, they may have been bringing a Band-Aid to open-heart surgery.

More egregiously, per a statement in response drafted by Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell, the letter succumbed to “fearmongering and AI hype,” which strategically amplify the opaqueness of the technology to create the illusion that continued AI development is inevitable. Their statement places the debate in a currently unfolding reality rather than a strategically nebulous depiction of an AI-ruled future: “The current race towards ever larger ‘AI experiments’ is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.”

Regulating applications of AI will become a responsibility of all governments that do not want to forfeit the governance of AI to private interests. Current and future generations of AI can be used for a range of nefarious purposes, like stealing and manipulating someone’s characteristics (such as voice and appearance) to create content that is at best deceptive and at worst hateful, pornographic, or any combination thereof. Generative AI, which uses neural networks, loosely modeled on the human brain, to produce new text, images, audio, and video, can be exploited by government actors, blackmailers, and bad actors of every ilk.

AI’s capacity for harm is already apparent from the documented racism embedded in facial recognition systems used by U.S. law enforcement and from the far more systematic use of the same technology in China to track and repress Uyghurs in Xinjiang, where “citizens and visitors alike must run a daily gantlet of police checkpoints, surveillance cameras and machines scanning their ID cards, faces, eyeballs and sometimes entire bodies.” Even if corporate interests had sufficient motivation to conduct full-scale ethical reviews, consistent enforcement might be impossible; tech executives themselves are uncertain about how — and how quickly — AI will develop. Governments around the world are struggling to define, let alone enforce, AI ethics.

The European Union (EU) has done the best job so far at drafting and implementing a comprehensive strategy that blends strong regulatory enforcement with values like freedom of speech and attempted freedom from surveillance. In 2018, the EU released the first version of its AI Strategy, designed to complement the General Data Protection Regulation (GDPR). It was followed in 2021 by the Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence, which frames itself as consistent with the EU’s previous actions on AI. It purports to offer a “balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to [it].” The European Parliament is currently debating the most recent addition to this strategy, the AI Act. This legislation bans certain AI applications outright, deeming them “unacceptable,” and assigns other applications risk levels that carry corresponding regulatory requirements and punishments. Areas in which the use of AI is considered “high risk” include product safety, law enforcement, and migration. The law is backed with real penalties: companies can face non-compliance fines as high as €30 million.
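To make the tiered architecture concrete, the structure described above can be encoded as a simple lookup, as in the illustrative Python sketch below. The tier names follow the proposal as characterized here; the obligation summaries and the function itself are paraphrases of this paper’s description, not legal text.

```python
# An illustrative (not authoritative) encoding of the AI Act's tiered
# structure: each application maps to a risk level, and each level
# carries corresponding obligations.

RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "strict requirements before deployment (e.g., product safety, "
            "law enforcement, migration uses)",
    "limited": "transparency duties (e.g., disclose that users face an AI)",
    "minimal": "no additional obligations",
}

MAX_FINE_EUR = 30_000_000  # the non-compliance ceiling cited in the text


def obligations(risk_level: str) -> str:
    """Look up the regulatory consequence attached to a risk tier."""
    return RISK_TIERS[risk_level]


print(obligations("high"))
```

The design point is that obligations attach to the tier, not to the individual product, which is what makes the framework “horizontal” across sectors.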

The U.S. approach has been far more centered on investment than regulation. According to the Biden administration’s National Cybersecurity Strategy (NCS), released in March 2023, “public and private investments in cybersecurity have long trailed the threats and challenges we face.” Many of those threats are real, while others are phantoms emanating from U.S. insecurity that it might lose its technological edge to China. In a recent column, Ezra Klein summarizes the competitive mood among some in the AI industry: “To suggest we go slower, or even stop entirely, has come to seem childish.” This AI competition mindset is fueled both by a desire to beat other American companies and by the vague but values-laden suggestion that America must beat China. But in 2023, regulations on the use of AI — not just research on and funding for it — would not be premature; they would be late.

The few U.S. lawmakers who publicly echo that sentiment are also highly aware of the systemic barriers they would face in achieving substantive legislation. In an op-ed, Representative Ted Lieu (D-CA) acknowledges that “it would be virtually impossible for Congress to pass individual laws to regulate each specific use of AI.” He adds that AI regulation merits “a dedicated agency” — even as he is working on enacting legislation to regulate law enforcement’s use of the technology. Lieu cites the creation of the Food and Drug Administration (FDA) as precedent for setting up this kind of specialized agency that would regulate a complicated area with obvious public safety implications.

But money is a simpler, easier, and more familiar fix than legislation or agency creation, making it a favored element in the NCS. That document asserts that we are preparing for “revolutionary changes in our technology landscape brought by artificial intelligence and quantum computing.” Its solution is “strategic public investments” that are not, presumably, predicated on any specific legal or ethical framework.

Even when U.S. lawmakers recognize that developing AI ethics is necessary, the framing is typically funneled through the safe narrative of competition and the comfortable solution of more unregulated investment. According to a press release the Congressional AI Caucus (of which Rep. Lieu is a member) issued last year, “If we want to compete on the world stage, we have to be competitive across the United States first.” Competition-induced investment is later directly linked to the ethical development of AI, albeit without a persuasive connection; according to the vice chairs of the caucus, “Investing in R&D and education is also a key way to ensure that AI develops in a safe, responsible, and ethical way that benefits society.” It is unclear how further, unspecified investment in research and development will produce an ethical framework for the technology’s development.

China, in contrast, has centered its development push on regulating as much as on manufacturing, controlling, and processing. As Dan Wang recently wrote, “too much of U.S. policy — including this legislation [the CHIPS Act and the Inflation Reduction Act] — is focused on pushing forward the scientific frontier rather than on building the process knowledge and industrial ecosystems needed to bring products to market.” Process knowledge refers to the experiential knowledge gained through direct practice. In China, assembly workers have it; regulators have it; distributors have it. The United States has come to rely more on “soft” skills in tech, like design, marketing, and investment. A technology like AI requires its own kind of strategic containment — a clear framework within which to develop.

AI Regulation under Xi Dada (习大大)

According to a 2015 article in a Communist Youth League paper, college students ranked “Papa Xi” highest among a group of public figures, citing their admiration for the “aggression behind his anti-corruption campaign” (铁腕反腐的霸气) and his helpful advice that young people should not stay up late. Notably, he outranked Jack Ma, another public figure mentioned in the same article. Technology — which presents opportunities for tech CEOs to gain threatening prominence and for individuals to find modes of expression and information the CCP has an interest in burying — poses an especially tricky challenge for a father figure like Xi.

Under the leadership of the CCP, regulators have taken concrete, if highly imperfect, steps to protect citizens from the harmful applications of generative AI — amid a global preoccupation with how we might limit, expand, or guide AI applications. These include large language models (LLMs) like ChatGPT, which attracted more than 100 million active users within two months of its launch. China’s regulations on deepfakes and algorithm recommendation services make the country an exemplar for harm-reducing laws — with a strong caveat: the same regulations fit seamlessly into the CCP’s moralistic and patriarchal policy agenda.

“Traditional” gender roles have received a “discursive boost” over the past decade of Xi Jinping’s rule. Two recent laws exemplify his aims to regulate AI while simultaneously empowering himself and the patriarchal system he leads. The laws limit personal expression and information access in service of the Party-state’s interests — and institutionalize the management of individual and corporate use and misuse of AI with a degree of technical and administrative specificity. Precisely how Chinese authorities will apply these laws is unknowable by design, but the letter of the law is worth consideration and, in this case, limited appreciation. The United States is still the world leader in nearly every area of technological development, save any checks or regulations on said technologies. The PRC’s protections against deepfakes and addictive algorithms are far from perfect, but until the United States crafts its own preferable versions of them, critiques of China’s AI regulations are easy to discount. They are worth a closer look.

“Deep Synthesis”

The Provisions on the Administration of Deep Synthesis Internet Information Services (translated by China Law Translate; original available here), which came into effect on January 10, 2023, represents the world’s first effort by a major regulatory agency to place a check on deep synthesis. What the law calls “deep synthesis technology” describes deepfakes, which use artificial intelligence capabilities to alter existing audio or visual content, but it also includes more general uses: the provisions define deep synthesis as “the use of technologies such as deep learning and virtual reality, that use generative sequencing algorithms to create text, images, audio, video, virtual scenes, or other information.”

The platforms and companies subject to the provisions are referred to as “deep synthesis service providers.” Article 1 states the function of the provisions is not only to strengthen the management of these technologies but also to “carry forward the Core Socialist values” (discussed in depth in a previous Asia Society Policy Institute paper), which are Xi-era social, moral, and legal guidelines for Chinese citizens. It later states the provisions serve to “preserve national security and the societal public interest, and protect the lawful rights and interests of citizens, legal persons, and other organizations.” Of these reasons, the invocation of the core socialist values is the most incendiary to Americans because the provisions serve to further the state’s control over individual and collective morality. The other stated concerns the provisions seek to address (especially national security), however, would not seem out of place in U.S. legislation.

Under the provisions, deep synthesis services must “comply with laws and regulations” but also “respect social mores and ethics and adhere to the correct political direction.” The provisions stipulate that deep synthesis services must not “harm the image of the nation,” but they also require that providers of deepfake technologies “shall implement primary responsibility for information security, establishing and completing management systems such as for user registration, review of algorithm mechanisms, [and] scientific ethics reviews.” These requirements place the onus of user safety squarely on the companies. Regulators can then decide which companies have violated these statutes and when, rendering a legal pathway for selective enforcement.

The law carries repeated mentions of identifying “illegal and negative” content. Illegal content should be flagged by the relevant regulators. Determining what qualifies as “negative” content, on the other hand, relies on a value judgment that can only logically be made by the officials placed along the hierarchy of the Party-state system.

The deep synthesis provisions also directly address a CCP priority that categorically seeks to protect its legitimacy rather than the public interest: dispelling what authorities determine to be “rumors.” Established in 2018, the China Internet Joint Rumor Refutation Platform (中国互联网辟谣平台) falls under the Cyberspace Administration of China’s (CAC’s) Illegal and Bad Information Reporting Center and is administered by Xinhua.net. Article 11 of the deep synthesis provisions echoes this goal, formalized years before the deepfake regulations were on the table, evidencing the law’s partial motivation as a mechanism of further speech control: “Providers shall establish and complete mechanisms for dispelling rumors, and where it is discovered that deep synthesis information services were used to produce, reproduce, publish, or transmit false information, they shall promptly employ measures to dispel the rumors, store related records, and make a report to the internet information departments.” Removing “rumors,” which the Party deems to include accounts of past events, historical and recent, that fail to comport with the government’s official versions, is conveniently embedded in the law alongside other legitimate protections.

These indisputably paternalistic inputs contrast with requirements that deep synthesis service providers “set up convenient portals for user appeals, public complaints, and reports.” While the outcomes of such complaints are impossible to anticipate, portals like these could become useful channels for airing public concerns. The regulations also take the step of mandating individual notification in cases where one’s likeness has been used: “Where deep synthesis service providers and technical supports provide functions for editing biometric information such as faces and voices, they shall prompt the users of the deep synthesis service to notify the individuals whose personal information is being edited and obtain their independent consent in accordance with law.” Nearly everyone would likely appreciate being notified when their likeness has been captured and manipulated. In addition, “conspicuous labels” indicating that content was generated through deep synthesis are required for services spanning “smart” dialogue, writing, speech and face generation, and manipulation of faces and gestures.
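As a concrete illustration, a provider’s compliance logic for these two duties — consent before biometric editing, a conspicuous label on output — might look something like the minimal Python sketch below. The provisions prescribe no implementation; every name here (EditRequest, ConsentRegistry, process_edit, and so on) is hypothetical.

```python
# A minimal sketch, not drawn from the provisions themselves, of how a
# "deep synthesis service provider" might gate a biometric-editing
# request behind the consent and labeling duties described above.

from dataclasses import dataclass, field


@dataclass
class EditRequest:
    """A request to edit someone's biometric likeness (face, voice)."""
    requester_id: str
    subject_id: str  # the person whose likeness is being edited
    media: bytes


@dataclass
class ConsentRegistry:
    """Stores the 'independent consent' the provisions require."""
    _consents: set = field(default_factory=set)

    def record(self, subject_id: str, requester_id: str) -> None:
        self._consents.add((subject_id, requester_id))

    def has_consent(self, subject_id: str, requester_id: str) -> bool:
        return (subject_id, requester_id) in self._consents


# Conspicuous label stamped on synthetically generated or altered output.
LABEL = b"[AI-GENERATED: produced with deep synthesis technology]\n"


def run_deep_synthesis(media: bytes) -> bytes:
    """Stand-in for the actual generative model; returns input unchanged."""
    return media


def process_edit(request: EditRequest, registry: ConsentRegistry) -> bytes:
    # Consent check: refuse to edit biometric information unless the
    # edited party's consent is on record.
    if not registry.has_consent(request.subject_id, request.requester_id):
        raise PermissionError(
            "No recorded consent from the edited individual; prompt the "
            "user to notify them and obtain consent before proceeding."
        )
    # Labeling duty: stamp the output so viewers can tell it was synthesized.
    return LABEL + run_deep_synthesis(request.media)
```

A real provider would couple such a gate to identity verification and record retention; the point of the sketch is only that these duties translate readily into enforceable checks at the point of service.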

“Algorithmic Recommendations”

The Provisions on the Management of Algorithmic Recommendations in Internet Information Services (translated by China Law Translate; original available here), which came into effect on March 1, 2022, regulates “the use of algorithm technologies such as generation and synthesis, individualized pushing, sequence refinement, search filtering, [and] schedule decision-making.”

Platforms like Douyin (the Chinese version of TikTok) rely on their algorithmic recommendation functions to attract and, more dangerously, addict users. In China, regulators’ attempts to rein them in are at times directly paternalistic in their focus on children’s upbringing. Article 18, which addresses minors’ interactions with algorithms, has a promising start: “Algorithmic recommendation services must not push information to minors that might impact minors' physical and psychological health.” However, save the effort to prevent minors’ “addiction” to the internet, the examples it invokes betray regulators’ loyalty to the CCP’s monopoly on morality: “such as possibly leading them to imitate unsafe behaviors and conduct contrary to social mores or inducing negative habits.”
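A minimal Python sketch, under assumed data structures, of the kind of filter Article 18 implies: a feed served to a registered minor excludes flagged content and stops pushing once a daily usage cap is reached. The field names, flag labels, and cap value are all hypothetical, not drawn from the provisions.

```python
# Hypothetical compliance filter for minors' feeds, loosely modeled on
# Article 18's requirements as quoted above.

from dataclasses import dataclass

DAILY_CAP_MINUTES = 40  # assumed anti-addiction threshold
BLOCKED_FLAGS = {"unsafe_imitation", "contrary_to_social_mores"}


@dataclass
class User:
    user_id: str
    is_minor: bool
    daily_minutes: int  # time already spent on the feed today


@dataclass
class Item:
    item_id: str
    flags: frozenset  # moderation labels attached upstream


def recommend(user: User, candidates: list[Item]) -> list[Item]:
    """Return the feed a compliant recommender could serve this user."""
    if not user.is_minor:
        return candidates
    if user.daily_minutes >= DAILY_CAP_MINUTES:
        return []  # anti-addiction: stop pushing content entirely
    # Exclude anything carrying a flag the regulation would prohibit.
    return [item for item in candidates if not item.flags & BLOCKED_FLAGS]
```

The sketch makes the paternalism legible: everything turns on who attaches the flags upstream — which, under the provisions, is ultimately the Party-state.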

The provisions also work in direct service of the CCP’s attempts at social control: “Algorithmic recommendation service providers shall persist in being oriented towards mainstream values, optimize mechanisms for algorithmic recommendation services, [and] actively transmit positive energy.” The term “positive energy” is one of Xi’s favorite euphemisms for censorship; its profile was raised in 2013 when he used it in a speech at the Central Forum on Arts and Literature. Service providers must “endanger” neither “national security” nor the “societal public interest” (presumably, as defined by the CCP). Algorithmic recommendation providers should also not “disrupt economic and social order,” a similarly opaque prohibition that gives authorities de facto carte blanche — a system that exhibits adherence to rule by, not of, law.

In the project to “regulate internet information services’ algorithmic recommendation activities,” companies are again told to “carry forward the Core Socialist Values.” The provisions identify the departments (such as telecommunications, public security, and market regulation) that are to warn entities of their violations; where violations are severe or go unrectified, the relevant departments are to suspend services or dole out fines, establishing accountability structures.

Article 14 bars providers from using algorithms to register fake accounts or “manipulate” other users. This introduces an awkward hypocrisy given the Chinese government’s own use of bots to “flood” platforms such that information about subjects the authorities would rather squelch, such as protests, is swiftly buried. The regulations place some clear and useful checks on tech companies that use algorithmic recommendation services, but they do nothing to curtail the Party-state’s repressive incentives.

Xi Jinping’s Techno-Patriarchy

Xi Jinping’s common prosperity campaign took a distressingly discriminatory turn when Chinese regulators in September 2021 banned “sissy men” from appearing on television. The campaign, touted as part of the effort to reduce income inequality and prevent the “disorderly expansion of capital” (a goal announced after the 2020 Central Economic Work Conference), gained an additional emphasis: patriarchal cultural engineering. Later that year, regulations were placed on online fan groups, based on the calculation that fan communities encouraged “extravagant pleasure” and threatened China’s “mainstream values.” Here, “mainstream” can be understood as code for “manly.”

These measures were taken alongside more targeted industry regulations (the tech “crackdown” or “rectification”) that sought to kneecap rich tycoons, whose lifestyles were far more convincingly extravagant than those of, for example, members of an online fan group for boy bands. The most infamous example of apparent regulatory revenge occurred when Ant Financial’s initial public offering (IPO) was blocked after Alibaba co-founder Jack Ma gave a speech that exceeded the bounds of acceptable critique. This was followed by the derailing of a second public listing — Didi’s — and the company’s subjection to a first-of-its-kind, extensive cybersecurity review. Education technology and the online tutoring market it fosters became another target of the campaign: after Xi called academic training services a “social problem,” regulators swiftly chipped away at what were deemed excessive and unequal private tutoring practices among China’s better-off families. In another attempt at unsolicited parenting, the government banned minors from playing online video games for more than one hour a day, and only on Fridays, weekends, and holidays.

Distinctly empowered by the CCP’s authoritarianism, these mandates exceed the bounds of purely protective regulation. China’s AI regulations are designed to allow the CCP (with Xi “at its core”) not only to influence China’s cultural landscape but also to curate it. Clyde Yicheng Wang and Zifeng Chen’s paper “From ‘Motherland’ to ‘Daddy State’” traces how nationalism and patriarchal rule have become mutually reinforcing in post-reform China. The state’s propagation of patriarchal and hierarchical values “personifies the party-state as a dutiful patriarch and presents its integrated control as a caring act of spiritual guidance.”

Reform-era capitalist norms, Wang and Chen argue, generated newly gendered modes of nationalist expression. With the mass movements of the Mao era gone, and with the post-Tiananmen clarity that political participation would not be tolerated, “the self-identification of Chinese people had nowhere to turn other than the freshly burgeoning market and newly emerged economic stratifications.” Defining success within these new social norms brought gendered implications: “masculinity is the righteous way to achieve success, justifying the subordination of the rest of society beneath the manly big money,” Wang and Chen write. The CCP had to find a way to remain the authority figure even as private companies captured the ambitions and imaginations of Chinese youth. The Party’s monopoly on official morality proved useful in this effort.

As Chinese society exited the ultra-doctrinal Mao era, money started to matter again. Those without it became disenfranchised. Wang and Chen describe the pervasive sense that rich people could “do whatever they want” (为所欲为), a mindset that became highly evident in the tech industry. On the same day in 2014 that Jack Ma was ranked the wealthiest person in Asia, he put out his first Weibo post, which received about 8,000 comments. Thousands of them called him “daddy” (爸爸 baba). According to Wang and Chen, “[Jack] Ma, as the biggest supplier for Chinese consumers, is to some extent the sugar daddy of the nation.”

Perhaps it was the idolization of Jack Ma among China’s youth that partly drove Xi Jinping’s decision to punish him by blocking his IPO; there is room for but one national-level sugar daddy in China — and that is Xi. The tech crackdown that followed Ant Financial’s blocked IPO (which would have raised an estimated $34 billion) can be understood in part as a continuation of the same impulse: Xi will do everything he can to ensure no media depictions of men will glorify alternative forms of masculinity in Chinese society — and no CEO will threaten his status as master patriarch. (Neither, for that matter, will any woman. Last year’s 20th Party Congress broke tradition by not promoting a single woman to the top Party body.) China’s tech industry, meanwhile, is male-dominated. Promotion criteria, including reaching the status of tech “guru” (大佬) — a highly gendered term also translated as “godfather” — represent masculinized norms of achievement and competition throughout the field.

Recent policy changes suggest a move away from sweeping industry-targeted crackdowns — but not an abandonment of the controlling compulsions that inspired them. Bank regulator Guo Shuqing’s January 2023 announcement that China’s technology rectification was “basically over” was interpreted by markets and industry as a step in the right direction. The AI regulations that outlast campaign-style crackdowns, meanwhile, can be characterized in part as indicators of effective governance — but they are not free from the moralistic constraints of an authoritarian regime that has betrayed its fixation with gender norms and expression.

In a speech to the National People’s Congress (NPC), Huang Yiping, deputy dean of the National School of Development at Peking University, suggested that the leadership abandon “campaign-style rectification” and drop the “disorderly expansion of capital” line in favor of more pointed instructions, like telling tech companies to avoid “intervening in politics and influencing ideology.” Narrowing the scope, he argued, will help “avoid the appearance of enlarged interpretations during policy implementation.” Platform companies likely applied their own “enlarged interpretations” in an effort to immunize themselves against possible accusations of noncompliance in the context of China’s sometimes nebulous and retaliatory legal structure.

Ramping Up Regulation

The Chinese government makes high-level investments in AI alongside (not despite) its pioneering regulation of the technology. In contrast, despite its willingness to get involved in moralistic campaigns at home and abroad, the United States has not displayed comparable willingness or ability to introduce responsible regulation for a technology that is already, not hypothetically, affecting the way we imagine the future — if not necessarily determining how it will unfold. All arms of the federal government could be doing more coordinated work to set the standards for a technology with nearly unmatched capacity to change living and working conditions around the world. If they do not, the CCP’s inherently compromised moral authority, exhibited in part by its pervasively patriarchal impulses, might play a leading global role in determining AI’s ethical standards.

Somewhat ironically, losing that battle is antithetical to the competitive interests outlined in U.S. strategy documents. Outperforming adversaries is a clear motivation for U.S. investment in developing technologies, as is directly articulated in the National Cybersecurity Strategy, the National Security Strategy, and the National Defense Strategy.

On a more fundamental, if diffuse, level, winning is among the most deeply rooted impulses in the American psyche. If America’s obsession with beating China prevents it from regulating AI in the first place, China’s obsession with total control is what drives the faults in its regulations. China’s leaders care about competition with the United States. But most cadres’ promotion and incentive structures are tied more to ensuring domestic control than to tit-for-tat wins over the other superpower.

To the United States, China’s success represents the first challenge — one that some lawmakers are dangerously and erroneously deeming “existential” — to uncontested U.S. supremacy, technological and otherwise. Anyone who has everything to lose tends to overreact or, in this case, indiscriminately invest and under-regulate to keep the top spot. China will not wait while the United States weighs its options on how or if to regulate artificial intelligence. It will continue to set precedents, standards, and frameworks for AI governance that contain both strong examples of necessary protections and patriarchal overreach unsuitable and unacceptable for a liberal democracy.

Regulations that defend a clearly defined ethical framework and protect human safety should be considered fundamental to America’s economic and technological development. Leaders in Washington, D.C., should not let AI’s technical complexity, and the obscurity that comes with it, shield the technology from enforceable regulations that prioritize the mental and physical health of humans. In the process of condemning the overt authoritarianism present in China’s tech governance, U.S. lawmakers should not throw the good governance baby out with the patriarchal bathwater. Given the global urgency of inventing metrics and ethics with which to assess artificial intelligence, stakeholders should acknowledge the Chinese government as a trailblazer in AI regulation. Selective lessons might be learned from it. As the U.S.-China tech war continues to escalate, the race to regulate should catch up with the race to innovate.
