19 July 2023

Deep Fakes and National Security

DANIEL PEREIRA

“In 2024, one billion people around the world will go to the polls for national elections. From the US presidential election in 2024 to the war in Ukraine, we’re entering the era of deepfake geopolitics, where experts are concerned about the impact on elections and public perception of the truth.

The Geopolitics of Deepfakes (Project Liberty)

Project Liberty, in a 2023 e-mail newsletter, explored “deepfakes and state-sponsored disinformation campaigns, what it means for the future of geopolitical conflict, and what we can do about it.”

Deception for geopolitical gain has been around since the Trojan horse. Deepfakes, however, are a particular form of disinformation that has emerged recently due to advances in technology that generate believable audio, video, and text intended to deceive.

Generative AI tools like Midjourney and OpenAI’s ChatGPT are being used by hundreds of millions each month to generate new content (ChatGPT is the fastest-growing consumer application in history), but they are also the tools used to create deepfakes.

Henry Ajder, an independent AI expert, told WIRED, ‘To create a really high-quality deepfake still requires a fair degree of expertise, as well as post-production expertise to touch up the output the AI generates. Video is really the next frontier in generative AI.’” (1)

Fake vids, real geopolitics

Even if deepfake videos aren’t perfect, they’re already being used to shape geopolitics. The war in Ukraine could have gone very differently had Ukrainian soldiers believed the deepfake video from March 2022 of President Zelenskyy calling on his Ukrainian soldiers to lay down their arms.

The video was quickly diagnosed as a deepfake and taken down from social media: Zelenskyy’s accent was off, and both the audio and video had signs of doctoring.

It may only be a matter of time before deepfakes are used to escalate conflict between China and Taiwan (Taiwan receives more fake news online than any other country in the world, according to the Digital Society Project). In February 2023, The New York Times reported on the first known instance of a state-aligned disinformation campaign built on deepfake video: the Chinese government used deepfakes to create entirely fabricated broadcaster personas advancing pro-China views, with both voice and image fully computer-generated.
Google CEO sounds alarm on AI deepfake videos: ‘It can cause a lot of harm’

Google CEO Sundar Pichai says Google is intentionally limiting Bard AI’s capabilities as he warns about the imminent ease of using AI to create deceptive “deepfake” videos of public figures.

Details:

Current AI models can already generate highly realistic images of public figures, blurring the line between fiction and reality

Video and audio fabrications are becoming more sophisticated by the day

Pichai said that Google does not always understand the answers Bard AI currently provides

AI misinformation and scams are already very real and will only worsen as AI continues to advance. (2a)
Deep Fakes and National Security (Congressional Research Service)

“Deep fakes”—a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies—could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms.

How Are Deep Fakes Created?

Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data—such as photos, audio recordings, or video footage—that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete—often for thousands or millions of iterations—until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
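To make the generator/discriminator dynamic concrete, here is a minimal sketch in PyTorch. It is purely illustrative and not taken from the CRS report: the "counterfeit data" are single numbers the generator learns to draw from the same distribution as the real data, and every layer size, learning rate, and iteration count is an arbitrary choice.

# Minimal GAN training loop on toy 1-D data (illustrative only).
import torch
import torch.nn as nn

def real_batch(n):
    # "Real" data: samples from a Gaussian the generator must learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    # 1) Train the discriminator to separate real samples from counterfeits.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # outputs should cluster near 4.0

After enough iterations the generator's samples become statistically indistinguishable from the real ones to the discriminator; the same adversarial dynamic, run at vastly larger scale over images, audio, or video, is what produces convincing deep fakes.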

Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.
How Could Deep Fakes Be Used?

Deep fake technology has been popularized for entertainment purposes—for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could, however, be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election. Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to “undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” Likewise, in March 2022, Ukrainian President Volodymyr Zelensky announced that a video posted to social media—in which he appeared to direct Ukrainian soldiers to surrender to Russian forces—was a deep fake. While experts noted that this deep fake was not particularly sophisticated, in the future, convincing audio or video forgeries could potentially strengthen malicious influence operations.
How Can Deep Fakes Be Detected?

Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible. While commercial industry has been investing in automated deep fake detection tools, this section describes U.S. government investments and activities.

The Identifying Outputs of Generative Adversarial Networks Act (P.L. 116-258) directed the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research on GANs. Specifically, NSF is directed to support research on manipulated or synthesized content and information authenticity, and NIST is directed to support research for the development of measurements and standards necessary to develop tools to examine the function and outputs of GANs or other technologies that synthesize or manipulate content.

In addition, DARPA has had two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor, which concluded in FY2021, sought to develop algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.

Figure 1. Example of Semantic Inconsistency in a GAN-Generated Image

SemaFor seeks to build upon MediFor technologies and to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program is to catalog semantic inconsistencies—such as the mismatched earrings seen in the GAN-generated image in Figure 1, or unusual facial features or backgrounds—and prioritize suspected deep fakes for human review. DARPA requested $18 million for SemaFor in FY2024, $4 million below the FY2023 appropriation. Technologies developed by both SemaFor and MediFor are intended to improve defenses against adversary information operations. (2b)
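The program descriptions above stay at the policy level. As a rough illustration only, and not a description of MediFor or SemaFor internals, automated detection is often framed as binary classification over labeled real and synthetic media. The sketch below assumes a hypothetical local dataset laid out in folder-per-class form ("real/" and "fake/" subdirectories under media_dataset/).

# Illustrative sketch: deep fake detection framed as binary image classification.
# The dataset path and network are hypothetical and unrelated to DARPA's programs.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
data = datasets.ImageFolder("media_dataset/", transform=tfm)  # expects real/ and fake/ subfolders
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: real vs. synthetic
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        loss = loss_fn(detector(images), labels)
        opt.zero_grad(); loss.backward(); opt.step()

Classifiers like this tend to degrade as generators improve, which is precisely the cat-and-mouse dynamic discussed under Policy Considerations below.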
Democracy isn’t ready for its AI test (Axios)

AI-generated content is emerging as a disruptive political force just as nations around the world are gearing up for a rare convergence of election cycles in 2024.

Why it matters: Around one billion voters will head to the polls in 2024 across the U.S., India, the European Union, the U.K. and Indonesia, plus Russia — but neither AI companies nor governments have put matching election protections in place.

State of play: Election authorities, which are often woefully underfunded, must lean on existing rules to cope with the AI deluge.
AI startups tend to have few or no election policies.
After initially banning political uses of ChatGPT, OpenAI is now focused on banning “high volumes of campaign materials” and “materials personalized to or targeted at specific demographics.”

How it works: AI could upend 2024 elections via…
Fundraising scams written and coded more easily via generative AI.
A microtargeting tsunami, since AI lowers the costs of creating content for specific audiences — including delivering undecided or unmotivated voters “the exact message that will help them reach their final decisions,” according to Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation.
Incendiary emotional fuel. Generative AI can create realistic-looking images designed to inflame, such as false representations of a candidate or communities that are targets of a party’s ire.

Social media platforms, meanwhile, are cutting back on their election integrity efforts.
Meta’s election teams face an uncertain future, with another round of company layoffs expected this month. Meta policy communications director Andy Stone declined to comment on how the company is adjusting its election efforts for AI. The company has spent more than $13 billion since 2016 on safety and security measures after Russian disinformation flooded Facebook during the 2016 campaign.
Twitter’s slashed staff struggled with misinformation in the 2022 midterms, and owner Elon Musk is fueling mistrust, tweeting Tuesday: “Trust nothing.”

Between the lines: Newer platforms have little experience of big elections, let alone six in one year, and fewer local offices than more established rivals.
TikTok has moved aggressively to hire laid-off Meta staff with election experience, according to current and former Meta employees Axios spoke to.

Of note: Secretary of State Antony Blinken announced in a speech Tuesday that the State Department has developed an AI-enabled content aggregator “to collect verifiable Russian disinformation and then to share that with partners around the world.”

What they’re saying: Katie Harbath, who led Facebook’s election efforts from 2013 to 2019 and is now a consultant, told Axios “the 2024 election is going to be exponentially more challenging than it was in 2020 and 2016.”
“We have to get past finger-pointing” about past problems, Harbath said, and work instead on AI impacts. “Election plans cannot be spun up in days or weeks. Work should start 18 months to 24 months ahead of election day,” she added, noting that U.S. primary season begins in just seven months.
Allie Funk, tech research director at Freedom House, told Axios that generative AI is kicking off an era of “automated disinformation” and “will lower the barrier of entry for shady companies” selling election services.

On the government side, efforts to grapple with AI are just beginning.
In Congress, Rep. Yvette Clarke (D-N.Y.) introduced a bill requiring disclosure when AI is used for political ads.

Federal Election Commission vice chairman Sean Cooksey, a Republican, thinks AI ads can be regulated via existing rules.

The National Association of Secretaries of State has added AI developments into its “threat landscape,” spokesperson Maria Benson told Axios.

U.K. science and technology minister Chloe Smith told parliament last week that the U.K. government is establishing a “central coordinating function to seek out risks” ahead of the country’s 2024 election, but will apply existing laws to punish abuses. (3)

Open Hearing on Deepfakes and Artificial Intelligence

On Thursday, June 13, 2019 at 9:00 am, the House Permanent Select Committee on Intelligence convened an open hearing on the national security challenges of artificial intelligence (AI), manipulated media, and “deepfake” technology. It was the first House hearing devoted specifically to examining deepfakes and other types of AI-generated synthetic data. During this hearing, the Committee examined the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, “post-truth” future.

Witnesses:

Danielle Citron, Professor of Law, University of Maryland Francis King Carey School of Law

Jack Clark, Policy Director, OpenAI

Dr. David Doermann, SUNY Empire Innovation Professor and Director, Artificial Intelligence Institute, University at Buffalo (4)

What Next?

Ideally, a renewed version of the failed legislation itemized below – and some of the recommendations for Congress from the CRS – will take flight sooner rather than later.

This governmental activity still raises the question: What role will the private sector play in addressing this imminent threat? And what responsibility do individuals have to build the civic and digital media literacy needed to think critically and discern these threats?
H.R. 2395 (117th): DEEP FAKES Accountability Act

Overview: To combat the spread of disinformation through restrictions on deep-fake video alteration technology.


Status: Died in a previous Congress
Congress operates in two-year cycles that follow elections. Each cycle is called a “Congress.” This bill was introduced in the 117th Congress, which ran from Jan 3, 2021, to Jan 3, 2023. Bills are not carried forward from one Congress to the next.

This bill was introduced on April 8, 2021, in a previous session of Congress, but it did not receive a vote.

Although this bill was not enacted, its provisions could have become law by being included in another bill. It is common for legislative text to be introduced concurrently in multiple bills (called companion bills), re-introduced in subsequent sessions of Congress in new bills, or added to larger bills (sometimes called omnibus bills).

Summary: The summary below was written by the Congressional Research Service, which is a nonpartisan division of the Library of Congress, and was published on Sep 21, 2021.

Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2021 or the DEEP FAKES Accountability Act:

This bill establishes requirements for advanced technological false personation records (i.e., deep fakes) and establishes criminal penalties for related violations.

Specifically, it requires producers of deep fakes to generally comply with certain digital watermark and disclosure requirements (e.g., verbal and written statements).

It establishes new criminal offenses related to (1) the production of deep fakes which do not comply with related watermark or disclosure requirements, and (2) the alteration of deep fakes to remove or meaningfully obscure such required disclosures. A violator is subject to a fine, up to five years in prison, or both.

It also establishes civil penalties and permits individuals to bring civil actions for damages.
Additionally, it revises the criminal offense of fraud in connection with certain identification documents to include deep fakes.

The bill also directs the Department of Justice to take certain actions, such as publishing a report related to deep fakes that includes a description of the efforts of Russia and China to use technology to impact elections.

Software manufacturers who reasonably believe software will be used to produce deep fakes must ensure it has the technical capability to insert watermarks and disclosures. (5)
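The bill describes its watermark and disclosure requirements only in general terms. Purely as a toy illustration of what a machine-readable disclosure attached to a file could look like, and not the mechanism the bill specifies, the snippet below embeds a text disclosure in PNG metadata using Pillow; the file names and the "Disclosure" field are hypothetical.

# Toy illustration: embedding a plain-text disclosure in PNG metadata with Pillow.
# Field name, wording, and file names are hypothetical, not mandated by the bill.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("synthetic_portrait.png")  # hypothetical AI-generated image

meta = PngInfo()
meta.add_text("Disclosure", "This image contains manipulated or synthetic content.")
img.save("synthetic_portrait_disclosed.png", pnginfo=meta)

# Reading the disclosure back out of the saved file:
print(Image.open("synthetic_portrait_disclosed.png").text.get("Disclosure"))

A real compliance scheme would also need disclosures that survive re-encoding and screenshots; metadata alone would not, which is one limitation of a sketch like this.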

Policy Considerations

Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms—in addition to deploying deep fake detection tools—may need to expand the means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.

Other analysts have expressed concern that regulation of deep fake technology could impose undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that responding with technical tools alone will be insufficient and that instead the focus should be on the need to educate the public about deep fakes and minimize incentives for creators of malicious deep fakes.

Potential Questions for Congress

Do the Department of Defense, the Department of State, and the intelligence community have adequate information about the state of foreign deep fake technology and the ways in which this technology may be used to harm U.S. national security?

How mature are DARPA’s efforts to develop automated deep fake detection tools? What are the limitations of DARPA’s approach, and are any additional efforts required to ensure that malicious deep fakes do not harm U.S. national security?

Are federal investments and coordination efforts, across defense and nondefense agencies and with the private sector, adequate to address research and development needs and national security concerns regarding deep fake technologies?

How should national security considerations with regard to deep fakes be balanced with free speech protections, artistic expression, and beneficial uses of the underlying technologies?

Should social media platforms be required to authenticate or label content? Should users be required to submit information about the provenance of content? What secondary effects could this have for social media platforms and the safety, security, and privacy of users?

To what extent and in what manner, if at all, should social media platforms and users be held accountable for the dissemination and impacts of malicious deep fake content?

What efforts, if any, should the U.S. government undertake to ensure that the public is educated about deep fakes? (2b)
