22 August 2020

“DEEPFAKES” AND THE LAW OF ARMED CONFLICT: ARE THEY LEGAL?

by Eric Talbot Jensen, Summer Crockett 

The use of misleading “deepfakes” has risen dramatically across the globe. As with so much emerging technology, deepfakes will inevitably become a part of armed conflict. While perfidious deepfakes would almost certainly violate the law of armed conflict, those that amount to ruses would not. Determining which uses of deepfakes in armed conflict would be legal also requires considering their impact on the civilian population.

“Sir, you had better take a look at this. I just received this video from higher headquarters.” The commander walked over to the desk of his communications specialist. On the specialist’s monitor was a video message from the Blue Republic’s Chief of Defense Staff:

“We have just brokered a surrender from the Redland forces. We anticipate they will be approaching your front lines within the hour to complete the surrender. As part of the surrender, we have committed to have our soldiers stand down and refrain from the appearance of any hostile activity. So, have your forces stand down and prepare to receive the Redland forces.”

Cheers immediately erupted in the command tent. Their forces had been under heavy attack from the Redland military and were close to having to withdraw from their defensive positions in the city of Azure.

“Can you confirm that is legitimate?” the commander asked his communications specialist.

“It has every indication of being authentic. I have tried to reach back by different means of communication, but can’t get through,” the communications specialist replied.

News had traveled fast. The commander could hear soldiers and even some civilians who had come out of their places of refuge celebrating in the city streets outside. Another report came into the command post. “Sir, the outposts are reporting that large numbers of Redland forces are slowly approaching the city. What should we do?”

Several hours later, after the Redland forces had entered the city and then suddenly initiated violent attacks against the Blue Republic soldiers, the Blue Republic commander stood before the commander of the Redland forces. When he angrily raised the question of the anticipated surrender, the Redland commander could hardly suppress his glee. “Oh, you mean the faked video we made of your Chief of Defense Staff announcing our surrender? You shouldn’t believe everything you see,” he laughed, and signaled his soldiers to take the Blue Republic commander away.

Deepfakes

While hypothetical, the above scenario is not unrealistic.

The ability to create deepfakes, clever manipulations of technology that present someone doing or saying something they never did or said, is proliferating (see here, here, and here). While early deepfake technology crudely superimposed the facial images of famous people onto the bodies of porn stars, the technology has become highly effective and is now used in more productive and potentially dangerous ways (see here). Recent deepfakes have depicted President Obama swearing at President Trump and Nancy Pelosi in a drunken stupor. Russia has been accused of using deepfakes during the 2016 U.S. election. Doctored footage has also been promulgated in an attempt to influence British opinions on migrants and refugees (see here).

Researchers find that while people approach traditional news sites with a healthy skepticism, they abandon this skepticism when they read posts on social media platforms.

A 2017 New York Times article lists three factors that compound the problem. First, the platforms that promulgate deepfakes are designed to spread information faster than fact-checkers can respond. Second, because most deepfakes are received from “friends,” recipients assume they are trustworthy. Finally, receipt of the same message from multiple “trusted” sources adds credibility to an otherwise unbelievable story. And those most familiar with deepfake technology argue that refuting a faked story in a timely manner, once it has been publicly released, is nearly impossible.

Deepfakes in Armed Conflict

As the scenario at the beginning of this post indicates, deepfake technology will become too useful and effective in armed conflicts to resist. While few uses of deepfakes would be prohibited per se by the law of armed conflict, any perfidious use would be unlawful. Other uses intended to terrorize the population or violate the constant care obligation would also violate the law.

The Line between Ruse and Perfidy

The most obvious unlawful use of deepfakes in armed conflict would be those uses that are perfidious. Article 37 of AP I defines perfidy as “acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence.”

Article 37 prohibits killing, injuring, or capturing an adversary by resort to perfidy and provides examples of perfidious acts.[1] One of those examples is the “feigning of an intent to negotiate under a flag of truce or of a surrender.” As illustrated by the simple scenario at the beginning of this post, deepfakes could be used to feign surrender and then conduct attacks on the receiving force. Because such attacks would succeed only through the victim’s reliance on the law of armed conflict, this use of deepfakes would be perfidious, and therefore unlawful.

On the other hand, it is easy to envision uses of deepfakes that would not amount to perfidy but would instead be considered lawful ruses. For example, a deepfaked communication from a commander used to manipulate the movement of forces or military supplies would be a mere ruse. Similarly, a deepfaked video containing inaccurate intelligence information might significantly impact the conduct of military operations but would also be a ruse.

Deepfakes like these offer the potential to lawfully deceive and misinform adversaries and to gain significant military advantage, and they will inevitably be used on the battlefield. Such uses, as long as they do not amount to perfidy, are almost certainly lawful. Yet an important question as to their legality remains: their impact on civilians, to which we now turn.

Impact on Civilians

Not all ruses or other non-attack uses of deepfakes are per se lawful. In particular, deepfake technology is likely to violate the law of armed conflict when it is employed to affect, influence, or deceive civilians.

Two provisions of AP I greatly limit the use of deepfakes. First, Article 57(1) requires that constant care be taken to spare the civilian population, civilians, and civilian objects during military operations. Second, Article 51(2) prohibits acts or threats of violence the primary purpose of which is to spread terror among the civilian population.

In the example above, civilians heard from Blue Republic soldiers that Redland was going to surrender because of the seemingly reliable yet false information in the deepfake. The civilians then left their areas of safety. Thus, when Redland launched its attack, the number of civilians caught in the conflict was much higher than it otherwise would have been. Redland may therefore have violated its duty of constant care for civilians.

Deepfakes can also easily be deployed as tools to cause panic among civilians. For example, video content could claim that a nuclear attack, severe natural disaster, or biological attack is imminent, when in actuality the fake clip was meant only to incite hysteria.

The risk of significant detrimental effects on civilian populations greatly increases when deepfakes are promulgated through publicly available sites, especially social media platforms, because of the perceived reliability and believability of content shared there. Such uses might cause terror and violate the constant care provision of Article 57. To protect civilians, States must regulate the use of deepfakes to ensure compliance with the law of armed conflict.

Potential Solutions

How the law of armed conflict approaches deepfakes now can have ripple effects for centuries of warfare. Solutions, drawn from both domestic and international law, need to be created to help protect civilians, ensure constant care, and avoid acts of terror. Some feasible options are banning their use, requiring platforms to filter content, establishing effective means of securing alibis, watermarking content, and creating authenticated platforms.

Banning Use. Indiscriminate deepfakes are highly dangerous. Banning publicly available deepfakes in armed conflicts through treaties or customary international law presents several benefits. First, a ban protects the social contract of society. When populations cannot trust that what they see in the media is factual, at least when it is presented as such, a breakdown of trust follows. Second, there is historical precedent in the law of armed conflict suggesting that certain methods or means of warfare, such as deepfakes, may require a ban.[2] Prohibiting this technology through international or domestic law would ensure civilians are protected from the lack of trusted information and would mitigate the spread of terror.

Platform Filters. A second proposal suggests enacting domestic laws or policies requiring social media platforms to create filters, detection technology, or flagging capabilities to mark deepfakes. Because most deepfakes spread through these platforms, the platforms are well positioned to monitor their own content efficiently. This option would greatly decrease the chances that deepfakes used in armed conflict would reach civilians and incite terror or violate the constant care provision.
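
To make the proposal concrete, the following is a minimal sketch in Python of what a platform-side flagging pipeline might look like. The detector is a stub standing in for a trained classifier, and the function names and thresholds are illustrative assumptions rather than any platform’s actual system.

```python
from dataclasses import dataclass

# Stub standing in for a trained deepfake classifier. A real platform
# would run the upload through a model that returns the probability
# that the clip is synthetic.
def detector_score(video_bytes: bytes) -> float:
    return 0.93  # fixed placeholder score for demonstration

@dataclass
class UploadDecision:
    allowed: bool
    flagged: bool
    label: str

# Hypothetical policy thresholds, chosen only for illustration.
FLAG_THRESHOLD = 0.70   # warn viewers the content may be synthetic
BLOCK_THRESHOLD = 0.95  # withhold publication pending human review

def screen_upload(video_bytes: bytes) -> UploadDecision:
    """Score an upload and decide whether to publish, flag, or hold it."""
    score = detector_score(video_bytes)
    if score >= BLOCK_THRESHOLD:
        return UploadDecision(False, True, "held for human review")
    if score >= FLAG_THRESHOLD:
        return UploadDecision(True, True, "published with a synthetic-content warning")
    return UploadDecision(True, False, "published")

print(screen_upload(b"uploaded video bytes"))
```

The two-threshold design reflects a policy trade-off: uncertain content is labeled rather than suppressed, while only high-confidence fakes are held back entirely.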

Alibis. Using electronic devices to establish viable alibis for soldiers, high-profile political figures, and government leaders may also be a way to mitigate the effects of deepfakes on civilians. Several companies unaffiliated with governments currently track the location of millions of phone users by pinging their devices. While this raises privacy concerns, it can also prove where a person was at a given time and what they may have been doing. Using this data in armed conflict would enable military personnel to verify the actual whereabouts of the person depicted in a deepfake. Through such verification, armed forces could discredit deepfakes more quickly and protect civilians from their harmful effects.
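
As a simple illustration of the alibi idea, the sketch below checks a suspect video’s claimed time and place against a person’s device location records. The coordinates, time window, and distance tolerance are invented for the example; a real system would draw on the commercial tracking data described above.

```python
from datetime import datetime, timedelta
import math

# Invented sample data: (timestamp, latitude, longitude) pings for the
# official depicted in a suspect video.
pings = [
    (datetime(2020, 8, 22, 13, 55), 40.2518, -111.6493),
    (datetime(2020, 8, 22, 14, 5), 40.2520, -111.6490),
]

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def alibi_contradicts(claimed_time, claimed_lat, claimed_lon,
                      window=timedelta(minutes=10), max_km=5.0):
    """True if a ping near the claimed time puts the person far away."""
    for t, lat, lon in pings:
        if abs(t - claimed_time) <= window:
            if km_between(lat, lon, claimed_lat, claimed_lon) > max_km:
                return True
    return False

# The video claims the official spoke elsewhere at 14:00 (coordinates invented).
print(alibi_contradicts(datetime(2020, 8, 22, 14, 0), 41.90, -110.00))  # True
```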

Watermarking. Watermarking, or certifying valid content, especially content related to important news, political figures, or footage of soldiers, would be another option. A watermark could become the standard through domestic law or could ripen into customary international law through common practice. Watermarking video content would make it easier for civilians and military personnel to ensure that the videos they view come from a trusted source and contain authentic footage.
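
One way such a watermark could be implemented is as a cryptographic signature over the released footage. The sketch below assumes the widely used Python cryptography package and an Ed25519 key pair; it illustrates the idea rather than prescribing any particular standard.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the publisher (say, a ministry of defense press office)
# would generate this key pair once and distribute the public key widely.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw video file contents"

# Publisher side: sign the exact bytes of the released footage.
signature = private_key.sign(video_bytes)

# Viewer side: verify the footage against the published key.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))                 # True
print(is_authentic(video_bytes + b" tampered", signature))  # False
```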

Authenticated Platforms. Finally, authenticated platforms could protect civilians and nations. These platforms would be verified, and all uploaded content would be vetted for accuracy. Domestic or municipal law approaches would likely be required. Civilians seeking verified, accurate information would then have reliable sources during armed conflict. While there are some concerns with this solution, authenticated platforms could help preserve a free exchange of ideas because the information on them would have been verified in advance, protecting civilians from misinformation and acts of terror.
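
Building on the signing sketch above, an authenticated platform might publish an upload only when its signature verifies against a registry of known publisher keys. The registry, publisher identifier, and function below are hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical registry mapping publisher identifiers to public keys.
blue_republic_key = Ed25519PrivateKey.generate()
trusted_publishers = {"blue-republic-mod": blue_republic_key.public_key()}

def accept_upload(publisher_id: str, data: bytes, signature: bytes) -> bool:
    """Publish only content that verifies against a registered key."""
    key = trusted_publishers.get(publisher_id)
    if key is None:
        return False  # unknown publisher: reject outright
    try:
        key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

clip = b"official footage"
signature = blue_republic_key.sign(clip)
print(accept_upload("blue-republic-mod", clip, signature))     # True
print(accept_upload("blue-republic-mod", clip, b"\x00" * 64))  # False
```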

Conclusion

Deepfakes present an inevitable innovation in the way armed conflicts are fought. As such, it is vital to determine which uses violate the law of armed conflict. Clearly, any perfidious use, such as a deepfaked video feigning surrender in order to facilitate an attack, will be unlawful. Uses of deepfakes that spread terror among the civilian population would also be unlawful. Finally, commanders must be aware of their obligation to take constant care to spare the civilian population in all military operations. While egregious uses of deepfakes might violate this obligation as well, the law of armed conflict will need to continually adapt to these technological advances to ensure international law is followed and civilians are protected.

Eric Talbot Jensen is a Professor of Law at Brigham Young University in Provo, Utah.

Summer Crockett is a third-year law student at Brigham Young University and a research assistant to Professor Jensen.
