
18 August 2020

Get Ready For Deepfakes To Be Used In Financial Scams

Jon Bateman

Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?


The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the best-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments, causing her company’s stock price to fall while the criminals profit from short sales.

At first blush, this scenario is not much different from the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, driving down its stock price by creating the false impression of a grassroots backlash against its brand.

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total: one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals have already come true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.


What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first tool of financial deception, and they are far from the biggest threat. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.
