
14 December 2021

The Rise Of Voice Cloning And DeepFakes In The Disinformation Wars

Jennifer Kite-Powell

In 2020, it was estimated that disinformation in the form of fake news cost around $78 billion annually. But deepfakes, once found mainly on social media, have matured and, fueled by increasingly sophisticated artificial intelligence, are moving into the business sector.

In 2019, Deeptrace, a cybersecurity company, reported that the number of online deepfake videos had doubled in under a year, reaching close to 15,000.

Startups like Truepic, which has raised $26 million from M12, Microsoft's venture arm, have taken a different approach to deepfakes: rather than identifying what is fake, they track the authenticity of content from the point it is captured.
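
The point-of-capture approach rests on a simple cryptographic idea: hash the image the moment the sensor produces it and sign the digest with a key held on the device, so any later edit breaks verification. Here is a minimal sketch of that general technique in Python, using the cryptography package; it is an illustration only, not Truepic's actual pipeline:

```python
# Sketch of point-of-capture content signing (illustrative only,
# not Truepic's actual implementation).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice, the private key would live in the device's secure hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Hash the image at capture time and sign the digest."""
    digest = hashlib.sha256(image_bytes).digest()
    return device_key.sign(digest)

def verify_later(image_bytes: bytes, signature: bytes) -> bool:
    """Any pixel-level edit changes the hash and fails verification."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

original = b"...raw sensor bytes..."
sig = sign_at_capture(original)
print(verify_later(original, sig))         # True: untouched since capture
print(verify_later(original + b"x", sig))  # False: modified after capture
```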

Yanchev says that this is especially true if those images are distributed for commercial or ideological purposes.

"Consider unique nature of personal data such as voice image biometrics that are being processed by machine learning algorithms and the impact a deep fake may have on the real person if misused," said Yanchev. "Fake ID verification on primary IT services - phone, email, online rental, and money transfers - even if not strictly misused is challenging to justify under GDPR or CCPA without the approval of the subject of the deep fake."

According to Experian, in the rush to digital, both consumers and businesses have become significantly more reliant on technology platforms and devices throughout their daily lives.

David Britton, vice president of industry solutions, global ID and fraud at Experian, said the digital world is still an anonymous environment. "It is extremely difficult to know who is on the other end of the wire, and this continues to drive the rise in fraud against both businesses and consumers directly."

"Voice cloning is part of a broader technology set designed to emulate human physical attributes and includes artificially created images, video and voice, generally known as deep fakes,” said Britton. “The technology is being used for legitimate purposes, but fraudsters can 
Voice cloning takes snippets of a recorded text from a person and applies artificial intelligence (AI) to dissect the speech patterns from the voice samples samples. This gives the user the ability to create audio recordings or streams that weren’t spoken by the voice owner.
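
To make the mechanics concrete, here is roughly what that pipeline looks like with an open-source toolkit; this is a minimal sketch assuming Coqui's TTS Python package and its publicly available YourTTS model, and the file names are placeholders:

```python
# Sketch of few-shot voice cloning with an open-source toolkit
# (assumes Coqui's TTS package; model name is real, file paths
# are illustrative placeholders).
from TTS.api import TTS

# Multilingual model that conditions on a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

# A few seconds of the target speaker is enough to extract a speaker
# embedding and synthesize new speech in that voice.
tts.tts_to_file(
    text="Words the voice owner never actually spoke.",
    speaker_wav="reference_sample.wav",  # snippet of recorded speech
    language="en",
    file_path="cloned_output.wav",
)
```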

Britton says voice cloning can be applied in several ways: helping people who have lost their voice communicate in their own voice; allowing content creators to reduce the time voice actors must spend on a project; or producing entertainment content, like video games or films, where creators need to quickly produce consistent voice content even if the actor is no longer available or died before the project was released.

But he warns that fraudsters are using the same technology to create more authentic-sounding impersonations.

"This will allow them to successfully pass voice biometrics systems, or to dupe family members or acquaintances via phone, to send funds or to authorize approvals for access to sensitive systems, or to distribute funds to the fraudster," said Britton.

Britton says consumers need to be aware that fraudsters continue to leverage technology to steal data such as credentials, personal information, or money by either attacking the victim's bank or communicating with the victim directly.

"Consumers need to be vigilant to understand that these emerging threats exist, and while they aren't yet widely used today, we believe they will be increasingly popular among fraudsters," said Britton. "Consumers should pay close attention to voice messages or phone calls that sound like someone they know, who is asking for information or funds, particularly if it seems out of character for that individual."

Britton says voice cloning can also be a challenge for government leaders, as a tool that opponents or state-sponsored attackers may use to spread misinformation.

"Fraudsters can also use the technology to create effective social engineering attacks and impersonate a known acquaintance via a phone call or voicemail message," said Britton. "It is also possible that voice cloning could be used to bypass voice-based biometric systems during digital authentication processes."

In non-criminal cases, Britton says there are emerging issues and questions about the content creator's authority to use another person's voice to create content that the voice owner never recorded.

"A recent case of this was reported in June 2021, where a director used voice cloning to make the late Anthony Bourdain's voice say a phrase he never [..] said," added Britton. "As these technologies advance, other questions emerge around the creator's rights to create content from voice actors or others who may not have given express permission or been compensated for the use of their voice."
