31 August 2023

Deep-fake content is worming its way into our lives through our newsfeeds

MICHAEL WILKOWSKI

We are long past the point where an untrained eye can spot the signs of artificial intelligence-generated images or videos, popularly known as “deep-fakes.”

The latest chapter of the “Indiana Jones” franchise features an extended opening sequence with a de-aged Harrison Ford so realistic that one could suspect the filmmakers had travelled back to the 1980s.

Social media images of a supposed explosion at the Pentagon in May raised fears of another 9/11 and caused a brief market selloff before they were exposed as fakes. Quite simply, we are being trained to accept that we cannot believe our eyes.

But while the mainstream media and opinion-makers have done much to highlight the dangers of these images, little consideration has been given to the problem of deep-fake text, which is not new and has been with us, more quietly, for many years.

For the past decade, I’ve worked as chief technology officer at Silent Eight, a company that leverages artificial intelligence (AI) to help banks and other financial institutions such as HSBC, First Abu Dhabi Bank and Standard Chartered fight financial crime.

Much of what I do, as both an engineer and a businessperson, involves interpreting and generating text.

Consequently, I’ve observed with growing frustration how the algorithms and machine-learning systems of the Big Tech companies have only exacerbated this problem.

When I first used the internet in the late 1990s, it was exciting to know that we were gaining access to all of these people and windows to different worlds. Perhaps the coolest thing was the sense of impartiality. Everyone, regardless of location, was at last receiving the same data.

But now the world has changed completely, because every website you access checks your location, keyboard settings, preferred language, time zone and internet protocol address. This could create a different experience for each user, which may be beneficial when it comes to, say, personalized retail shopping.
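To make that concrete, here is a minimal sketch of the signals a site can read on every request. It assumes a Python server built with Flask; the endpoint name and the profiling flow are purely illustrative, not any real site’s code.

```python
# Minimal sketch: signals available to a server on every request.
# Assumes Flask; the /feed endpoint is a hypothetical illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/feed")
def feed():
    signals = {
        # Network level: the IP address, which geolocation databases
        # can resolve to a city-level location.
        "ip": request.headers.get("X-Forwarded-For", request.remote_addr),
        # Preferred languages, sent by the browser with every request.
        "languages": request.headers.get("Accept-Language", ""),
        # Browser and operating-system details.
        "user_agent": request.headers.get("User-Agent", ""),
    }
    # Keyboard layout and time zone are usually collected client-side
    # (for example via JavaScript's Intl.DateTimeFormat) and posted back.
    return jsonify(signals)
```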

But once a website’s software discovers your IP address, it builds a graph of your connections, classifies you as part of a community, and begins to serve you the news it judges relevant to that specific community.
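As a rough illustration of that classification step, the sketch below clusters a toy interaction graph into communities. It uses the open-source networkx library, and the users and edges are invented, not any platform’s actual pipeline.

```python
# Toy sketch: classify a user into a community from an interaction graph.
# Assumes networkx is installed; all data here is invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph()
g.add_edges_from([
    ("you", "alice"), ("you", "bob"), ("alice", "bob"),       # one cluster
    ("carol", "dave"), ("dave", "erin"), ("carol", "erin"),   # another
    ("bob", "carol"),                                         # a weak bridge
])

# Partition the graph into densely connected communities.
for i, members in enumerate(greedy_modularity_communities(g)):
    print(f"community {i}: {sorted(members)}")

# Once "you" lands in a community, the feed can be narrowed to stories
# that performed well inside that community.
```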

Now there’s no need to search for information. Just open your phone or tablet and scroll down your news feed of choice. Even using a virtual private network won’t completely disguise your location, as the software can infer your time zone, and therefore your approximate whereabouts, from the dates and times of your web usage.
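Here is an illustrative sketch of that inference: given only the UTC hours at which a user is active, try every offset and keep the one that places the most activity inside a plausible waking window. The sample data and the window are assumptions made for the example.

```python
# Sketch: infer a user's UTC offset from activity times alone.
# The sample hours and the 08:00-23:00 waking window are assumptions.

# UTC hours of a user's requests over several days.
utc_hours = [13, 14, 14, 15, 16, 18, 19, 20, 21, 22, 23, 0, 1]

def infer_offset(hours):
    """Return the UTC offset that puts the most activity in local daytime."""
    best_offset, best_score = 0, -1.0
    for offset in range(-12, 15):
        local = [(h + offset) % 24 for h in hours]
        score = sum(1 for h in local if 8 <= h <= 23) / len(local)
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

print(f"likely UTC offset: {infer_offset(utc_hours):+d}")
```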

Under these circumstances, it’s very difficult for people to avoid being categorized and “siloed” into a media bubble where they only receive news that’s deemed relevant to their specific community. These algorithms get smarter every time you pause, learning what you are interested in, what you click on, and what you scroll straight past.
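A minimal sketch of that learning loop appears below: each click, pause, or skip nudges a per-topic weight up or down. The update rule and the numbers are illustrative assumptions, not any platform’s real model.

```python
# Sketch: engagement-driven profile learning. Signals and rates are
# illustrative assumptions, not a real recommender's parameters.
def update_profile(weights, topic, signal, rate=0.1):
    """Nudge interest in `topic` toward `signal`:
    1.0 for a click, 0.5 for a pause, 0.0 for scrolling straight past."""
    old = weights.get(topic, 0.5)           # start every topic at neutral
    weights[topic] = (1 - rate) * old + rate * signal

profile = {}
events = [("democrats", 1.0), ("democrats", 1.0),
          ("republicans", 0.0), ("sports", 0.5)]
for topic, signal in events:
    update_profile(profile, topic, signal)

print(profile)  # clicked topics drift up, skipped topics drift down
```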

However, if you are able to operate from another location, you will immediately start to get a different response and the content in your news feed will change.

One consequence of this engineering by Big Tech is that if you start to read something about the Democratic Party in America, for example, the software will keep providing information about Democrats and not give you anything on the opposition party.

Likewise, if you start to read something about the Republicans, you’re unlikely to see much news about the Democrats in your feed. The end result is that we end up receiving lots of similar, heavily filtered news stories in our feeds.
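The sketch below closes the loop: rank candidate stories by those learned weights, and the topics you already click on crowd everything else out of view. The stories and scores are invented for illustration.

```python
# Sketch: ranking by learned interest closes the filter-bubble loop.
# Story list and profile scores are invented for illustration.
stories = [
    {"title": "Democrats unveil new bill",   "topic": "democrats"},
    {"title": "Republicans respond to bill", "topic": "republicans"},
    {"title": "Local team wins final",       "topic": "sports"},
]
profile = {"democrats": 0.9, "republicans": 0.1, "sports": 0.5}

ranked = sorted(stories, key=lambda s: profile.get(s["topic"], 0.5),
                reverse=True)
for story in ranked:
    print(story["title"])

# Each cycle of clicking the top result and re-learning pushes the
# low-scoring topic further down, until it effectively disappears.
```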

And this is how deep-fake text affects our daily lives.

These algorithms have become brilliant at learning our profiles and feeding us relevant content. Consequently, we’ve become very attached to them, which only encourages the process and makes it harder to exit the communities and media bubbles into which we have been sorted.

While this may not appear to be of immediate concern, our dependence on these technologies has serious implications for the future. I believe that over the next 10 years, people will stop visiting and reading websites, and professional journalism and editing jobs will disappear.

Most people are only going to see what’s in their feeds and a generative AI model such as ChatGPT will help to produce summaries based on their profiles and requests. But they’ll have no idea if the news in their feed is real, fake, computer-generated, or produced by a human.

For now, the Big Tech companies continue to fight against the proliferation of deep-fake content created by bots that behave like people.

But this is a potentially lucrative market. So instead of trying to stop people from making deep-fakes, they’ll probably end up encouraging them: just use a designated programming interface and pay a small fee.

When we reach that stage, it will be impossible to tell whether our information is coming from a bot.

Politicians will love this, because these tools help to craft influential messages. Data targeting companies such as Cambridge Analytica, SCL Elections, and AggregateIQ have already been accused of influencing political campaigns and elections. The advent of authorized deep-fake chatbots will simply further their powers of persuasion.

It seems that we’re heading into a time when it will become harder to tell right from wrong and good from bad, because we’ll have become so used to consuming opinion rather than fact.

Right now you can still tell the difference between real and deep-fake content, like de-aged action heroes. But soon, you won’t be able to, because these programs are becoming smarter and more accurate.

Nevertheless, I think that by then we will have become so used to the concept of deep-fake content that we will neither mind nor notice that we’re no longer communicating with humans.

Michael Wilkowski, chief technology officer at Silent Eight, is a security and telecommunications expert with over 20 years’ experience in software development, databases, and infrastructure.
