3 June 2022

The AI Defending Ukraine

Will Lockett

It seems these days AI can make everything better, from engineering insanely fast cars to making farms more efficient, to even unlocking the elusive power of nuclear fusion. But in Ukraine, a controversial AI is being repurposed to undermine the Russian war machine. The potential damage it could cause the Russians is enormous, but some question whether it should be used at all. So, welcome to the murky waters of Clearview AI.

Firstly, what is Clearview AI? In a nutshell, it is an incredibly powerful facial recognition AI, able to identify people in photos with high accuracy, even after a facial injury. That isn't the controversial part. Instead, the controversy lies in where Clearview gets its data. To work, it needs a truly gigantic dataset of photos, each linked to a name, email address or another form of identity. Rather than go about this honestly, Clearview scrapes this data from social media. So those hundreds of photos of you on Facebook that you're tagged in? Clearview has likely saved all of them, linked them to your name, address and even electoral roll entry, and trained its AI to spot you in any image.
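Under the hood, systems like this typically reduce each face photo to a numerical embedding and then search a database of embeddings for the closest match. The sketch below is a generic, simplified illustration of that nearest-neighbour lookup, not Clearview's actual code; the names, the threshold value and the tiny 3-dimensional vectors are all made up for illustration (real systems use embeddings of hundreds of dimensions produced by a deep network).

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(query_embedding, database, threshold=0.75):
    """Return (name, score) for the best match, or (None, score) if no
    database entry is similar enough. Purely illustrative logic."""
    best_name, best_score = None, -1.0
    for name, emb in database.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score

# Toy identity -> embedding database (hypothetical names, tiny vectors).
db = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.0, 1.0, 0.2],
}

query = [0.88, 0.15, 0.02]  # embedding computed from a new photo
name, score = identify(query, db)
print(name)  # the closest identity above the similarity threshold
```

The scale is what makes Clearview different: the same lookup, run against billions of scraped, identity-linked photos rather than a two-entry toy dictionary.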

Many see this as a violation of privacy. In fact, the EU sees it as a massive breach of data protection. So they have banned Clearview from operating in EU countries and have even issued them with fines.

But how can Clearview help with the war in Ukraine?

Well, Clearview has scraped 2 billion images off the Russian social network VKontakte. That, combined with their global database of 10 billion images, means that they can easily identify Russian soldiers, find their addresses and even get the contact details of relatives!

So when war broke out, Clearview offered its services to Ukraine, which initially declined. But as the refugee crisis worsened, Russian war crimes continued and Russian propaganda ramped up, Ukraine saw that it could use Clearview to devastating effect and put it into action.

Even Clearview's CEO doesn't know the exact details of how the tool is being used in practice, but we can make some educated guesses.

One significant potential use is undermining Russian propaganda. A lot of videos from the war are popping up online, each pushing a different narrative. Some show civilians being gunned down, dead Russian soldiers, Russian prisoners and straight-up war crimes by Russian troops. Of course, Russia claims these videos are fakes showing Ukrainian actors, or even Ukrainian civilians in prison being forced to wear Russian military uniforms. This narrative eggs on Russian soldiers, as it paints Ukraine as a deeply anti-Russian country. However, Clearview can identify the people in these videos, proving their nationality and providing their addresses and even next-of-kin contact details.

In other words, the whole world can quickly and easily identify when Russia is lying and spreading false narratives. So far, it seems that Clearview hasn’t found a single video of Russian soldiers that Ukraine faked.

Another potentially devastating application of Clearview AI comes into play with Russian cadavers.

Ukraine has estimated there have been 18,900 Russian casualties so far. But it seems the Russian military isn't informing next of kin of a soldier's death. As none of the soldiers have a way to contact home, their parents, siblings and friends have no idea whether they are alive or dead, or indeed, of the immense Russian losses. But Clearview could be used to change that. All Ukraine needs to do is scan the face of the deceased, find the contact details of their relatives or friends, and send them a message.

This makes Clearview a potentially potent psychological weapon that hits deep behind enemy lines.

Another powerful use of this technology is identifying Russian soldiers who have committed war crimes. Vast swathes of Ukraine are peppered with modern CCTV that is still functional, and plenty of civilians have mobile recording devices. This means that some of the Russian soldiers' atrocities, such as unlawful killings and rapes, are caught on camera. Clearview can identify the perpetrators in this footage and, yet again, provide contact details for their family and friends.

Imagine being a Russian wife whose husband has gone off to what she thinks is a patriotic 'military operation', only to be sent a video of him committing atrocious war crimes against innocent people. If that doesn't undermine morale, I don't know what will.

Finally, Clearview can be used to identify Russian saboteurs. Since day one of this war, Russian saboteurs, potentially including plain-clothed Russian soldiers, have infiltrated Ukrainian defenses looking to disrupt plans or even kill Ukrainian soldiers. By using Clearview at checkpoints, the Ukrainian military can quickly identify these saboteurs and put a stop to their devious work.

But, is any of this morally correct?

Well, firstly, Clearview isn't perfect. It can misidentify people. The CEO has even stated that it shouldn't be used as the sole method of identification. So, if it is used with too much confidence, or as the only tool for devastating psychological warfare, it can backfire catastrophically. For example, what if it misidentifies a saboteur and an innocent Ukrainian gets killed? What if the wrong Russian is pinned for a war crime?
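One common way to avoid acting on a match "with too much confidence" is to accept it only when the top score is both high and clearly separated from the runner-up, and to defer everything else to a human or a second identification method. The sketch below is a hypothetical gating policy of that kind, not how Clearview or Ukraine actually vet matches; the score and margin thresholds are invented for illustration.

```python
def identify_cautiously(scores, min_score=0.8, min_margin=0.1):
    """scores: dict of candidate name -> similarity score.

    Return a name only when the best match is strong AND clearly ahead of
    the second-best candidate; otherwise return None, signalling that the
    case needs secondary verification. (Illustrative policy only.)"""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked:
        return None
    top_name, top_score = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if top_score >= min_score and top_score - runner_up >= min_margin:
        return top_name
    return None  # ambiguous match: do not act on it automatically

print(identify_cautiously({"suspect_a": 0.95, "suspect_b": 0.60}))  # clear match
print(identify_cautiously({"suspect_a": 0.95, "suspect_b": 0.92}))  # too close, deferred
```

The point of the margin check is exactly the failure mode described above: two lookalikes can both score highly, and a system that simply returns the top hit would confidently pin the wrong person.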

But even if we ignore this problem, Clearview is a deeply troubling piece of technology. Used correctly, it can be a helping hand that gives Ukraine an edge. Used in the wrong way, it could be seen as a divisive, morally bankrupt and malicious war tactic that causes mass trauma and even the deaths of innocent people. The world just isn't set up to deal with such a weird and powerful weapon yet.

But for now, it seems that Ukraine is using Clearview with an air of caution. It took time for Ukraine to accept the tool, and we haven't yet heard of devastated Russian families, identified war criminals or saboteurs being shot on the spot, which is encouraging. After all, if we break our moral code in war, then what are we really fighting for?

But we do know that an AI is being used in war, possibly for the first time ever. For now, it seems it's being used conservatively, but the technology has the potential to open up a can of worms as psychological warfare and cyber warfare collide.
