
Deepfakes: the new front in the disinformation wars

New programs are being developed to identify and combat the rise of synthetic video, but this growing threat cannot be confronted with technology alone.

[Image: President Zelensky was the target of a deepfake video]

In a media landscape in which the truth often seems to get lost in noise and disinformation, the increasing use of deepfakes is a particularly worrying trend. Synthetic video content – that is, video which is wholly or partly generated by artificial intelligence – has been in widespread use since at least 2017. Until now, however, it has largely been used for non-consensual pornography, targeting women by using their likeness without their consent.

This year, the technology has been weaponised in the Russia-Ukraine war. In March, a deepfake video was released, apparently showing Ukrainian President Volodymyr Zelensky asking Ukrainian troops to lay down their weapons. The attempt was unsuccessful: after the TV channel Ukraine 24 was hacked so that its website displayed the video, Zelensky was alerted and posted a Facebook video explaining that it was a fake. Twitter and YouTube were quick to announce that they were tracking and removing the video wherever it was being shared, as it breached their rules on deceptive synthetic media.

However, as a playbook for defeating deepfakes that the Carnegie Endowment for International Peace released in 2019 put it, while digital fakery can, for now, be detected pretty easily, “People have a visceral reaction to video and audio. They believe what their eyes and ears are telling them – even if all signs suggest that the video and audio content is fake.” Since that report, artificial intelligence has advanced, substantially increasing the realism of deepfakes and cutting the resources required to create them.

How do we face this coming tide of disinformation? New technology is part of the picture. In late 2020, the US Defense Advanced Research Projects Agency (DARPA) launched a Media Forensics programme to develop new prototypes to identify and combat falsified media. These use detection algorithms, which analyse media content to determine whether manipulation has occurred, and fusion algorithms, which combine information from multiple detectors to produce an "integrity score" for each media asset.
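To make the fusion idea concrete, here is a minimal sketch in Python of how several detector outputs might be combined into a single integrity score. The detector names, weights and scoring scale are illustrative assumptions for this example, not DARPA's actual design:

```python
# Illustrative sketch only: detector names, weights and the 0-1 scale
# are assumptions, not the real DARPA Media Forensics system.

def fuse_scores(detector_scores, weights=None):
    """Combine per-detector scores (0 = likely manipulated,
    1 = likely authentic) into a weighted-average integrity score."""
    if weights is None:
        weights = {name: 1.0 for name in detector_scores}
    total_weight = sum(weights[name] for name in detector_scores)
    weighted_sum = sum(score * weights[name]
                       for name, score in detector_scores.items())
    return weighted_sum / total_weight

# Example: three hypothetical detectors examine the same video clip.
scores = {
    "face_warping": 0.2,  # strong signs of facial manipulation
    "audio_sync": 0.4,    # lip movement poorly matched to audio
    "metadata": 0.9,      # file metadata looks untouched
}
integrity = fuse_scores(scores, weights={"face_warping": 2.0,
                                         "audio_sync": 1.5,
                                         "metadata": 0.5})
```

A low fused score here would flag the clip for human review; the point of fusion is that no single detector has to be decisive on its own.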

But as Nina Schick, author of Deepfakes: The Coming Infocalypse, pointed out recently in Wired magazine, “our crisis of information will not be for technology to solve alone. Any technological solutions will be useless unless we humans are able to adapt to this new environment where fake media is commonplace.” This will require “inoculation” through digital-literacy and awareness training, along with concerted action from government, military and civil groups.

We need to prepare now for this rising threat to our information ecosystem. At the moment, it is relatively easy to spot deepfake videos. As the recent video of Zelensky shows, such fakes can be countered with a prompt response. But as the technology advances, it is only a matter of time before we will need additional tools and strategies to distinguish a deepfake from the real thing.

This piece is from the Witness section of New Humanist summer 2022.
