We recently said deepfakes “remain the weapon of choice for malign interference campaigns, troll farms, revenge porn, and occasionally humorous celebrity face-swaps”. Skepticism that these techniques would work on a grand scale, such as an election, remains in place. In the realm of malign interference and smaller-scale antics, however, deepfakes continue to forge new ground.
It’s one thing to pretend to be anonymous law enforcement operatives at the other end of a web call, with no deepfake involvement. It’s quite another to deepfake the aide of a jailed Russian opposition leader.
Zooming into deepfake territory
Multiple groups of MPs were recently tricked into thinking they were talking to Leonid Volkov, a Russian politician and chief of staff to Alexei Navalny’s 2018 presidential election campaign. Instead, Dutch and Estonian MPs at different meetings were presented with an entirely fictitious entity forged in the deepfake fires. From the various reports on these incidents, it’s not entirely clear whether fake Leonid responded to questions or stuck to a pre-written script. We also don’t know if the culprits faked his voice, or spliced real snippets together to form sentences. Based on this report, it appears the Zoom call was conversational, but details are sparse. The aim of the game was most likely to get MPs on record pledging substantial financial support for the Russian opposition.
How did this happen?
It appears basic security practices were not followed. Nobody verified it was him beforehand. Nobody emailed him to confirm, and nobody sent a quick “Hey there…” on social media. This is rather incredible, considering people doing an Ask Me Anything on Reddit will hold up a “Hi Reddit, it’s me” note as a bare minimum. With no verification procedure in place at all, disaster is sure to follow.
One wonders, given the absence of contact with the real Leonid, how fake Leonid got the Zoom sessions arranged in the first place. Can anyone book a call with a room full of MPs simply by claiming to be somebody else? Do online meetings regularly take place with no effort to confirm everyone involved is who they say they are? This all seems a little peculiar and faintly worrying.
Locking down deepfakes: in it for the long haul
Outside the realm of verification-free Zoom calls with parliamentarians, more moves are afoot to detect deepfakes. Sony has stepped into a battleground already populated by DIY tools and researchers trying to fight fakery online. Elsewhere, we have AI-generated maps. While this sounds scary, it’s not something we should be panicking over just yet.
Deepfakes continue to become more embedded in public consciousness, which can only help raise awareness of the subject. You want some Young Adult fiction about deepfakes? Sure you do! Actors helping to popularise the concept of fake video as something to be expected? Absolutely. Wherever you turn…there it is.
Low-level noise and quiet misdirection
For now, malign interference campaigns and small-scale shenanigans remain the order of the day. It’s never been more important to take steps to verify the people at the other end of your web calls. Whether the threat is an AI-generated deepfake or someone with a really convincing wig and fake voice, politicians need to enact some basic verification routines.
The real worry here is that if they fell for this, who knows what else has slipped by them via email, social media, or even plain old phone calls. We have to hope that whatever verification systems are in place for politicians’ other methods of communication are significantly better than the above.