How political deepfakes could decide who wins the general election
The first face you see belongs to a familiar BBC News presenter. She talks in the careful, slightly halting way they do when a big story is breaking live. The prime minister, she explains, has been caught in a financial scandal. He has secretly earned “colossal sums” from a project that was supposed to benefit ordinary Britons, we’re told, as the bulletin cuts to a defiant Rishi Sunak. Standing at a lectern, he gives his version of events.
Except it isn’t Sunak and the BBC newsreader isn’t real either — both are digital clones, created using AI.
The video, which appears to have been made with off-the-shelf software, is one of scores that have circulated on Facebook in recent weeks, all following the same basic template: BBC News packages presenting Sunak as mired in controversy. The clips were identified by Marcus Beard, 30, a disinformation expert, who says they mark a shift in the sophistication of the bogus content being unleashed on the British public.
If experts are right, it’s just a taste of what’s to come. America, Britain, India and more than 60 other countries will hold national elections in the coming year. More people will vote — an estimated two billion — than at any other time in history. And as they head to the polls, they’re set to be targeted with a barrage of AI-generated fake news on a scale never seen before. Beard believes that, if urgent action isn’t taken, democracy could be derailed by deepfakery.
“We need to stop thinking about this as ‘something that could happen’ during the election,” he says. “It’s already here.”
A former civil servant, he previously led the Cabinet Office’s efforts to monitor and respond to Covid disinformation during the pandemic. The aim was to debunk and rebut conspiracy theories and posts touting dangerous quack cures. “I think we had an impact; I can’t claim we solved the problem,” he says.
Having now founded his own consultancy called Fenimore Harper, he spends much of his time hunched over a MacBook in his office in a gentrified enclave of Haggerston in east London, building tools designed to hunt out deceptive material created by “generative” AI. Energetic and unassuming, he insists that, overall, he’s an optimist. But he’s also anxious that a slew of warnings issued over the past year — by ministers, watchdogs, think tanks, intelligence agencies and figures at some of the biggest technology companies — haven’t been acted on. The volume of fake political content being distributed on social media is growing by the day, he says — even as big social media platforms scale back measures to prevent it.
An early glimpse of what the age of digital disinformation would look like came in 2017, via the Reddit social media site. A user calling himself “deepfakes” posted videos where he’d used AI to transpose the faces of Hollywood actresses — the Israeli star Gal Gadot was an early victim — onto the bodies of porn actresses.
A few months later, a clip emerged of Barack Obama calling Donald Trump “a total and complete dipshit”. The curtain was pulled back at the end of the video: AI had been used to turn Obama into a digital puppet and the comedian Jordan Peele supplied the voice. At the time it was described as “the deepest fake of them all”. Today, it looks quaint.
Because since then we’ve hurtled into what the American journalist Ezra Klein has called the era of “zero-cost bullshit”, where powerful new AI tools capable of immeasurably more convincing results have become much easier to use and far cheaper to access. Audio deepfakes, in particular, are now scarily good. Record a person speaking for one minute and you have enough to clone their voice. A leading provider of this kind of service is offering a special deal: subscribe for $1 a month.
Visual fakes are trickier and the results less polished. But they’re advancing fast and it’s already easy to imitate a well-known figure. The journalist Danae Mercer Ricci recently took to Instagram to demonstrate an AI phone filter that gave her, in real time, Taylor Swift’s face. “This is incredibly terrifying to me,” she said.
Perhaps, she conceded, the effect wasn’t entirely perfect (though viewed on a smartphone screen, it was very, very good). But as she says, these kinds of tools are set to become “exponentially smarter… So we need to be informed. We need to be aware and we need to figure out ways to protect our [children] — because they are growing up in a strange, strange world.”
As the volume of high-quality bogus content increases, Sir Robert Buckland, the former justice secretary, is among those concerned about the so-called “liar’s dividend” — the idea that if this assault on the truth is corrosive enough, then we’ll cease trusting anything. “The liar’s dividend is the whirlwind we’ll reap if we’re casual about the need to preserve the truth,” he says.
Alastair Campbell, the former spin doctor to Tony Blair who now co-hosts The Rest Is Politics podcast, suggests another unsettling scenario: what if the people entrusted to protect democracy decide instead to capitalise on AI disinformation? Technology, and its abuse by bad actors, has already played a role in elections, he says. And generative AI systems, which can churn out flowing, lucid prose, audio fakes and realistic images, mean it’s increasingly easy to be duped by a machine. “The issues [this] gives rise to will only be addressed if politicians and the big tech companies decide it is a priority for them,” Campbell says.
“But many of the democracies already have political and media systems in which some of the key players are more likely to exploit the new methods that technology affords than to call them out and challenge them.
“Does anyone have any doubt what Donald Trump would do, if given a choice between using something that could help him electorally or dismissing it because it is wrong?”
In 2020, the US presidential election was decided in three swing states — Arizona, Wisconsin and Georgia — by a total of fewer than 45,000 votes. Beard, together with many other analysts, thinks that a well-timed deepfake could tip a knife-edge race. We’ve already had a glimpse of what this might look like: last month, households in New Hampshire — an important early state in the US presidential primary process — received robocalls featuring a deepfaked voice that sounded just like Joe Biden, urging them not to vote.
Similar potshots have been taken at UK democracy. On the eve of the Labour Party conference in October, another audio deepfake emerged, this time of Sir Keir Starmer launching a foul-mouthed tirade against members of his staff. It was quickly exposed as phoney, but within hours it had clocked up more than a million hits, and Buckland admits that he was shocked at how well the AI mimicked the leader of the opposition. “When you listen to it, you think, ‘That does really sound like his intonation and the way he talks,’ ” he says. “To learn fairly quickly that it’s a deepfake is frightening.”
And if you only need a minute of somebody speaking to replicate their voice, the victims won’t be limited to politicians or celebrities. “Deepfakes could ruin and damage the lives of millions of us if we’re not careful,” Buckland adds.
At least one journalist for a national paper shared the Starmer audio as if it were genuine, and it gained traction among those who wanted the lie to be real. Skwawkbox, a socialist website, described how “the Labour Party has failed to deny the authenticity” of the recording. Another left-wing site, Vox Political, claimed that a “check on whether the file was computer-generated gives a more than 90 per cent probability that the voice on the recording is human”.
The following month, another piece of fake audio surfaced. It mimicked Sadiq Khan, the London mayor, who seemed to say, “I don’t give a flying shit about the Remembrance weekend.” It was shared across far-right websites. The Met Police investigated complaints before concluding that there was no criminal offence, a decision that frustrated MPs but which came as little surprise to legal experts, who say UK legislation is failing to keep up with the AI threat.
More recently, the video clips identified by Beard, featuring a fake Sunak in a bogus BBC News bulletin, were posted on Facebook, where they reached as many as 400,000 people despite appearing to break several of the platform’s policies. Up to about £13,000 was spent circulating 143 of them on Facebook’s advertising network.
It seems unlikely that anybody who scrutinised the Sunak scandal clips closely would be duped. Text appears under the video in a style that you wouldn’t see on the BBC. As the videos progress, the content makes less and less sense. Listen to the whole thing and it’s clear that it’s a scam. The fraudsters behind it — Beard suspects that they are based in India — want you to contact them, via a spoofed BBC News page, to swindle you out of money.
But even Beard, an expert immersed in the world of deepfakes, was unnerved by how convincing the Sunak voice is. One danger, he thinks, is that a drip-feed of deepfakes will fuel voter apathy, even if we quickly skip past them on our content feeds. “Think of how many people will only have watched the first three or four seconds of these videos — where a BBC News presenter says Rishi Sunak is involved in a financial scandal,” he says.
So could a deepfake actually sway a national election? It is impossible to know for sure, but there’s a chance that one already has. Two days before Slovakia went to the polls last year, a piece of audio surfaced on Facebook. Monika Todova, a prominent journalist in the country, would later describe to The Times the alarm she felt when she first encountered it — because she heard herself having a conversation with Michal Simecka, a leading liberal politician, that had never really happened.
Todova and Simecka appeared to be discussing a plot to rig the election, partly by buying votes from Slovakia’s Roma minority. The audio was quickly identified as fake, with the news agency AFP finding signs of AI manipulation. But it had been released during a 48-hour blackout period that precedes polling in Slovakia, in which the media and political candidates are supposed to stay silent. That made it hard to debunk, even though hundreds of thousands of voters heard it.
Whoever created it also benefited from a Facebook policy under which, remarkably, only faked videos — and not audio clips — went against its rules. Simecka ultimately lost a tight vote to a pro-Kremlin populist. As Tom Tugendhat, the UK security minister, said in a recent speech, “Who knows how many votes it changed — or how many were convinced not to vote at all?”
How the UK would respond to a similar incident would depend, in part, on who was behind it. The National Security Act makes it an offence for a foreign state to spread disinformation, but it wouldn’t apply to a British individual with no ties to an overseas power. It is illegal to make false claims about a candidate’s character or conduct in order to affect an election result — but not to use AI-generated content to spread lies about their policies or opinions, says Henry Parker of Logically, a company that specialises in combating harmful content.
It’s also illegal to influence another person’s vote by threatening their reputation, and the Online Safety Act makes it an offence to spread fake material with the intent of causing physical or psychological harm. But it’s hard to see how those laws would apply to political disinformation and, in Parker’s view, our legislation seems to leave plenty of room for deepfakes — including bogus audio of politicians’ voices — to cause trouble. There is a kind of safety net: Ofcom, the communications regulator, has the power to force social media companies to ensure that users stick to the platforms’ terms and conditions.
But the expert consensus is that if the government has been slow to respond to the AI threat, then Silicon Valley isn’t doing enough either. On a recent visit to the UK, Jim Steyer, the founder of Common Sense Media, America’s biggest tech advocacy group, said he was “deeply concerned” at how both X (formerly Twitter) and Meta, which owns Facebook and Instagram, appear to have scaled back their efforts to stamp out harmful material. “It’s a really troubling trend,” he says. “In this year of major elections in the UK and the United States and elsewhere, some of the biggest social media platforms are gutting their content moderation teams.”
X didn’t reply to an email asking it to respond to Steyer’s comments. (A policy that bans its users from sharing “synthetic, manipulated or out-of-context media that may deceive or confuse people” didn’t stop deepfake pornography involving Taylor Swift going viral on the platform last month.) A Meta spokesman said that it had invested more than $20 billion (£16 billion) “to enhance safety and security” since 2016 and that it had hundreds of people “dedicated to elections” — but they didn’t catch the Sunak deepfakes identified by Beard.
In Steyer’s view, we failed to regulate social media effectively and to anticipate the impact it would have, especially on children — and now we’re repeating that mistake with AI. He wants social media companies to face steep financial penalties for spreading disinformation. “We need guardrails, now,” he says.
Jan Beyer, a disinformation expert at Democracy Reporting International, a think tank, says it helps to realise that two sets of AI algorithms are at work. One of them creates the deepfakes; the other — managed by the social media companies — decides who sees them. When we log on to X or Instagram or TikTok, we’re essentially outsourcing our attentiveness. “The platforms are in control,” says Beyer. “They can decide whether a piece of content goes viral or not.” If they see a deepfake surfacing at a strategically sensitive time, such as the eve of an election, “they have a responsibility to act,” he argues.
Meanwhile, the National Cyber Security Centre (NCSC), an arm of GCHQ, is among those braced for a digital disinformation arms race that will extend far beyond this year’s elections. In the future, “AI-created hyper-realistic bots will make the spread of disinformation easier,” it recently warned.
Today’s deepfakes will soon seem crude, says Beyer. Imagine, for example, a future campaign in which many millions of voters are targeted with highly personalised disinformation on the morning of the vote — where powerful AI has been used to craft messages that appeal to the prejudices and mores of each individual, inferred from an analysis of their online activity, and where another AI has hacked into the defence systems of big social media platforms, taking over hundreds of influential accounts.
Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has already suggested that AI could be put to these kinds of ends. “The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern,” he told the US Congress last year.
Speaking at a science conference in Switzerland, the historian Yuval Noah Harari picked up on the same theme. When it comes to regulating AI, the stakes could not be higher, he argued. “Contrary to what some conspiracy theories assume, you don’t really need to implant chips in people’s brains in order to control them or to manipulate them,” he said.
“For thousands of years, prophets and poets and politicians have used language and storytelling in order to manipulate and to control people and to reshape society. Now AI is likely to be able to do it. And once it can… it doesn’t need to send killer robots to shoot us. It can get humans to pull the trigger.”