As voters around the world prepare to head to the polls in a series of high-stakes general elections, a powerful new application of artificial intelligence known as ‘deepfakes’ threatens to shake up political discourse and sow mass confusion and distrust.
Deepfakes, highly realistic fake videos or audio recordings created by machine learning algorithms, have rapidly grown in sophistication to the point where they are indistinguishable from real content to the untrained eye.
While this technology has legitimate applications in entertainment and the arts, it could also be weaponized for political deception and propaganda ahead of crucial elections in the United States, Brazil, Nigeria, Taiwan and other democracies over the next two years.
“We are on the verge of a perfect storm,” warns Wilson Standish, director of the Digital Forensics Institute at the Atlantic Council think tank. “This technology is incredibly advanced while still being accessible to anyone with a decent computer. Meanwhile, public trust in institutions and the media is at an all-time low, and foreign adversaries are eager to exploit our divisions. Put this all together and you have a recipe for large-scale mayhem.”
“We saw what happened in 2016 with the WikiLeaks dump and ‘fake news,’ and that was accomplished with simple bot networks,” said Sandra Marling, a fellow at the Harvard Belfer Center. “Now imagine turning that up to 11. You could see a fake video of a candidate spewing racial slurs, planning terrorist attacks, accepting bribes, anything you want, and most people wouldn’t be able to tell the difference.”[1]
Marling argues that the bigger threat is not vote manipulation but voter confusion, suppression, and apathy. “If people no longer know what’s real and have lost all trust, they may tune out and stop voting altogether. That is exactly what the enemies of democracy want.”
The United States is not alone in facing this threat. In Brazil, experts fear a repeat of the rampant disinformation and conspiracy theories that marred the 2022 election, this time with even more convincing fakes. “Last time there was enormous disruption even with crude falsehoods, and the president publicly attacked the electoral system itself as rigged,” said Paulo Xavier, director of the Brazilian fact-checking group Aos Fatos. “Now the lies will be slicker and more viral than ever.”
Across the Atlantic, Nigeria’s 2023 presidential election, which plunged the country into violence, offers a grim preview of what could come in the 2027 contest. Deepfake videos showing both leading candidates threatening voters and using hate speech circulated on WhatsApp in the final weeks of the campaign. “The technology is a force that amplifies division and hatred,” said Aminu Sadiq, a political science professor at the University of Lagos. “In countries with significant polarization and low trust, the potential for serious violence is enormous.”[2]
In Taiwan, officials are on high alert over a surge in deepfakes and other disinformation from mainland China ahead of the 2024 presidential election. “The Chinese Communist Party has already deployed crude deepfakes targeting Taiwan, and we expect much more sophisticated attacks this time,” warned Ting-Yu Chen of the Taiwan Fact-Check Center in Taipei.[3] The fakes could depict Taiwanese politicians surrendering to, or collaborating with, Beijing.
But the potential threat extends beyond borders and democratic competition. Terrorist groups like the Islamic State (IS) are experimenting with deepfakes to expand their influence and inspire extremists abroad. “You could make fake videos of ‘lone wolves’ carrying out attacks in Western cities, or deepfake videos of politicians insulting the Prophet,” said Samira Haddad, a Berlin-based extremism researcher. “These groups have already used basic fakery to recruit and incite, so embracing deepfakes is only a matter of time.”[4]
Others worry that deepfakes could ignite a nuclear crisis amid tense geopolitical standoffs. Vincent Wu, an arms control expert at the Asia Institute in Singapore, sketches a sobering scenario: “Imagine a deepfake video showing Kim Jong-un declaring a missile strike on Seoul, or Narendra Modi announcing an imminent attack on Pakistan. Amid the confusion and panic, nuclear powers could mistake it for a genuine preemptive strike and retaliate in kind, with disastrous results.”
So what can be done to weather these storms? Experts say there is no panacea, but they suggest a variety of measures that could help mitigate the damage.
The most urgent priority is to improve digital media literacy so that the global public thinks more critically about the content it consumes online. “Just as we teach children to question strangers who offer them candy, we need people to reflexively question shocking political videos that are too good (or bad) to be true,” says Michael Baron, an advisor to the European Union’s East StratCom Task Force, which combats Russian disinformation. “You don’t have to scrutinize every pixel to spot fakes. Just pay attention to red flags like unnatural speech patterns, blurring where detail is expected, and misaligned head movements and shadows.”[5]
Tech companies also have a critical role to play in improving their ability to quickly detect and remove deepfakes from their platforms, while avoiding playing a game of whack-a-mole. Facebook, Twitter, and Google have all released public deepfake datasets to help train AI-based screening tools and have pledged information-sharing partnerships with governments and academic institutions.
But the Atlantic Council’s Standish argues the platforms need to go further. “We need much more transparency about how platforms identify fakes in real time and what their criteria for removal are. Right now there is good reason to worry they will fall short in the heat of the moment.”
Digital forensics researchers in academia, media, and cybersecurity companies are racing to develop automated detection systems that can uncover the tell-tale artifacts of a deepfake’s synthetic origins. Although the forgers currently have the upper hand, promising breakthroughs continue to emerge: a UC Berkeley team recently released a detection model that boasts 97% accuracy.[6] But experts warn that this will ultimately be an arms race, as forgers will inevitably leverage the same machine learning techniques to evade screening.
Policymakers also have ways to shape the legal and normative environment around synthetic media. A growing number of countries have passed laws making malicious deepfakes a crime, and in countries like China, South Korea, and India, they can be punishable by years in prison. In the United States, several state laws ban deepfake pornography, and a proposal from Senator Marco Rubio would impose sanctions on foreign individuals or entities peddling election-related deepfakes.[7] But civil liberties advocates warn that overly broad laws could ensnare legitimate media and stifle artistic expression.
Norm-setting initiatives such as the Paris Call for Trust and Security in Cyberspace, which has been endorsed by more than 500 governments and companies, are also working to brand deepfakes as unacceptable election interference on par with ballot-box stuffing. “We want governments to commit, as a trust-building measure, not to use deepfakes in each other’s elections, while not hindering legitimate debate and public research into the technology itself,” explains Alexander Klimburg, director of the Global Commission on the Stability of Cyberspace.[8]
But ultimately, democratic societies will have to grapple with the grim reality of living in a world where what you see is no longer necessarily what you can believe. We may never be able to eradicate deepfakes, but we can develop the resilience and wisdom to pause before amplifying content that seems too sensational to be true.
“In a world where the lines between what is real and what is fabricated are blurring beyond recognition, it is up to all of us – journalists, leaders, educators and citizens – to defend facts and truth,” says Marling of the Belfer Center. “We either learn to navigate that wilderness together, or we let the fabric of reality shatter before our eyes. The fight for democracy has entered a new phase, and we must all rise to the occasion.”