As the 2024 U.S. presidential election approaches, the growing threat of deepfakes—digitally manipulated content designed to deceive—is raising alarm among election officials, political campaigns, and voters. These AI-generated videos, audio clips, and images represent a new frontier in disinformation, capable of misleading voters, disrupting campaigns, and casting doubt on election integrity. With deepfake technology advancing and becoming more accessible, safeguarding trust in the democratic process is more critical than ever.
From fake speeches by candidates to fabricated footage of voter fraud, deepfakes could be used to undermine public confidence in the electoral system. This is not merely theoretical; deepfakes have already appeared in political contexts, and in 2024, they could significantly influence voter perception and turnout. Understanding how these forgeries could be weaponized—and how the political system is preparing to address them—is essential to protect the integrity of the election.
Deepfakes in Political Campaigns
Deepfakes present a significant risk to how voters perceive candidates and their platforms. During the 2020 election, misinformation and manipulated media were already widespread, but deepfakes take this threat further by creating convincing, fictitious content. Imagine a viral video in the days leading up to the election showing a candidate making offensive remarks. Even if debunked, the damage to their reputation could be lasting.
Earlier in 2024, deepfake phone calls, purportedly from President Joe Biden, urged New Hampshire voters not to participate in the primary. Though the calls were quickly exposed as fake, the incident highlights how easily deepfakes can manipulate voter behavior. Combined with the speed of social media, these tactics could shape narratives that are difficult to reverse, especially in a close race where even a small shift in voter opinion can affect the outcome.
Undermining Trust in Election Results
While deepfakes pose a danger during campaigns, their impact could be even greater after Election Day. In the 2020 election, unfounded claims of voter fraud fueled movements like “Stop the Steal.” Deepfakes could escalate such efforts in 2024, with fabricated videos or audio being used to falsely “prove” election fraud. For example, deepfakes showing poll workers tampering with ballots could quickly spread online, sowing distrust in the election results.
Election officials may not be able to identify and debunk deepfakes quickly enough, and by the time they do, the damage may already be done. Experts warn that while deepfakes are unlikely to alter vote counts directly, their real danger lies in undermining trust in the election process. This could lead to disputes and protests, particularly in a tight race, with deepfakes fueling conspiracy theories.
A New Tool for Election Manipulation
Foreign interference in U.S. elections is not new. In 2016, Russia’s disinformation campaign revealed vulnerabilities in the system, using social media to influence public opinion. In 2024, foreign actors like Russia, China, and Iran may have a new weapon: deepfakes. These countries have been implicated in past attempts to meddle in U.S. elections, and deepfake technology provides them with a more sophisticated tool to destabilize the political system.
For example, a deepfake video showing a U.S. presidential candidate in a compromising situation, released close to Election Day, could influence the race. Even if quickly debunked, the video could create confusion and doubt, particularly in battleground states where margins are slim. Foreign entities could also use deepfakes to deepen divisions within the U.S., exploiting existing political rifts to erode confidence in election outcomes.
Can the System Keep Up?
While the threat of deepfakes is clear, the U.S. election system is largely unprepared to address this type of disinformation. In previous elections, misinformation was often debunked only after it had already spread widely. Election officials worry that current tools to detect deepfakes are insufficient, especially given how quickly fake content can go viral.
Some efforts are underway to combat deepfakes. Major tech companies like Google, Meta, and OpenAI are developing tools to detect AI-generated content, and there have been proposals to implement digital watermarks or cryptographic signatures to verify the authenticity of political media. However, these solutions are still in their infancy, and detection technology often lags behind the creation of deepfakes. Even when deepfakes are identified, the damage to public trust may already be irreparable.
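To make the idea of cryptographic signatures concrete, the sketch below shows one way a campaign or newsroom could sign a media file at publication so that anyone can later check whether the file has been altered. This is only an illustration of the underlying concept, not how any particular platform, detection tool, or watermarking standard works; the Python `cryptography` package, the Ed25519 key pair, and the sample media bytes are all assumptions chosen for the example.

```python
# Minimal sketch (not any platform's real scheme): a publisher signs the hash of
# a media file, and anyone holding the matching public key can verify that the
# file has not been altered since it was signed.
# Requires the third-party "cryptography" package: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the media bytes and sign the digest with the publisher's private key."""
    return private_key.sign(hashlib.sha256(media).digest())


def verify_media(media: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Re-hash the media and check the signature; any alteration invalidates it."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()   # kept private by the campaign or newsroom
    original = b"...bytes of an official campaign video (hypothetical placeholder)..."
    signature = sign_media(original, publisher_key)

    public_key = publisher_key.public_key()        # published so anyone can verify
    print(verify_media(original, signature, public_key))                 # True: untouched
    print(verify_media(original + b"edit", signature, public_key))       # False: altered
```

In practice the difficult part is not the mathematics but adoption and distribution: verifiers need a trusted way to obtain the publisher's public key, and unsigned or re-encoded copies still circulate freely, which is why watermarking and detection efforts continue in parallel.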
In the absence of a foolproof system to detect and combat deepfakes, public awareness will be critical. Voters must learn to approach online content with skepticism, particularly election-related material. Fact-checking by media organizations and independent watchdogs will also be essential in countering the spread of disinformation.
A New Era of Election Manipulation
The 2024 U.S. election faces unprecedented challenges from deepfakes. These AI-generated forgeries could be used to sway voter opinion, disrupt campaigns, and undermine trust in the electoral process. As political actors—both domestic and foreign—exploit this technology, the risks to democracy increase. Although efforts to detect and counter deepfakes are progressing, time is of the essence.
Ultimately, the integrity of the 2024 election may depend on more than just the actions of election officials and tech companies. Public vigilance will be crucial in navigating this new era of disinformation. As the line between reality and fabrication blurs, safeguarding democracy will require a concerted effort from all corners of society.