2025-01-06
The rise of generative AI (GenAI) tools sparked widespread fears that the 2024 elections worldwide would be plagued by deepfakes and AI-generated disinformation, potentially swaying election outcomes and undermining democratic processes. However, recent studies and reports reveal a different story: despite the proliferation of AI tools, their impact on key elections in the U.S., UK, France, and beyond was minimal. This article explores why AI deepfakes failed to disrupt elections in 2024 and what this means for the future of democracy in the age of artificial intelligence.
—
The Reality of AI in Elections
In early 2024, voters in New Hampshire received a phone call that appeared to be from U.S. President Joe Biden, urging them not to vote in the primary. The call, however, was a deepfake: a near-perfect AI-generated imitation of Biden's voice. This incident, along with the growing accessibility of AI tools, fueled concerns that the 2024 elections would be inundated with fake content, blurring the lines between truth and fiction.
Yet, as the year unfolded, these fears largely went unrealized. Studies from organizations like the Alan Turing Institute's Centre for Technology and Security (CETaS) found that AI-generated disinformation played a marginal role in election outcomes. For instance, only 27 AI-driven disinformation campaigns were identified in the UK, French, and European Parliament elections, with no evidence of significant influence on voter behavior. Similarly, in the U.S., AI-generated content accounted for just 5.5% of fake content shared during the election cycle, according to the News Literacy Project.
Public awareness of deepfakes was high, with 94.3% of UK respondents expressing concern about the issue. However, exposure to harmful AI-generated content, including political propaganda, was limited. Only 5.7% of Britons reported encountering political deepfakes, and similar trends were observed in the U.S. and other regions.
—
The Crude Nature of AI Content
Much of the AI-generated content during the 2024 elections was described as “very crude,” often bearing identifiable marks or logos from the tools used to create it. This suggests that most of it was produced by amateurs rather than sophisticated campaigns. Examples included a deepfake image of Kamala Harris shaking hands with Stalin, a fake video of her addressing a communist rally, and an AI-generated image of a girl eating cockroach pizza as a protest against EU policies.
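Those telltale marks are, in practice, one of the first signals investigators check: many image tools write their name into EXIF fields or PNG text chunks. Below is a minimal sketch, using Python's Pillow library, of how such embedded generator hints can be surfaced; the file path and the watchlist of tool names are illustrative assumptions, and since metadata is trivially stripped on re-upload, its absence proves nothing.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical watchlist of generator names; real tools vary in what,
# if anything, they embed in a file's metadata.
GENERATOR_HINTS = ("dall-e", "midjourney", "stable diffusion", "firefly")

def find_generator_hints(path: str) -> list[str]:
    """Return metadata entries that mention a known AI image generator."""
    hits = []
    with Image.open(path) as img:
        # Pillow exposes format-specific metadata (e.g. PNG text chunks) via .info
        for key, value in img.info.items():
            if any(h in str(value).lower() for h in GENERATOR_HINTS):
                hits.append(f"{key}: {value}")
        # The EXIF 'Software' tag sometimes names the producing tool
        for tag_id, value in img.getexif().items():
            if TAGS.get(tag_id) == "Software" and any(
                h in str(value).lower() for h in GENERATOR_HINTS
            ):
                hits.append(f"Software: {value}")
    return hits

if __name__ == "__main__":
    import sys
    for hit in find_generator_hints(sys.argv[1]):  # e.g. suspect.png (hypothetical)
        print("possible generator marker ->", hit)
```

A hit here only suggests the file passed through a given tool; serious forensic work layers checks like this with visual analysis and emerging provenance standards such as C2PA.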
A Purdue University study found that 36.4% of AI-generated content was created for satire or entertainment, while only 24.2% was aimed at disinformation or political manipulation. This indicates that while AI tools are widely available, their use for malicious purposes remains limited.
—
The Real Threat: Mislabeling Real Content as Fake
A more troubling trend emerged instead: attempts to discredit real content by falsely labeling it as AI-generated. Researchers from the Institute for Strategic Dialogue (ISD) found that 52% of users on platforms like X, YouTube, and Reddit struggled to accurately identify the source of content, often mistaking real content for AI-generated fakes. This confusion undermines trust in online information and poses a significant challenge for democratic discourse.
A Pew Research Center survey in September 2024 revealed that 52% of Americans found it difficult to distinguish fact from fiction in election-related news. While this represents a slight improvement from 2020, it highlights the ongoing struggle to navigate an increasingly complex information landscape.
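One widely used building block for that kind of verification is perceptual hashing: reducing an image to a compact fingerprint that survives recompression and resizing, so a viral copy can be compared against a known authentic original. The sketch below uses the open-source imagehash library; the file names and the distance threshold are assumptions for illustration.

```python
import imagehash
from PIL import Image

def hash_distance(original_path: str, suspect_path: str) -> int:
    """Perceptual-hash distance between two images (0 = visually identical)."""
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return original - suspect  # ImageHash subtraction yields the Hamming distance

if __name__ == "__main__":
    # Hypothetical files: a wire-service original vs. a copy circulating online
    distance = hash_distance("press_photo.jpg", "viral_copy.jpg")
    # Thresholds are heuristic: roughly 0-5 usually means the same image
    # survived recompression, while larger values warrant a closer manual look.
    print(f"hash distance: {distance}")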
—
What Undercode Says:
The 2024 elections have provided valuable insights into the role of AI in shaping democratic processes. While fears of widespread deepfake-driven disinformation were largely unfounded, the mere awareness of AI's potential has had significant implications.
1. Limited Impact of AI-Generated Content: The minimal influence of AI deepfakes on election outcomes suggests that current safeguards, public awareness, and the crude nature of most AI content have collectively mitigated the threat. However, this does not mean the risk has been eliminated. As AI tools become more sophisticated, the potential for harm increases.
2. Public Awareness as a Double-Edged Sword: High public awareness of deepfakes has likely reduced their impact, but it has also contributed to a climate of suspicion and mistrust. The tendency to label real content as fake underscores the need for better media literacy and verification tools.
3. The Broader Implications for Democracy: AI-generated content has broader implications beyond elections. It can be used to promote hate speech, endanger the safety of political figures, and encourage unethical practices in political campaigns. The lack of clear disclosure when politicians use AI in campaigns is particularly concerning, as it sets a dangerous precedent for future elections.
4. The Role of Platforms and Policymakers: Social media platforms and policymakers must take proactive steps to address these challenges. This includes developing robust detection tools, promoting media literacy, and establishing clear guidelines for the use of AI in political campaigns.
5. A Call for Ethical AI Practices: The 2024 elections highlight the need for ethical AI practices and greater transparency in the use of AI tools. As technology continues to evolve, so too must our approach to ensuring its responsible use in democratic processes.
In conclusion, while AI deepfakes failed to disrupt the 2024 elections, their potential to undermine trust and distort reality remains a significant concern. The lessons learned from this year's elections must inform future efforts to safeguard democracy in the age of artificial intelligence.
References:
Reported By: Calcalistech.com