The spread of misinformation and deceptive content is not a new challenge. However, advancements in technology, particularly AI, have amplified this problem, enabling the creation of compelling fake audio and video content known as deepfakes.

As Microsoft co-founder Bill Gates highlights in “Gates Notes”, the implications of AI-generated deepfakes for elections and democracy are profound. This post delves into the risks deepfakes pose to the electoral process. We also explore how AI’s continuous evolution can both hinder and help the fight against such disinformation.

The perils of deepfakes and misinformation in elections

The emergence of AI-generated deepfakes has opened Pandora’s box of potential dangers to the democratic process. These risks include:

  1. Emotional manipulation: The emotional impact of hearing a loved one in distress, as exemplified by a fake voice message claiming a kidnapping, can be severe, leading to hasty, irrational decisions such as sending ransom money.
  2. Election tampering: Bad actors can weaponise deepfakes to spread false narratives about candidates, casting doubt on the legitimacy of election outcomes and influencing voter behaviour.
  3. Rapid dissemination: In the digital age, fake content can spread like wildfire on social media and other platforms before authorities expose its falsehood, causing widespread damage to public perception and trust.
  4. Last-minute influence: A strategically timed deepfake release on the eve of an election can significantly impact undecided voters and potentially sway the results in favour of one candidate.

Harnessing AI for detecting and combating deepfakes

While deepfakes pose a substantial threat, Gates points out two reasons for cautious optimism:

  1. Human resilience: History has shown that people can adapt and learn to be more discerning information consumers. Over time, email users became less susceptible to scams. Similarly, we can build resilience against deepfakes by cultivating healthy scepticism towards sensational or unverified content.
  2. AI-powered solutions: Interestingly, AI can play a dual role in the battle against deepfakes. On the one hand, it enables the creation of sophisticated fake content; on the other, it can be harnessed to detect and counter such deceptive materials. For instance:

a. Deepfake detectors: Companies like Intel have already developed AI-based deepfake detectors to identify manipulated audio and video content. These detectors utilise machine learning algorithms to analyse patterns and anomalies in the data, flagging potential deepfakes for further scrutiny.

b. DARPA’s efforts: The US government agency DARPA is actively developing technologies to identify and expose altered video and audio, providing a significant defence against the malicious use of deepfakes.
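To make the idea of anomaly analysis concrete, here is a deliberately simplified sketch in Python. It is not how Intel's or DARPA's detectors actually work (real systems rely on trained deep-learning models over rich audio-visual signals); it merely illustrates the general principle of flagging statistical outliers in per-frame measurements. The feature values, function names, and threshold are all invented for illustration.

```python
# Toy illustration of anomaly-based flagging, NOT a real deepfake detector.
# We treat each clip as a list of hypothetical per-frame artefact
# measurements and flag clips containing strong statistical outliers.
from statistics import median

def anomaly_score(frame_features):
    """Return the largest deviation from the clip's median, scaled by the
    median absolute deviation (a robust measure of typical variation)."""
    med = median(frame_features)
    abs_devs = [abs(x - med) for x in frame_features]
    mad = median(abs_devs) or 1e-9  # guard against division by zero
    return max(abs_devs) / mad

def flag_for_review(frame_features, threshold=10.0):
    """Flag a clip for human scrutiny if any frame is a strong outlier."""
    return anomaly_score(frame_features) > threshold

genuine = [0.51, 0.49, 0.50, 0.52, 0.48]   # consistent frames
tampered = [0.51, 0.49, 0.50, 3.90, 0.48]  # one frame with a sudden spike

print(flag_for_review(genuine))   # False
print(flag_for_review(tampered))  # True
```

Real detectors learn what "normal" looks like from training data rather than applying a fixed threshold, but the underlying principle is the same: deviations from expected patterns trigger further human scrutiny rather than an automatic verdict.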

The path ahead: A cyclical process of innovation and adaptation

Addressing the deepfake challenge is an ongoing and dynamic process. As detection methods evolve, so will the tactics used by those creating fake content.

The battle will be cyclical, as new countermeasures prompt the development of more sophisticated deepfakes and vice versa. While perfection may remain elusive, it’s crucial to recognise that progress does not depend solely on achieving absolute success. Instead, it depends on building resilience and minimising the impact of misinformation.

End thoughts

At Michalsons, we understand the gravity of this issue and are committed to assisting political parties and organisations in navigating the complexities of AI while upholding democratic principles. It’s one of the reasons we established a robust AI law practice.

By staying ahead of evolving technology and fostering a culture of critical thinking, we can collectively fortify our democracies against the threats posed by deepfakes and misinformation. Together, let us uphold the sanctity of free and fair elections, safeguarding the cornerstone of our society—democracy.