Picture AI and data privacy as dancers, stepping into an ever-changing rhythm neither entirely understands. This article aims to simplify the choreography, critiquing the gaps in current data protection laws and proposing the need for a new, harmonised framework.

Unpacking the drawbacks of current privacy laws

Current data protection laws, such as the GDPR in the EU, the CCPA in California and POPIA in South Africa, were designed for a less intricate digital landscape and stumble when navigating the complex world of AI. These laws focus disproportionately on how organisations handle data, sidelining the consumer’s right to manage personal information.

Assessing the risks and adverse effects arising from AI

The misuse of AI presents risks, from infringing personal privacy to significant societal disruptions. For instance, AI algorithms in job recruitment can perpetuate gender bias if trained on historically skewed data. In more extreme cases, some governments deploy facial recognition AI for oppressive surveillance, undermining democratic values.

Emphasising the role of user interaction and choice when it comes to AI and data privacy

Transitioning from traditional ‘opt-out’ defaults to ‘opt-in’ systems that require direct user interaction can offer a solution. For example, newer mobile operating systems now ask for user consent before specific types of data collection, enhancing individual control.
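One way to make the opt-in default concrete is to track consent per purpose, with collection disallowed unless the user has explicitly granted it. This is a minimal sketch, not any platform’s actual API; the purpose names and class are illustrative:

```python
from enum import Enum

class DataPurpose(Enum):
    ANALYTICS = "analytics"
    ADVERTISING = "advertising"
    CRASH_REPORTS = "crash_reports"

class ConsentStore:
    """Purpose-specific opt-in consent: nothing is collected unless granted."""

    def __init__(self):
        # The default is an empty set: opt-in, not opt-out.
        self._granted = set()

    def grant(self, purpose: DataPurpose) -> None:
        self._granted.add(purpose)

    def revoke(self, purpose: DataPurpose) -> None:
        self._granted.discard(purpose)

    def may_collect(self, purpose: DataPurpose) -> bool:
        return purpose in self._granted

store = ConsentStore()
store.grant(DataPurpose.CRASH_REPORTS)
print(store.may_collect(DataPurpose.CRASH_REPORTS))  # True
print(store.may_collect(DataPurpose.ADVERTISING))    # False
```

The design choice worth noting is that the safe answer is the default: an application must ask `may_collect` before each purpose, and absent an explicit grant the answer is always no.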

Clarifying the need for transparency and intelligibility in AI

Achieving clarity in AI remains a high yet elusive bar. Current tools fall short of providing the deep understanding consumers and regulators require. Without clear insight into how an AI system makes decisions, ensuring it acts in the best interests of society becomes challenging.

Revealing the economic motives and structural issues at play with AI and data privacy

Corporate profit often overshadows the need for individual privacy in data collection. To address this imbalance, systemic changes are essential, harmonising economic incentives with safeguarding consumer privacy.

Scrutinising the limits of risk-based approaches

Although risk-assessment frameworks like the one underpinning the European Union’s AI Act appear promising, they meet with scepticism. The concern is that they offer a veneer of security without effectively guarding consumer privacy, especially when AI technologies are developing at such a rapid pace.

Holding AI accountable and scrutinising new tools

Accountability in AI extends beyond mere legal compliance; it should also assure data quality. ‘Model cards’ serve as an example here. These concise documents accompany a machine learning model, outlining its purpose, performance metrics and known limitations, acting much like an identity card that presents a model’s essential details at a glance. However, the effectiveness of such tools in catalysing meaningful change remains debatable.
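As a minimal sketch of the idea, a model card can be kept as structured data that travels with the model. The field names and example values below are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A concise summary accompanying a trained model (illustrative fields)."""
    name: str
    intended_use: str
    performance: dict          # metric name -> value on a stated evaluation set
    limitations: list = field(default_factory=list)

    def summary(self) -> str:
        # One-line 'identity card' view of the model.
        metrics = ", ".join(f"{k}={v}" for k, v in self.performance.items())
        return f"{self.name}: {self.intended_use} ({metrics})"

card = ModelCard(
    name="cv-screening-v2",
    intended_use="Rank CVs for first-round screening, not final hiring decisions",
    performance={"accuracy": 0.91, "false_positive_rate": 0.07},
    limitations=["Trained on 2015-2020 applications; may reflect historical gender bias"],
)
print(card.summary())
```

Even a sketch like this makes the accountability question concrete: the limitations list forces the publisher to state, in writing, where the model should not be trusted.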

Spotlighting public initiatives and trailblazing solutions for AI and data privacy

The public sector has begun taking steps, with government-led initiatives focusing on creating ethical datasets. Emerging solutions like data trusts are also under trial, collectively reflecting a resolve to tackle the AI and data privacy quagmire.

Actions you can take next

The intricate dance between AI and data privacy demands new choreography, not mere tweaks to old steps. The urgency for a comprehensive, collective framework has never been more pressing. You can:

  • Become an active participant in this evolving narrative by joining our Trustworthy AI programme.
  • Back initiatives that aim for AI and data privacy reforms, such as the Future of Privacy Forum.
  • Exercise caution in your online data-sharing practices.