If the ability to adapt to change is the measure of intelligence, the ever-evolving landscape of artificial intelligence (AI) puts that idea to the test. The possibilities appear boundless, from the current state of generative AI to the prospect of artificial general intelligence one day. However, with great power comes great responsibility, a principle embodied in the EU’s proposed AI Act. This landmark legislation could redefine AI’s role across critical infrastructure, recruitment, healthcare, media, customer service, gaming, advertising, marketing and communication. In this article, we take an analytical journey through a series of case studies, illustrating how the EU AI Act could reshape the way AI applications are used and developed within the EU and beyond its borders.

Decoding the EU AI Act

The EU AI Act, proposed in April 2021, is nearing fruition, aiming to demystify AI by categorising applications into four risk levels. Since the Act’s introduction, the voices of key stakeholders, from European Commission Executive Vice-President Margrethe Vestager to tech giants such as Meta (formerly Facebook) and Google, have resonated in the arena of public opinion. The legislation has drawn an intriguing blend of support and scepticism, revealing the complexities of AI governance.

AI case studies under the EU AI Act

The EU’s regulatory framework addresses the potential harms and benefits of artificial intelligence applications across various sectors by dividing them into four categories: unacceptable-risk, high-risk, limited-risk and minimal-risk. Let’s have a closer look at each one.
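For teams taking stock of their own AI systems, the four tiers can be thought of as a simple classification exercise. The sketch below is purely illustrative and not part of the Act itself: the tier names mirror the categories above, while the example inventory and the classify helper are hypothetical, invented here for demonstration.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers described in the proposed EU AI Act."""
        UNACCEPTABLE = "unacceptable"  # e.g. social credit scoring, mass surveillance
        HIGH = "high"                  # e.g. recruitment screening, medical devices
        LIMITED = "limited"            # e.g. chatbots and deepfakes (transparency duties)
        MINIMAL = "minimal"            # e.g. personalised news feeds, music recommendations

    # Hypothetical inventory mapping use cases to tiers, for illustration only;
    # a real assessment would follow the Act's annexes and proper legal advice.
    EXAMPLE_INVENTORY = {
        "social_scoring": RiskTier.UNACCEPTABLE,
        "cv_screening": RiskTier.HIGH,
        "customer_chatbot": RiskTier.LIMITED,
        "music_recommender": RiskTier.MINIMAL,
    }

    def classify(use_case: str) -> RiskTier:
        """Look up a use case in the illustrative inventory, defaulting conservatively."""
        return EXAMPLE_INVENTORY.get(use_case, RiskTier.HIGH)

    print(classify("cv_screening").value)  # prints "high"

A real-world classification would, of course, turn on a system’s actual purpose and context rather than a lookup table, but mapping every AI use case to a tier is a useful starting point for compliance planning.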

Unacceptable-risk AI

The EU AI Act draws a firm line in the sand for unacceptable-risk AI applications such as social credit systems and mass surveillance. These uses are deemed an unacceptable threat to fundamental rights and will be banned outright, signalling a strict stance against AI’s misuse.

High-risk AI

High-risk AI, in the spotlight for its potential harm to health, safety or fundamental rights, spans sectors such as critical infrastructure and recruitment. An Italian court’s ruling against Deliveroo’s algorithm for discriminatory practices is a potent reminder of AI’s power, and of the perils of its misuse. The Act’s comprehensive risk-based approach to regulation and compliance could reshape the terrain for companies placing high-risk AI on the EU market, with implications spanning everything from medical devices to surgical robots.

Limited-risk AI

The concept of ‘truth’ becomes tricky in AI with the rise of deepfakes and chatbots. Transparency requirements will compel chatbots to disclose that users are interacting with a machine, and deepfakes to be labelled as artificially generated, affecting both AI developers and users. This could promote trust and help limit the spread of disinformation.
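As a concrete illustration of what such a transparency obligation could look like in practice, here is a minimal sketch of a customer-service bot that discloses up front that it is automated. The wording and the open_conversation function are invented for this example; the Act does not prescribe any particular phrasing.

    AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human agent."

    def open_conversation(user_name: str) -> str:
        """Start a chat session with an up-front disclosure that the agent is an AI."""
        return f"{AI_DISCLOSURE} Hello {user_name}, how can I help you today?"

    print(open_conversation("Alex"))

The design point is simply that the disclosure happens before the conversation begins, rather than being buried in terms and conditions.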

Minimal-risk AI

At the other end of the spectrum sit AI applications such as personalised news feeds, music recommendations and AI-driven features in games, which fall under minimal-risk AI. Although these are essentially free from the new regulations, keeping a watchful eye on their future development and integration remains important for safeguarding user interests.

Global implications of the EU AI Act

Beyond the EU, the proposed AI Act could have far-reaching extraterritorial effects. As the EU leads the way in AI regulation, challenges loom, from harmonising international standards to shaping global AI governance. The Act could catalyse other nations towards a more standardised approach to AI regulation.

Impact of EU AI Act categories

Navigating the diverse landscape of AI categorisations under the EU AI Act, we witness its profound potential to reshape how AI is used and developed. From unacceptable-risk social credit systems and high-risk medical applications to limited-risk deepfakes and minimal-risk gaming, the Act’s impact is poised to ripple across the globe, well beyond EU borders. It is a testament to the balance between fostering innovation and safeguarding rights.

Actions you can take next

  • Enhance your understanding of the EU AI Act by delving deeper into the legislation.
  • Explore the potential impacts on your industry by staying informed about developments in AI regulation through our newsletter.
  • Start a conversation about AI governance in your organisation, or ask us to help you get started.