Last week, a draft of the EU’s AI regulation appeared on the web. It signals how the EU will approach regulating AI in the future. At first glance, it’s a dense read: 81 pages, 92 recitals, 69 articles, and 8 annexes. Naturally, Twitter went wild, declaring it too long to read.

You’re probably wondering if it will affect you. Hell, the whole world’s on the edge of its seat. More regulation usually means spending more money on compliance efforts. Luckily, we have the tea. We’ve read the regulation and captured its gist in this post.

What’s the regulation about?

The draft AI regulation applies to the development, deployment, and use of AI in the EU, or wherever AI affects people in the EU.

The EU’s showing its commitment to protecting its people from harmful AI. But it’s also encouraging innovation where AI will benefit humanity. It’s doing so by setting up an AI regulatory sandbox (article 44): a controlled environment within which you can develop and test AI.

Does the draft AI regulation apply to me?

The proposed law will apply to:

  • AI providers who deploy AI in the EU;
  • AI users in the EU;
  • AI providers and users outside the EU whose AI affects people in the EU;
  • AI importers and distributors; and
  • EU institutions, offices, bodies, and agencies.

Strikingly, the law doesn’t apply to “AI systems exclusively used for the operations of weapons or other military systems” (article 2(4)).

The law isn’t clear on whether only EU member states can use AI that operates weapons or whether private individuals and organisations can do the same.

What AI is banned?

The draft AI regulation bans specific AI because it goes against the EU’s values or violates fundamental rights. The banned technology includes AI that:

  1. Manipulates human behaviour, opinions, or decisions, causing a person to take action to their detriment.
  2. Exploits information or predictions about a person or class of persons to target their vulnerabilities or special circumstances resulting in a person taking action to their detriment.
  3. Allows bulk surveillance.
  4. Enables general-purpose social scoring of humans, which leads to systematic or targeted detrimental treatment of humans.

‘Social scoring’ refers to the practice of an algorithm gauging your behaviour and trustworthiness as a member of society. It would use a combination of data sources to determine your social score, such as your credit score, whether you pay your traffic fines, and whether you’re caught jaywalking. If your social score were low, the government could suspend some of your rights, e.g. prevent you from travelling or buying property.
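To make this concrete, here’s a minimal, purely hypothetical Python sketch of how such a scoring algorithm might combine data sources. Every data source, weight, and threshold below is invented for illustration; the draft regulation prescribes no such formula.

    # Purely hypothetical illustration of general-purpose social scoring.
    # The data sources, weights, and threshold are invented for this
    # example; the draft regulation defines no such formula.

    def social_score(credit_score: float, unpaid_fines: int,
                     jaywalking_incidents: int) -> float:
        """Combine several data sources into one 'trustworthiness' score."""
        score = credit_score / 850              # normalise credit score to 0..1
        score -= 0.05 * unpaid_fines            # penalise unpaid traffic fines
        score -= 0.10 * jaywalking_incidents    # penalise jaywalking
        return max(score, 0.0)

    # A low score triggering systematic detrimental treatment (e.g. a
    # travel ban) is exactly what the draft regulation prohibits.
    citizen = social_score(credit_score=600, unpaid_fines=2,
                           jaywalking_incidents=1)
    if citizen < 0.6:
        print(f"Score {citizen:.2f}: travel and property purchases restricted")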

However, the practices in points 1 to 3 above will be allowed if an EU member state law authorises them. But the member state must enact that law to safeguard public security, and it must include appropriate safeguards for fundamental rights.

A risk-based approach

The draft AI regulation adopts a risk-based approach. It speaks about ‘high-risk’ and ‘other’ AI.

High-risk AI

High-risk AI means AI that forms the safety components of products, or AI listed in Annex II. AI listed in Annex II must undergo either a third-party conformity assessment or a self-assessment of conformity. In brief, conformity assessments set out standards with which AI must comply before you may develop, deploy, or use it.

  • Third-party conformity assessment. This assessment applies to AI for remote biometric identification of persons in public spaces and public infrastructure networks (e.g. electricity grids, water distribution).
  • Self-assessment of conformity. This assessment applies to AI for emergency response services, processing applications to educational institutions, proctoring exams, recruitment, personnel promotion and termination, creditworthiness, evaluation of public benefits (e.g. social grants), lie-detection, crime prediction, processing asylum and visa applications, and assisting judges at courts.

Plus, article 8 contains comprehensive standards that input data (the data that trains AI) needs to meet. The main aim is to avoid algorithmic bias, i.e. algorithms that unfairly disadvantage, privilege, or discriminate against people. Another objective is to prevent AI misuse.

Many of these standards echo international information security standards. Also, they require humans affected by AI to be a central part of the AI lifecycle.
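To give a feel for what auditing for algorithmic bias can involve, here’s a minimal Python sketch of one common check, demographic parity (comparing outcome rates across groups). The draft regulation doesn’t mandate this, or any, specific metric; the example data are ours.

    # A minimal sketch of one common bias check: demographic parity.
    # The draft regulation doesn't prescribe this metric; it merely
    # illustrates what auditing data and outcomes can involve.

    from collections import defaultdict

    def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
        """Approval rate per group from (group, approved) pairs."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += ok
        return {g: approved[g] / totals[g] for g in totals}

    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    # A large gap between groups is a red flag for unfair disadvantage.
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"parity gap: {gap:.2f}")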

Notably, there will be a database in which you would need to register your high-risk AI.

Other AI

When it comes to the less risky AI, you’ll need to meet ‘transparency obligations’. In particular, humans need to know when they’re:

  • interacting with AI;
  • exposed to an emotion-recognition system or categorisation system that uses their personal data; and
  • interacting with deepfakes (synthetic media in which a person in an existing image or video is replaced with someone else’s likeness).

Will I get in trouble for non-compliance?

Yes! You could be fined up to 4% of your global annual turnover or €20 million, whichever amount is higher.
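To put that arithmetic plainly, here’s a quick Python sketch of the fine cap; the turnover figures are made up for illustration.

    # The cap: 4% of global annual turnover or EUR 20 million,
    # whichever amount is higher.

    def max_fine(global_annual_turnover_eur: float) -> float:
        return max(0.04 * global_annual_turnover_eur, 20_000_000)

    # EUR 1 billion turnover: 4% (EUR 40m) exceeds the EUR 20m floor.
    print(max_fine(1_000_000_000))  # 40000000.0
    # EUR 10 million turnover: the EUR 20m floor applies instead.
    print(max_fine(10_000_000))     # 20000000.0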

Doing the following will attract a fine:

  • developing, deploying, or using banned AI;
  • supplying incorrect, incomplete, or false information to the relevant bodies; or
  • not cooperating with the relevant authorities.

Now what?

Take a deep breath and log in to your Headspace app.

We recognise that the draft AI regulation raises many questions. But it also suggests something exciting—the EU’s taking a proactive approach to creating a future with AI we can trust.

How can you prepare for a future with regulated AI? We suggest you stay updated with the latest AI regulation news. Do so by subscribing to our newsletter.

Download the draft AI regulation