Yesterday, the European Commission (the executive branch of the EU) published its proposed AI regulations and plan. Together, they signal the EU’s approach to regulating AI and implementing the legal framework. They also lay the foundation for AI that humans can trust.
As proud AI nerds, we’re excited to see the world’s first AI legal framework. We’re not the only ones. Many nations are studying this framework so that they can follow suit. Plus, several businesses are eagerly waiting to know how much money they will need to spend to comply with the AI rules.
We’ve read everything to help you make sense of the legal text. This post captures the essence of the world’s first AI rules and actions.
Does it apply to me?
The legal framework applies if you develop, deploy or use AI in the EU. It doesn’t matter whether you live inside or outside the EU. However, it doesn’t apply when you use AI privately and non-professionally.
AI regulation for Trustworthy AI
The Commission proposes a risk-based approach with four tiers of risk.
1. Unacceptable risk (banned AI)
The proposed AI regulation bans specific AI. The reason is that this AI goes against the EU’s values or violates fundamental rights. The banned technology includes AI that:
- Manipulates human behaviour, opinions, or decisions, causing a person to take action to their detriment.
- Exploits information or predictions about a person or class of persons to target their vulnerabilities or special circumstances, causing a person to take action to their detriment.
- Uses ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, subject to a few exceptions.
- Enables general-purpose social scoring of humans, which leads to systematic or targeted detrimental treatment of humans.
‘Social scoring’ refers to a system in which an algorithm gauges your behaviour and trustworthiness as a member of society. It would use a combination of data sources to determine your social score. These data sources include your credit score, whether you pay your traffic fines, and whether you’re caught jaywalking. If your social score were low, then the government could suspend some of your rights, e.g. prevent you from travelling or buying property.
2. High risk
AI is high-risk when it can adversely affect people’s safety or their fundamental rights. Examples of high-risk AI include AI used in:
- Critical infrastructures. They could put the lives and health of citizens at risk.
- Educational or vocational training. They may determine access to education and the course of someone’s professional life, e.g. admitting someone to a course of study, or proctoring or scoring their exams.
- Safety components of products. For example, these AI could operate the equipment in robot-assisted surgery.
- Employment, workers management and access to self-employment. Examples include recruitment and employment compatibility software.
- Essential private and public services. Consider the example of AI used for credit scoring or the granting of loans or insurance.
- Law enforcement. Law enforcement may use AI to determine whether a crime has taken place.
- Migration, asylum and border control management. AI may verify the travel or identity documents of people.
- Administration of justice and democratic processes. For example, judges may use AI to support their decisions.
If you develop, deploy, or use high-risk AI, you need to comply with mandatory requirements concerning:
- the quality of data sets used;
- technical documentation and record keeping;
- transparency and the provision of information to users;
- human oversight; and
- robustness, accuracy, and cybersecurity.
In the event of a breach, the requirements will allow national authorities to access the information needed to investigate whether your AI complies with the law.
3. Limited risk
For limited-risk AI, you need to comply with specific transparency requirements. For example, your AI users should know that they are interacting with AI.
4. Minimal risk
AI that is not banned, high-risk, or limited risk falls within this category. You can develop and use minimal risk AI subject to your national laws. You may also choose to apply the requirements for trustworthy AI and follow the voluntary codes of conduct.
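For readers who think in code, the four tiers and the obligations this post describes for each can be summarised in a minimal sketch. The names and structure below are our own illustration, not anything drawn from the legal text itself:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers in the Commission's proposed AI regulation."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical mapping from tier to obligations, paraphrasing this post.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "quality of data sets used",
        "technical documentation and record keeping",
        "transparency and the provision of information to users",
        "human oversight",
        "robustness, accuracy, and cybersecurity",
    ],
    RiskTier.LIMITED: ["transparency (users must know they interact with AI)"],
    RiskTier.MINIMAL: ["national laws; voluntary codes of conduct"],
}


def obligations_for(tier: RiskTier) -> list:
    """Return the obligations associated with a given risk tier."""
    return OBLIGATIONS[tier]
```

This is only a mnemonic for the structure of the proposal; whether a given system is high-risk depends on the use cases listed above, not on a simple lookup.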
Machinery regulation
Machinery products refer to a wide variety of consumer and professional products. Examples include robots, lawnmowers, and 3D printers.
The Machinery regulation promotes the safety of users and consumers while encouraging innovation. While the AI regulation addresses the safety risks of AI, the Machinery regulation ensures the safe integration of AI into machinery. Where both the AI regulation and the Machinery regulation require an organisation to complete a conformity assessment, the organisation will only need to complete one assessment.
Plus, the Machinery regulation responds to market needs by:
- creating legal certainty, and
- reducing administrative burden and costs for organisations.
Coordinated plan
The Coordinated plan deals with how the EU will administer the regulations. It also suggests a vision to accelerate AI investments. Further, it provides the foundation for harmonised national AI strategies for EU member states.
Next steps
The European Parliament and the EU’s member states would need to adopt the Commission’s proposed AI and Machinery regulations through their normal law-making process. If they do, the rules will apply across the EU. Plus, the Commission would collaborate with the EU’s member states to implement the actions of the Coordinated Plan.