Why should you care about classifying AI systems under the EU AI Act? If data is the new oil, then AI systems are its refineries. And just as refineries bring pollution, automated data processing through AI systems carries inherent risks we must guard against. As AI becomes an increasingly important part of our daily lives, the European Union has responded by proposing the EU Artificial Intelligence (AI) Act. Its purpose? To create a human-centric, trustworthy framework for AI development and adoption that respects EU values and fundamental rights while addressing AI-related risks and challenges.
AI system classification: why it matters
The European Artificial Intelligence Act classifies AI systems into four categories. The criteria for classification include the system’s intended purpose and potential impact on health, safety, or fundamental rights.
- Unacceptable-risk AI systems are prohibited because they produce discriminatory outcomes and violate the rights to dignity, non-discrimination, equality, and justice. Examples include social scoring systems.
- High-risk AI systems are subject to legal requirements and classified into two subcategories: (i) those that are safety components of a product or the product itself under EU harmonisation legislation and which require a third-party conformity assessment; and (ii) those listed in specific sectors, including critical infrastructure management, law enforcement, and migration management.
- Limited-risk AI systems require transparency obligations. Examples include chatbots and deepfakes.
- Minimal or low-risk AI systems need only comply with existing legislation. Examples include spam filters and video games.
How the classification of AI systems under the EU AI Act affects businesses
The European Artificial Intelligence Act will create a single AI market, enhancing legal certainty and trust. However, the cost of compliance is debated: estimates range as high as €31 billion over five years, though some argue the true cost is far lower once the benefits of regulation are taken into account. Penalties for noncompliance can reach €30 million or 6% of a company's global annual turnover. Compliance will be considerably more expensive for those developing or deploying high-risk AI systems than for those working with limited- or low-risk systems.
The European Artificial Intelligence Act aims to ensure responsible AI development and use, but compliance is critical. Stay ahead of the curve by letting us help you navigate the ever-evolving world of AI regulation.
Actions you can take next
- Ensure compliance with the European Artificial Intelligence Act by seeking legal advice and guidance. We have a page all about how we can help you with AI law.
- Evaluate your AI systems for risk level, and take appropriate measures according to the classification.
- Keep up-to-date with the ongoing discussions and potential amendments to the Act by subscribing to our newsletter.