As the curtains draw on the intense negotiations over the EU AI Act, a new ‘post-agreement’ dawn breaks in artificial intelligence regulation. The Act, far from being a mere legislative text, represents a strategic move in the complex world of digital policymaking. With the agreement reached, our focus must shift from the ‘what’ to the ‘how’: how this Act will reshape the landscape of AI regulation and compliance. This article provides a comprehensive analysis of the aftermath of the EU AI Act agreement, scrutinising its implications for enforcement, compliance, and global AI governance, and addressing the challenge of balancing the need for speedy legislation with that for thorough and effective drafting.

EU AI Act post-agreement analysis: Understanding the emerging framework

Although the final text of the EU AI Act has yet to be published, the political agreement that was reached marks a significant stride towards balancing innovation and regulation. The discussions leading to this agreement produced substantial amendments to the initial proposals, especially concerning prohibited AI systems, enforcement mechanisms, and the risk-based categorisation of AI systems. These changes reflect an effort to balance the urgency of regulating emerging technologies with the need for comprehensive and effective legislation. Based on the agreement, the emerging framework suggests a nuanced approach that accounts for the diverse impacts and complexities of AI technologies. This analysis anticipates the Act’s likely structure and its critical focus areas, pending the release of the final text, expected in Spring 2024.

Immediate actions for member states and organisations

With the introduction of the EU AI Act, member states are tasked with enforcing this regulation, which will be directly applicable across the EU. This involves establishing mechanisms for regular monitoring and compliance checks of AI systems, particularly high-risk ones. Each member state must designate national supervisory authorities responsible for overseeing the implementation and adherence to the Act’s provisions.

Organisations utilising AI technologies must rigorously audit their AI systems to determine their risk category and ensure compliance with the Act’s requirements. This includes conducting conformity assessments, registering in an EU database, and undertaking continuous risk management, periodic testing, and stringent data governance for high-risk AI systems. They must also be prepared to take immediate corrective actions in case of non-compliance and report any serious incidents to the designated national authorities.

Enforcement landscape for the EU AI Act post-agreement: National and EU-level mechanisms

The enforcement of the AI Act will involve collaboration between national supervisory authorities and the proposed European Artificial Intelligence Board (EAIB). While the EAIB is not yet established, it is envisioned to play a central role in harmonising the implementation of the AI Act across the EU once it is formed. In the interim, national authorities will be responsible for the day-to-day application and enforcement of the Act. Their duties will include ensuring compliance, conducting checks, and imposing penalties where necessary.

Upon its establishment, the EAIB will be instrumental in facilitating effective enforcement and promoting best practices among member states. Its role is anticipated to mirror that of the European Data Protection Board, created under the GDPR, by providing overarching guidance and coordination to ensure a consistent approach to AI regulation across the EU. This collaborative structure between national authorities and the EAIB aims to balance fostering innovation with protecting fundamental rights and public safety in AI.

Impact on privacy and data protection professionals

The EU AI Act indirectly expands the responsibilities of privacy and data protection professionals. They must integrate AI governance within their scope, focusing on compliance for AI systems under the Act and GDPR. This includes understanding AI risk categories and aligning AI systems with data governance and protection standards. These professionals will play a key role in bridging AI technology with data privacy regulations.

Timeline and phased implementation for different risk categories

The EU AI Act’s implementation timeline varies according to the risk category of the AI system. The Act categorises AI systems into unacceptable risk, high-risk, and limited or low-risk, with each category having specific compliance timelines:

  • Unacceptable risk systems: These are prohibited outright, with the bans taking effect as early as six months after the Act enters into force.
  • High-risk systems: Subject to stringent regulations and protective measures, high-risk systems have a relatively short timeframe for compliance: organisations must implement the requirements within a 24-month transitional period after the law comes into effect. Particular requirements for general-purpose AI will apply after 12 months.
  • Limited and low-risk systems: These systems face less stringent obligations, focused mainly on transparency, and benefit from more extended compliance periods.

This tiered implementation approach allows organisations to effectively prioritise their compliance strategies, concentrating first on high-risk AI applications that require more immediate attention and resource allocation.

EU AI Act post-agreement: Global influence and setting a precedent

The EU AI Act is set to become a benchmark in global AI regulation. Its risk-based approach and comprehensive coverage of different AI applications will likely influence legislation in other jurisdictions. The Act’s emphasis on fundamental rights, transparency, and accountability in AI systems establishes a model that other countries could adopt or adapt, shaping the global discourse on ethical AI usage.

Preparing for future amendments and iterations

Given the rapid evolution of AI technology, the EU AI Act will likely undergo future amendments. Organisations must stay informed about these changes and be flexible in their compliance strategies. This involves regularly reviewing AI systems against current regulations, maintaining open channels with regulatory bodies, and investing in ongoing staff training and development in AI governance.

Actions you can take next

The EU AI Act post-agreement phase is a critical juncture, marking a significant step in shaping AI regulation within the EU and setting a global standard. Its comprehensive approach, balancing innovation with ethical and legal considerations, offers a robust framework for responsible AI deployment. You can:

  • Stay ahead in AI governance by joining our trustworthy AI programme to actively engage with the EU AI Act post-agreement requirements.
  • Ensure your organisation’s AI systems comply by engaging us to help you review them.
  • Prepare for the EU AI Act and other ongoing legislative developments to maintain a competitive edge in the evolving digital landscape by subscribing to our newsletter.
  • Read the current text of the EU AI Act.