Artificial intelligence (AI) is advancing quickly, but the legal systems meant to regulate it are struggling to keep up. The European Union’s AI Act represents a significant step towards oversight, yet, much like testing a new aircraft mid-flight, it faces challenges of its own. This article explores the main regulatory challenges in AI, why legislation lags behind the technology, and what this could mean for the future of AI development and oversight.

AI innovation is moving faster than regulation

AI technology is developing so quickly that lawmakers struggle to keep up. Recent breakthroughs, such as generative AI, forced late revisions to the EU AI Act and showed how slow the legislative process can be. By the time laws are passed, they may already be outdated, leaving gaps in oversight and creating potential risks. This lag is at the core of the regulatory challenges in AI.

The EU AI Act: A case study of regulatory challenges in AI

The EU AI Act takes a risk-based approach, classifying AI systems into tiers that run from unacceptable risk (prohibited outright) and high risk down to limited and minimal risk, much as manufacturers of physical products test them against industry standards before they go to market. This differs from the principle-based approach the General Data Protection Regulation (GDPR) takes to data privacy. However, the late inclusion of rules for generative AI and the Act’s phased rollout raise doubts about whether the law can keep pace with fast-moving AI development.
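To make the tiered approach concrete, here is a minimal, purely illustrative Python sketch of how an organisation might tag its own AI systems by risk tier in an internal compliance inventory. The tier names loosely follow the Act’s broad categories, but the RiskTier enum, the example systems, and the compliance_note helper are assumptions made for this example, not part of any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """Broad risk tiers in the spirit of the EU AI Act (illustrative only)."""
    UNACCEPTABLE = "prohibited practice"           # e.g. social scoring
    HIGH = "high risk: strict obligations"         # e.g. CV screening for hiring
    LIMITED = "limited risk: transparency duties"  # e.g. customer-facing chatbot
    MINIMAL = "minimal risk: no new obligations"   # e.g. spam filter

# Hypothetical internal inventory mapping AI systems to their tiers.
AI_INVENTORY = {
    "cv-screening-model": RiskTier.HIGH,
    "support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def compliance_note(system_name: str) -> str:
    """Summarise the obligations attached to a registered system's tier."""
    tier = AI_INVENTORY[system_name]
    return f"{system_name}: {tier.name} ({tier.value})"

if __name__ == "__main__":
    for name in AI_INVENTORY:
        print(compliance_note(name))
```

The point of a sketch like this is that the tier, not the technology, determines the obligations: reclassifying a system changes its compliance burden without any change to the underlying model.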

Regulatory challenges in AI: Compliance and systemic risks

As AI evolves, so do the challenges of staying compliant. Large-scale AI models, for example, create systemic risks that extend beyond any single product. Both the EU and the US have tried to manage these risks by setting thresholds on training compute, but fixed thresholds quickly become outdated as hardware and training methods advance.
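As a rough illustration of why such limits age badly, here is a minimal Python sketch that flags a model as presumptively posing systemic risk once its cumulative training compute crosses a fixed cutoff. The 10^25 floating-point-operations figure mirrors the threshold used in the EU AI Act for general-purpose models, but the function, constant name, and example models are hypothetical and purely illustrative.

```python
# Illustrative sketch only: a fixed compute threshold for flagging models
# that presumptively pose systemic risk, in the spirit of the EU AI Act's
# approach. The constant and example figures are assumptions for the demo,
# not an official compliance tool.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute cutoff

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute meets or exceeds the cutoff."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models: as hardware and methods improve, more of them cross
# the fixed line, which is why static thresholds age quickly.
example_models = {
    "model-2023": 8e24,
    "model-2024": 2e25,
    "model-2026": 5e26,
}

for name, flops in example_models.items():
    status = "systemic risk" if presumed_systemic_risk(flops) else "below threshold"
    print(f"{name}: {flops:.0e} FLOPs -> {status}")
```

Each new generation of hardware pushes more models over a static line like this, which is exactly why compute-based limits need regular review.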

AI regulation around the world

While the EU is leading with its AI Act, other jurisdictions such as the US are developing their own regulations. This could produce a fragmented global system, making it harder for AI companies to operate internationally. Avoiding that outcome calls for harmonised international standards or collaborative frameworks that align regulations across borders. Such harmonisation would address some of the regulatory challenges in AI by providing a more consistent approach worldwide.

Actions you can take next

AI innovation is moving much faster than the regulations meant to control it. Even forward-thinking legislation like the EU AI Act requires continuous updates. More flexible regulatory approaches are needed to keep up with rapid technological changes.

To address the regulatory challenges posed by AI, policymakers must create flexible regulations that can adapt as technology evolves. Ongoing collaboration between regulators, industry leaders, and developers is essential to promote responsible innovation while ensuring alignment with societal expectations. You can:

  • Engage with policymakers to support the development of flexible regulatory frameworks.
  • Take part in public consultations to help shape future regulations that can handle the fast pace of AI development.
  • Learn more about AI regulation by reading our article on Artificial Intelligence Law.
  • Read OpenAI’s primer on the EU AI Act for a perspective from the industry.