In the ever-evolving world of Artificial Intelligence (AI), keeping pace with the expanding capabilities of general-purpose AI (GPAI) systems is crucial. To ensure responsible use and transparency, lawmakers in the European Union (EU) have established clear guidelines for general-purpose AI under the EU AI Act. In this post, we’ll break down these rules shedding light on the transparency requirements and additional obligations for high-impact GPAI models.
What is general-purpose AI?
GPAI, or Artificial General Intelligence (AGI), refers to machines equipped with the capability to perform a wide range of intellectual tasks, mirroring the cognitive abilities of humans. Unlike specialised AI designed for specific tasks, GPAI strives to emulate human-like intelligence across various activities.
How does general-purpose AI work?
Operating on the foundations of machine learning and deep learning, GPAI utilises algorithms to analyse extensive datasets, learn from the information, and make informed decisions or predictions. Let’s explore two key components in simpler terms:
-
Learning from Data:
Consider training a dog to fetch a ball. Initially, the dog is rewarded with treats for successfully retrieving the ball. Over time, the dog associates fetching the ball with receiving treats. Similarly, GPAI learns from data – for example, recognising cats by analysing numerous cat images to become proficient in identification.
-
Neural Networks:
Think of neural networks as the brains behind AI systems. Inspired by the human brain, these interconnected networks consist of nodes that process and analyse data. Each node represents a feature of the input data, working collaboratively to identify patterns and comprehend complex information.
Some practical examples:
Services like Siri, Google Assistant, and Alexa exemplify GPAI in action. These virtual assistants comprehend natural language, interpret voice commands, and execute tasks such as setting reminders, answering queries, and controlling smart home devices.
Self-driving cars leverage GPAI to navigate intricate environments. Equipped with sensors and cameras, these vehicles collect data about their surroundings. The AI interprets this information, making real-time decisions like steering, braking, and accelerating.
Services like Google Translate employ GPAI to understand and translate text across languages. Learning linguistic patterns from extensive datasets enables the AI to deliver accurate translations.
Transparency Requirements
To make sense of the vast tasks AI systems can handle and their rapid advancements, the EU Parliament set rules for GPAI systems. These rules apply to the GPAI models they are built upon. Here’s a simplified rundown:
- Technical Documentation:
- GPAI systems must create clear technical documentation as described in annex IV of the AI Act. Think of it as a manual that explains how the AI works, making it easier for everyone to understand.
- Copyright Compliance:
- GPAI systems must follow EU copyright law. Just like we respect copyrights for books and music, AI systems need to play by the rules too.
- Detailed Summaries:
- Transparency means sharing. GPAI systems should provide detailed summaries about the content they use for training as detailed in article 13 of the AI Act. This helps us know what the AI has learned and how it makes decisions.
High-Impact GPAI Models and Systemic Risks:
For those high-impact GPAI models that carry systemic risks, the EU Parliament has gone the extra mile to secure stricter obligations. Here’s a breakdown of what this means:
- Model Evaluations:
- High-impact GPAI models meeting specific criteria must undergo evaluations. It’s like a check-up to ensure the AI is doing what it’s supposed to do.
- Systemic Risk Assessment and Mitigation:
- The AI needs to assess and fix any systemic risks it might pose. This is akin to making sure there are no unintended consequences or problems caused by the AI.
- Adversarial Testing:
- Think of adversarial testing as challenging the AI. It ensures that even when faced with tricky situations, the AI responds correctly.
- Reporting Serious Incidents:
- If something goes wrong, the AI must report it to the Commission. This accountability ensures that any issues are addressed promptly.
- Cybersecurity Measures:
- Just like protecting our online accounts, GPAI models must ensure strong cybersecurity. This safeguards against unauthorised access or misuse.
- Energy Efficiency Reporting:
- Being responsible also means being mindful of resources. High-impact GPAI models must report on their energy efficiency, contributing to sustainability.
Insights
Understanding the rules governing general-purpose AI under the EU AI Act is vital for ensuring a responsible and transparent AI landscape. As we embrace the capabilities of these intelligent systems, it’s reassuring to know that policymakers are actively working to strike the right balance between innovation and accountability. Until the EU establishes unified standards, GPAIs with systemic risks can rely on codes of practice to comply with regulations. This temporary measure ensures that everyone plays by the same rules while waiting for official guidelines.