Asking who is responsible for AI compliance is like asking who should hold the umbrella in a thunderstorm: everyone wants to stay dry, but no one wants to stick their hand up and risk the lightning. The reality, though, is that the EU Artificial Intelligence Act imposes obligations on organisations that develop or use AI, and someone has to be in charge of meeting them. This isn’t just about avoiding getting wet; it’s about making sure the organisation weathers the storm in all its unpredictability.
Who is responsible for AI compliance, according to the Act?
The EU AI Act doesn’t designate a specific person as responsible for AI compliance. Instead, it casts the net wide and holds the organisation itself accountable. Under the general flow of corporate governance, responsibility for compliance ultimately rests with the governing body. This makes sense: the governing body decides on business strategy, and AI is radically changing the strategy of whole industries.
If a business fails to comply with the Act, it may be fined. If it cannot pay that fine, other laws come into play. In most jurisdictions, this raises questions about whether the governing body met its duties, which could leave its members personally liable.
Getting governance right
But in most businesses, “the governing body is responsible” isn’t a practical answer to the question. The governing body can’t do everything itself and will have to assign the relevant role and responsibilities to someone else. It must also provide leadership and make sure that the person responsible for AI compliance has the skills and support to get the job done.
In our opinion, this person or team must have a multi-disciplinary background. This equips them to understand the various impacts of AI on your organisation, including IT, legal, compliance, privacy, and more. This approach best ensures that the business leverages AI’s benefits, pivots with its changes, and complies with the law.
Who should be responsible for AI compliance?
There are a few places to look for this multi-disciplinary golden child who will be responsible for AI compliance. In our view, they could come from:
- IT or Information Security. This works best if your organisation focuses on developing AI in-house, to make sure that your legal obligations integrate deeply with development.
- Legal, Risk & Compliance. This works best in mature organisations with a strong risk function that can afford a team dedicated to closely monitoring your activities in the AI space.
- Privacy & Data Protection. This is more practical for smaller companies that might not have room for a dedicated AI team. They will almost certainly already have a multi-disciplinary team managing matters, like privacy and information security, that overlap significantly with how your organisation will use AI.
Wherever they come from, the person responsible needs the authority of the CEO and the governing body. More than this, they also need to communicate closely with both. This is essential, considering how quickly and radically AI can change.
Actions you can take
If you’re keen to know more about who is responsible for AI compliance, please reach out to us.
- Understand the impact of AI on your business by doing an AI Impact Assessment.
- Learn more about AI by attending an event.
- Get governance for AI compliance right by joining the Michalsons Trustworthy AI programme and working through the getting governance right module.
- Brief your governing body on AI compliance by asking Michalsons to help you.