The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) recently published ISO/IEC TR 24027:2021, a technical report addressing bias in AI systems, especially where AI aids humans in decision making. The document describes measurement techniques and methods for assessing bias across the AI development lifecycle, with the aim of identifying and treating bias vulnerabilities in AI systems.
Overview of the ISO/IEC TR 24027:2021
How bias manifests
‘Bias in artificial intelligence (AI) systems can manifest in different ways. AI systems that learn patterns from data can potentially reflect existing societal bias against groups. While some bias is necessary to address the AI system objectives (i.e. desired bias), there can be bias that is not intended in the objectives and thus represent unwanted bias in the AI system’.
Removing bias is challenging
‘Developing AI systems with outcomes free of unwanted bias is a challenging goal. AI system function behaviour is complex and can be difficult to understand, but the treatment of unwanted bias is possible. Many activities in the development and deployment of AI systems present opportunities for identification and treatment of unwanted bias to enable stakeholders to benefit from AI systems according to their objectives’.
Topics covered in ISO/IEC TR 24027:2021
The topics include:
- an overview of bias and fairness;
- potential sources of unwanted bias and terms to specify the nature of potential bias;
- assessing bias and fairness through metrics; and
- addressing unwanted bias through treatment strategies.
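To give a concrete sense of what assessing bias through metrics can look like in practice, the sketch below computes demographic parity difference, one commonly used fairness metric: the gap in positive-outcome rates between two groups. This is an illustrative example only; the function name and data are hypothetical and are not drawn from the technical report itself.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between groups "a" and "b".

    predictions: list of 0/1 model outputs
    groups: list of group labels ("a" or "b"), aligned with predictions
    """
    rate = {}
    for g in ("a", "b"):
        # Collect the outcomes for members of this group and take the mean
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["a"] - rate["b"])

# Hypothetical data: group "a" receives positive outcomes 75% of the
# time, group "b" only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate that both groups receive positive outcomes at the same rate; larger values may signal unwanted bias worth investigating and treating.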
Actions you can take
- Stay updated with the latest AI law news by subscribing to our newsletter.
- Determine how AI impacts your organisation by asking us for an AI risk assessment.
- Protect your commercial interests by asking us to draft your AI contracts.
- Discover more about AI by reading our AI law page.