The White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights, the first of its kind. It also provides a handbook to accompany the principles for anyone who would like to incorporate them into their policies and practice. The blueprint is a guide for governments and the private sector to put the principles into practice. You should use this framework whenever you use automated systems that impact the public’s rights, opportunities or access to critical needs.
What is in the AI Bill of Rights?
The principles in the AI Bill of Rights aim to prevent the use of technology, data and automated systems in ways that threaten people’s rights. Such systems can limit some people’s opportunities and access to resources and services, and we have seen that automated systems can cause real harm; the aim is to prevent that harm.
Technology, data and automated systems are not only potentially harmful; they have also brought many benefits. Technological tools have progressed significantly, but that progress should not come at the cost of civil rights and democratic values.
To balance technological progress against these risks, the White House Office of Science and Technology Policy identified five principles to guide the design, use and deployment of automated systems and to protect the public as the use of artificial intelligence grows.
What are the five principles in the AI Bill of Rights?
- You should be protected from unsafe or ineffective systems.
- You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Who should care about the AI Bill of Rights?
- Designers, developers and deployers of automated systems.
- Organisations that use automated systems to provide services, even if they don’t design, develop or deploy the systems themselves.
- Procurement teams looking for automated systems to use in their organisations.
- Governments or legislators who are considering creating AI regulations.
Applying the AI Bill of Rights Blueprint
The handbook accompanies the principles. It sets out:
- why the principles are important;
- the practical steps you can take to implement each principle;
- the reporting expectations; and
- practical examples of each principle.
Safe and Effective Systems
- Diverse communities, stakeholders and domain experts should give input at various stages of system development.
- Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring (a minimal monitoring check is sketched after the example below).
- Don’t deploy or use a system if it is unsafe or its use is beyond its intended purpose.
- You shouldn’t design a system with the intent or reasonably foreseeable possibility of endangering people’s safety or the safety of their community.
A practical example:
The National Science Foundation funds research to foster the development of automated systems that adhere to and advance safety, security and effectiveness. It supports research that directly addresses the principles in the AI Bill of Rights.
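To make the ongoing-monitoring step concrete, here is a minimal sketch in Python of a post-deployment accuracy check. The threshold, data shapes and alert behaviour are illustrative assumptions, not requirements from the Blueprint.

```python
from dataclasses import dataclass

# Hypothetical acceptance threshold agreed on during pre-deployment testing.
MIN_ACCURACY = 0.90

@dataclass
class MonitoringReport:
    accuracy: float
    passed: bool

def monitor_batch(predictions: list[int], outcomes: list[int]) -> MonitoringReport:
    """Compare a batch of live predictions against observed outcomes."""
    if not predictions or len(predictions) != len(outcomes):
        raise ValueError("predictions and outcomes must be non-empty and aligned")
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)
    # If live accuracy drops below the pre-deployment baseline, flag the
    # system for human review instead of letting it keep operating silently.
    return MonitoringReport(accuracy=accuracy, passed=accuracy >= MIN_ACCURACY)

report = monitor_batch(predictions=[1, 0, 1, 1], outcomes=[1, 0, 0, 1])
if not report.passed:
    print(f"ALERT: accuracy {report.accuracy:.2f} is below {MIN_ACCURACY}; escalate for review")
```

In practice the baseline would come out of pre-deployment testing, and a failed check would route to the human-review process described under the last principle.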
Algorithmic Discrimination Protections
- Take continuous proactive measures to protect individuals and communities from algorithmic discrimination.
- Use and design systems in an equitable way. To achieve this, you must include proactive equity assessments as part of your system design.
- Use representative data and not proxies for demographic features.
- Ensure accessibility for people with disabilities in the design and development stage of the system.
- Conduct pre-deployment and ongoing disparity testing and mitigation (a disparity-testing sketch follows the example below).
- Establish clear organisational oversight of the system.
- Reports must be in plain language and made public wherever possible to confirm that the system complies with algorithmic discrimination protections.
A practical example:
Large employers have developed a set of best practices, the Algorithmic Bias Safeguards for the Workforce, to scrutinise the data and models they use for hiring. Companies can use its questionnaire when they rely on software to evaluate potential employees.
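As a hedged illustration of what pre-deployment disparity testing might look like, the sketch below compares selection rates across groups and applies the four-fifths rule of thumb. The rule, group labels and data are illustrative; the Blueprint does not prescribe a specific test.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of positive outcomes per demographic group."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += selected
    return {group: positives[group] / totals[group] for group in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the most-selected group."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Illustrative (group, was_selected) pairs, e.g. from a hiring screen.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

for group, ratio in impact_ratios(selection_rates(decisions)).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

A flagged group would not prove discrimination on its own, but it is the trigger for the mitigation and oversight steps listed above.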
Data Privacy
- You must design your automated systems with privacy in mind and include privacy protections by default.
- You must collect only the data that is strictly necessary, and use it only for the stated reason it was collected (see the data-minimisation sketch after the example below).
- Obtain consent from data subjects where necessary and respect their decision about the collection, use, access and transfer of their data.
- Do not design systems that obscure data subjects’ choices or create defaults that intrude on their privacy.
- Consent requests must be short, in plain language, and give data subjects choices.
- Surveillance technology must include pre-deployment assessments that can check for potential harm and limit the invasion of privacy.
- Don’t use surveillance systems that affect people’s rights, opportunities or access, especially in education, work and housing environments.
A practical example:
To ensure compliance with data privacy regulations, a healthcare app is developed with privacy-centric features. The app only collects essential patient data for diagnosis, seeks explicit consent for data usage, and presents concise consent requests in plain language. The app includes an AI-powered tool that assesses potential risks before deploying new features, ensuring sensitive medical information remains private and doesn’t hinder patients’ access to quality healthcare services.
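Here is a minimal sketch of data minimisation and privacy-protective defaults, in the spirit of the healthcare-app example above. The field names and preference flags are hypothetical.

```python
# Only the fields strictly necessary for the stated purpose (diagnosis) are kept.
ALLOWED_FIELDS = {"symptoms", "age"}

# Privacy-protective defaults: everything optional is off until the user opts in.
DEFAULT_PREFERENCES = {
    "share_with_third_parties": False,
    "marketing_emails": False,
}

def minimise(record: dict) -> dict:
    """Drop every submitted field that is not strictly necessary."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

def apply_consent(opt_ins: dict | None = None) -> dict:
    """Start from protective defaults; only explicit opt-ins change them."""
    preferences = dict(DEFAULT_PREFERENCES)
    preferences.update(opt_ins or {})
    return preferences

submitted = {"symptoms": "cough", "age": 41, "home_address": "...", "employer": "..."}
print(minimise(submitted))   # {'symptoms': 'cough', 'age': 41}
print(apply_consent())       # all sharing stays off unless the user opts in
```

The design choice is that privacy is the default state: the system never has the extra data or the sharing permission unless the user explicitly grants it.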
Notice and Explanation
- You must provide documentation and clear descriptions of your system’s function and the role automation plays in your system.
- Let users know, in a clear and accessible way, that the system is in use and what outcomes it produced (a notice sketch follows the example below).
- You must provide users with a notice about who is responsible for the system in your organisation.
- You must always keep your notices up to date and notify users of key functionality changes.
A practical example:
The private sector in Illinois notifies people when it uses their biometric information. The state enacted the Biometric Information Privacy Act to govern the way private entities collect and use people’s biometric information; it prevents the processing of biometric information without written notice to the individual.
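As a rough sketch of the notice-and-explanation steps above, the code below attaches a plain-language notice, a responsible party and a last-updated date to every automated decision. The decision rule and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class NoticedDecision:
    outcome: str
    notice: str             # plain-language explanation of the automation's role
    responsible_party: str  # who in the organisation answers for the system
    notice_updated: date    # kept current as functionality changes

def decide_with_notice(score: float) -> NoticedDecision:
    outcome = "approved" if score >= 0.5 else "declined"
    role = "at or above" if score >= 0.5 else "below"
    return NoticedDecision(
        outcome=outcome,
        notice=(f"An automated scoring system contributed to this decision. "
                f"Your application was {outcome} because your score of {score:.2f} "
                f"was {role} the 0.5 threshold."),
        responsible_party="Lending Operations team",
        notice_updated=date(2023, 1, 1),
    )

print(decide_with_notice(0.62).notice)
```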
Human Alternatives, Consideration, and Fallback
- You must allow users to opt out of automated systems and provide a human alternative instead.
- The system needs a fallback and escalation process in place for when it produces an error or fails, and it must allow individuals to appeal or contest decisions made about them (see the escalation sketch after the example below).
- The human intervention process should not place an unreasonable burden on an individual or the public.
- If you use automated systems in sensitive domains, you must adapt the system to its purpose and provide meaningful access for oversight of the system. You must include human intervention as necessary for adverse or high-risk decisions.
- You must train the people in your organisation who will intervene as an alternative to the system.
A practical example:
In the customer service industry, organisations have successfully used automated services like chatbots and AI-driven systems to help customers while allowing human intervention where escalation is necessary or when the customer requests it.
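Here is a minimal sketch of that opt-out and escalation pattern, assuming a trivial stand-in bot. The keywords and confidence threshold are illustrative.

```python
LOW_CONFIDENCE = 0.6  # below this, a person reviews instead of the bot

def bot_answer(message: str) -> tuple[str, float]:
    """Stand-in for an automated system: returns a reply and a confidence."""
    if "opening hours" in message.lower():
        return "We are open 9am-5pm, Monday to Friday.", 0.95
    return "I'm not sure I understood that.", 0.2

def handle(message: str) -> str:
    lowered = message.lower()
    # Honour an explicit request for a person before the bot answers at all.
    if "human" in lowered or "agent" in lowered:
        return "ESCALATED: routing you to a trained person now."
    reply, confidence = bot_answer(message)
    # Fall back to a person when the automated system is likely to be wrong,
    # rather than forcing the user to keep arguing with the bot.
    if confidence < LOW_CONFIDENCE:
        return "ESCALATED: a person will review this and follow up."
    return reply

print(handle("What are your opening hours?"))
print(handle("I want to speak to a human"))
print(handle("My account was closed by mistake"))
```

Note that escalation is available both on request and automatically, so the opt-out never places the burden of detecting failure on the individual.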
Resources
- The statement from the White House about the Blueprint.
- The web version of the Blueprint for an AI Bill of Rights.