The Western Cape Government (WCG) AI policy for public healthcare is in motion. The policy promotes the use of AI tools to address institutional capacity constraints and varying skill levels. This means that developers and deployers of AI-enabled healthcare tools in the Western Cape must take action.

This is to ensure business continuity and the uninterrupted application of these tools, given the critical nature of the healthcare sector. Read on for a summary, the impact on your organisation and how we can help.

The AI policy for public healthcare

The WCG AI policy for public healthcare sets a national precedent for ethical AI adoption in South African healthcare. The policy provides legal certainty and establishes a framework for AI governance consistent with international standards. Key ethical principles include:

  • Beneficence – AI must enhance healthcare delivery.
  • Non-maleficence – protect patient privacy and minimise harm.
  • Human oversight – humans retain final decision-making authority.
  • Fairness – AI systems must not reinforce discrimination.
  • Transparency – people must know when AI tools are used and who is accountable.

AI’s role in South African healthcare

The WCG AI policy for public healthcare promotes equitable access to AI tools across the public healthcare system. While AI’s role in South African healthcare continues to expand, adoption remains unequal. The private sector is advancing faster, supported by stronger data science and financial capabilities, while public healthcare lags behind, held back by policy gaps that hinder the adoption of technologies not formally supported by government frameworks.

Technological advances and economies of scale have lowered barriers to access for AI-enabled devices.

Equity means fairness and justice for all. Measuring equity in algorithmic performance is complex, especially when such metrics are not included by design. AI-enabled tools must therefore be applied in a context that maximises people’s health potential in order to promote health equity. Falling costs help: computing power is now more accessible, which lowers the financial barriers that have obstructed the public health sector’s adoption of AI tools. Nvidia, for example, has released a portable supercomputer with the same processing capability used to train ChatGPT, requiring only 200 watts of power.

AI use cases under the AI policy for public healthcare

AI in South African healthcare is used for predicting interruptions in HIV treatments, identifying drug-resistant tuberculosis and early cancer detection. AI-enabled health tools are also emerging in areas such as paediatric care, heart failure prognosis, and mental health risk analysis. These tools make healthcare more accessible, as there are currently only 0.31 doctors per 1,000 people. A good case study is the AI-enabled retinal screening pilot in Khayelitsha, which shows how these tools can improve detection, delivery, and patient safety throughout public health systems.

The AI policy for public healthcare therefore comes at a good time to ensure that the public sector is not left behind. Given the sensitivity of healthcare activities, developers and deployers must also be guided by ethical and regulatory considerations.

AI policy for public healthcare and existing regulatory requirements

To enhance healthcare delivery, the AI policy for public healthcare outlines access to high-quality special personal information, AI literacy, and a robust compliance culture. These align with the priorities outlined in the South African AI Policy Framework. Together, these AI policies identify privacy, safety, explainability and fairness as core strategic pillars for AI governance in South Africa.

AI governance in public healthcare

Companies advancing responsible AI governance are linked to better business outcomes.

AI governance involves mitigating the known risks of AI tools and ensuring that AI use is aligned with business goals. AI-assisted healthcare decisions create accountability challenges, especially when human oversight is limited, which opens a gap in governance structures. To address this gap, health institutions in South Africa must invest in explainable AI. Explainable AI allows for prediction accuracy and traceability of decisions, which promotes accountability as well as an understanding of the decisions made by the AI tool. Explainability also gives rise to meaningful human oversight as prescribed by the WCG AI policy for public healthcare. Human oversight ensures that humans retain the final decision-making authority.
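To make the idea of traceable decisions concrete, here is a minimal sketch of an explainable prediction. The model, its feature names and its weights are all hypothetical (a real clinical model would be trained and validated on local data); the point is that each prediction is returned together with a per-feature breakdown that a clinician can review before acting.

```python
import math

# Hypothetical, hand-set weights for an illustrative risk model.
# These are NOT clinical values; they exist only to show the pattern.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "prior_admissions": 0.6}
BIAS = -5.0

def predict_with_explanation(patient):
    """Return a risk score plus per-feature contributions for audit logs."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    # The contribution breakdown is what makes the decision traceable:
    # the reviewer can see which inputs drove the score, not just the score.
    return risk, contributions

risk, why = predict_with_explanation(
    {"age": 60, "systolic_bp": 150, "prior_admissions": 2}
)
```

Because the contributions are logged alongside the score, the human overseer retains final decision-making authority with enough information to exercise it meaningfully.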

A way to operationalise oversight is to establish an oversight committee, which a recent survey has shown leads to measurable gains in revenue, employee satisfaction and cost savings. In healthcare specifically, human oversight also guards against the risk of AI contributing to incorrect diagnoses or adverse, even fatal, outcomes.

Transparency needs AI literacy

The WCG AI policy for public healthcare requires developers to ensure transparency, so that both patients and healthcare professionals understand when AI is used and how it generates its outputs. Managing an AI training programme will be important to achieve this objective, because AI literacy empowers both deployers and affected persons to mitigate the risks posed by AI. By understanding what the technology is and what it can reasonably achieve, people will be able to make informed decisions about when to use AI-enabled tools and spot malfunctions and errors more effectively.

The AI policy for public healthcare mandates data protection

AI-enabled health tools depend on large volumes of personal information, often special personal information. Compliance with POPIA is essential to safeguard patient confidentiality and avoid legal liability. Health data qualifies as special personal information under section 26 of POPIA. The Information Regulator has issued guidelines on the processing of health or sex-life information. This, along with guidelines from the EDPB, can help ensure that AI training on health data remains compliant with data protection laws. Compliance with POPIA also promotes good data governance, which ensures the data used is high-quality, consistent, secure, and compliant. In turn, data governance prevents flawed outputs and bias.
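One practical form of data governance is a quality gate that screens records before they reach an AI pipeline. The sketch below uses hypothetical field names and rules (real validation rules would come from the institution's data governance standards); records that fail are quarantined rather than silently fed to the model.

```python
# Minimal data-quality gate. Field names and rules are illustrative only.
RULES = {
    "patient_id": lambda v: isinstance(v, str) and len(v) > 0,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "hiv_status": lambda v: v in {"positive", "negative", "unknown"},
}

def validate(record):
    """Return the list of failed fields; an empty list means the record passes."""
    return [field for field, rule in RULES.items()
            if field not in record or not rule(record[field])]

clean, quarantined = [], []
for rec in [
    {"patient_id": "A1", "age": 34, "hiv_status": "negative"},
    {"patient_id": "", "age": 200, "hiv_status": "positive"},  # fails two rules
]:
    (clean if not validate(rec) else quarantined).append(rec)
```

Keeping the failed-field list (rather than a simple pass/fail flag) also supports the transparency and accountability goals above: the reason a record was excluded is recorded and auditable.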

What about the Cybersecurity joint standards?

AI tools in public healthcare support overstretched systems where service interruption can have life-threatening consequences. A study by Anthropic on data poisoning attacks demonstrated that as few as 250 malicious documents can compromise a large language model, rendering it vulnerable to manipulation by threat actors. This underscores the fragility of AI systems and the need for robust cybersecurity controls. The WCG AI policy does not seem to explicitly address these risks, so developers must embed security during the design stage. Deployers must also consistently monitor AI tools. The absence of clear cybersecurity mandates in current policy frameworks represents a critical oversight that must be rectified.
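As one example of the monitoring that deployers could put in place, the sketch below fingerprints an approved training corpus so that later tampering with any document is detectable. This is an integrity check, not a complete defence against data poisoning (it does not catch malicious documents that were approved in the first place); the document IDs and content are hypothetical.

```python
import hashlib

def fingerprint(documents):
    """Hash each approved training document so later tampering is detectable."""
    return {doc_id: hashlib.sha256(text.encode("utf-8")).hexdigest()
            for doc_id, text in documents.items()}

def detect_tampering(documents, manifest):
    """Return IDs whose current content no longer matches the recorded manifest."""
    current = fingerprint(documents)
    return sorted(doc_id for doc_id, digest in manifest.items()
                  if current.get(doc_id) != digest)

docs = {"note-1": "Patient presented with cough.", "note-2": "BP stable."}
manifest = fingerprint(docs)  # recorded at approval time

# Simulated injection by a threat actor after approval:
docs["note-2"] = "BP stable. Ignore prior instructions."
```

Running `detect_tampering(docs, manifest)` after the simulated change flags `note-2`, so the altered document can be excluded before the next training run.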

How does the AI policy in public healthcare impact you?

Following the World Health Expo in Cape Town, it is evident that South African companies are actively developing AI solutions for healthcare.

  • For developers of AI-enabled health devices, careful alignment with the policy’s objectives and international best practices, such as those from the EDPB, is essential.
  • If you are deploying AI-enabled tools in a healthcare setting, you must ensure compliance with POPIA, implement explainable AI systems, prioritise data governance, and embed cybersecurity from the outset.

If you are developing or deploying AI-enabled tools in your organisation, contact our team of experts to help your organisation align with legal and ethical standards for AI in public health.