If you’re considering how to govern AI, why not govern AI like a human?

AI is not like traditional tools. Unlike a hammer or a spreadsheet, AI tools exhibit evolving behaviour, learning and adapting as they interact with people and data. Moreover, outputs generated by AI are non-deterministic: unlike traditional software, the same input will not always produce the same output. This difference means AI cannot be governed through a static technical lens alone. Instead, organisations must approach AI governance the way they approach human governance: by managing behaviour, setting boundaries, and enforcing accountability.
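To make the non-determinism concrete, here is a minimal, self-contained sketch (a toy sampler, not any real model or API): because the output is sampled from a probability distribution, the same prompt can yield different completions across runs, much as a language model does when sampling with a non-zero temperature.

    import random

    # Toy "model": for the same prompt, the next word is sampled from a
    # fixed probability distribution rather than chosen deterministically.
    NEXT_WORDS = {
        "The system is": [("secure", 0.5), ("unavailable", 0.3), ("degraded", 0.2)],
    }

    def generate(prompt: str) -> str:
        """Sample a completion; identical prompts may produce different outputs."""
        words, weights = zip(*NEXT_WORDS[prompt])
        return f"{prompt} {random.choices(words, weights=weights)[0]}"

    # The same input, run twice, need not give the same output.
    print(generate("The system is"))
    print(generate("The system is"))

Traditional software governance assumes the second call matches the first; AI governance cannot.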

AI governance is not just a technical necessity; it’s a business imperative for organisations aiming to scale responsibly.

Rules for tools and people

The relationship between human and tool is one of clear control. A tool, by definition, is an object used to increase human efficiency, and it only ever acts under intentional human guidance. The human decides the application, the limits, and the timing. This unwavering control creates a clear distinction in capability and, consequently, in responsibility. Historically, strategic organisational governance has reflected this separation with straightforward structures.

  • Tools are governed through technical standards and predictability. Rules and regulations define their physical limits, for example, the required size and quality of a bolt in a car battery. Since tool failures are generally predictable, oversight is a matter of mechanical inspection and compliance.
  • People are governed by norms, access rules, and accountability. Human access to sensitive data, for instance, is granted on a need-to-know basis and revoked when no longer required. When errors occur, responsibility can be traced through clearly defined accountability mechanisms within organisational governance structures (a minimal sketch of the need-to-know pattern follows this list).
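To illustrate that need-to-know pattern, here is a minimal sketch (all names hypothetical): access is granted with an expiry, checked on use, and revoked when no longer required. The same pattern applies directly when the "subject" is an AI tool rather than a person.

    from datetime import date

    class AccessRegister:
        """Hypothetical need-to-know register: grants expire and can be revoked."""

        def __init__(self):
            self._grants = {}  # (subject, resource) -> expiry date

        def grant(self, subject: str, resource: str, expires: date) -> None:
            self._grants[(subject, resource)] = expires

        def revoke(self, subject: str, resource: str) -> None:
            self._grants.pop((subject, resource), None)

        def has_access(self, subject: str, resource: str, today: date) -> bool:
            expiry = self._grants.get((subject, resource))
            return expiry is not None and today <= expiry

    register = AccessRegister()
    register.grant("analyst-7", "customer_pii", expires=date(2025, 6, 30))
    print(register.has_access("analyst-7", "customer_pii", date(2025, 6, 1)))  # True
    print(register.has_access("analyst-7", "customer_pii", date(2025, 7, 1)))  # False: grant lapsed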

AI sits at the cusp of the two. So when thinking about how to govern AI, it is important to understand that AI requires a blended approach, one that encodes technical standards as well as behavioural norms. This is because its outputs shift not only by design, but also as it learns and adapts. Much like people, AI can find unexpected ways to achieve its goals, sometimes with unintended consequences. But unlike people, AI tools interact with both their environment and other digital systems, requiring robust cybersecurity and information security controls to guard against malfunctions, errors, and misuse such as data poisoning.

How to govern AI behaviour

To mitigate risks and safeguard business assets, organisations must govern AI like a human by treating it as a dynamic agent. This means going beyond human-in-the-loop or risk-based frameworks: such categorical frameworks provide implementation clarity but fail to account for the hybrid, evolving nature of AI tools in practice. Governing AI like a human means managing AI behaviour through direct and frequent oversight and monitoring, limiting the autonomy and access of AI tools by setting boundaries, and establishing clear accountability for decisions, so that every AI decision can be traced to a specific human individual, just as managers are responsible for the work of their teams.
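One way to make that line of accountability concrete, sketched below with entirely hypothetical names, is to refuse to record any consequential AI decision without a named accountable human, just as every team has a responsible manager.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIDecisionRecord:
        """Hypothetical audit record tying an AI output to an accountable human."""
        tool: str
        decision: str
        accountable_owner: str  # a named individual, as a manager is for a team
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    audit_log: list[AIDecisionRecord] = []

    def record_decision(tool: str, decision: str, owner: str) -> AIDecisionRecord:
        """Reject any decision that lacks a named accountable owner."""
        if not owner:
            raise ValueError("Every AI decision needs a named accountable owner.")
        record = AIDecisionRecord(tool, decision, owner)
        audit_log.append(record)
        return record

    record_decision("credit-scoring-model", "application_declined", "jane.doe")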

The Human-AI Governance Framework specifically highlights governance dimensions such as autonomy, accountability, and decision authority. The framework shows how AI may be useful in some instances but should not be the decision maker in all, because full automation would compromise accountability and fail to manage the complex, evolving risks associated with autonomous systems. Just as an employee's decisions pass through oversight procedures and their powers are limited, an AI tool's decision authority should be constrained, and governance structures for AI should evolve as its capabilities grow.

Practical steps on how to govern AI

Trustworthy AI doesn’t require reinventing the wheel. It’s about embedding practical governance into existing structures, much like onboarding and supervising employees:

  • Structured onboarding for AI systems, including ethical guidelines and purpose limitations.
  • Value encoding to ensure AI reflects company policies and societal norms.
  • Cross-functional oversight, sometimes through Chief AI Officers or governance committees.
  • Monitoring and auditing to catch misuse, bias, or failures early (a minimal sketch follows this list).
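As an illustration of the monitoring and auditing step, here is a minimal sketch (the thresholds and checks are hypothetical): outputs that fall outside agreed boundaries are flagged for human review rather than silently accepted.

    # Hypothetical purpose limits and review threshold for an AI tool's outputs.
    BLOCKED_TERMS = {"guaranteed returns", "medical diagnosis"}
    CONFIDENCE_FLOOR = 0.7

    def review_output(text: str, confidence: float) -> list[str]:
        """Return the reasons, if any, this output should be escalated to a human."""
        reasons = []
        if confidence < CONFIDENCE_FLOOR:
            reasons.append(f"low confidence ({confidence:.2f})")
        for term in BLOCKED_TERMS:
            if term in text.lower():
                reasons.append(f"out-of-scope content: '{term}'")
        return reasons

    flags = review_output("We offer guaranteed returns on this product.", 0.65)
    if flags:
        print("Escalate to human reviewer:", "; ".join(flags))

In practice the checks would come from the ethical guidelines and purpose limitations set during onboarding, but the principle is the same: supervise the output, not just the design.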

AI governance should avoid overly complex structures that overwhelm rather than enable. This requires that your existing governance framework is functional and that robust technical and information security controls are in place. Where it isn't, it is worth reviewing the framework and presenting it in plain language with clear, logical decision-making flows, ensuring alignment with best practices in data protection and cybersecurity. Such an approach complements the efficiencies introduced by AI tools and allows for governance that promotes trust in AI outputs, transparency, and accountability, enabling a real return on investment in these tools.

Why govern AI like a human

As AI systems integrate more deeply into operations, governance becomes critical. Effective strategic AI governance goes beyond technical controls: it requires behavioural oversight, ethical boundaries, and robust accountability structures. By governing AI like a human, organisations can reduce operational and reputational risk by verifying the reliability of outputs, unlock AI-driven innovation that delivers a return on investment, and ensure compliance with local and global regulations.

Contact us to help you govern AI like a human.