Meet Thulie. She’s an adept technology officer at a listed company that operates across several continents.
Over the past few months, she’s noticed a rapid increase in online conversations and news coverage about generative AI tools like OpenAI’s ChatGPT and Google’s Bard. It seems like everyone is talking about the technology, and her colleagues have started using it too.
The risks of generative AI
At first, Thulie was excited. She’d heard about the incredible things that generative AI could do, like creating new forms of entertainment, automating processes, and even advancing healthcare. But as she delved deeper into the topic, she realised that there were also a lot of risks involved.
She saw that some people were using generative AI to create deepfake videos that could deceive viewers, or to generate false news articles that could misinform the public. She also read about cases where AI-powered facial recognition software was biased against certain groups of people.
The need for an AI policy
These risks worried Thulie, and she began to think about the potential impact of generative AI on her own company. What if her colleagues inadvertently created unethical, biased, or inaccurate content? What if they unknowingly infringed on someone’s intellectual property rights or defamed someone?
Thulie realised that her company needed a policy to set standards and guidelines for using generative AI. And so, she embarked on a journey to create a policy on generative AI to ensure her organisation was using the technology safely and ethically.
Challenges along her quest
She faced many challenges along the way, from navigating legal and technical jargon to convincing her colleagues that a policy was necessary. But she persisted.
Developing the policy
She researched the best practices for creating a policy on generative AI and consulted with experts in the field. She developed guidelines for the appropriate use of generative AI, identified potential risks, and established procedures for managing those risks. And she ensured everyone in the organisation was trained on the technology and its ethical implications.
Through her adventure, Thulie learned that emerging technologies like generative AI could be exciting and daunting. It’s like walking through the streets of Johannesburg—full of possibility but also danger.
The impact of the policy
But with the right policies and guidelines, organisations can navigate this new frontier and reap the benefits of these emerging technologies. Thulie’s policy helped her organisation use generative AI effectively and ethically: they used the technology to create new forms of entertainment, automate processes, and advance healthcare while avoiding its potential risks.
Thulie’s tale reminds us that emerging technologies like generative AI can be powerful tools, but only if we use them responsibly. With the right policies and guidelines, we can embrace the opportunities generative AI presents while also managing its risks.
Actions you can take next
- Set standards for using generative AI in your organisation by asking us to draft a policy on generative AI.
- Access our free generative AI acceptable use policy template.
- Protect your organisation from the legal risks of generative AI by joining our workshop on using generative AI lawfully.
- Manage the data protection risks of your AI projects by joining our data protection programme.