Welcome to the law in 2025! At the beginning of each year, we gaze into the future to help you prioritise your next steps. This is the law regarding digital, data and tech in 2025. We try to predict what will happen and what it will mean for you. Our expertise guides you on where to spend your time and energy.
Many wise people have said ‘it is difficult to make predictions, especially about the future.’
We agree, but that doesn’t stop us from trying anyway. We try our best to get it right, and sometimes we don’t. Last year, the majority of our 2024 predictions were accurate. We aim to help you allocate your resources effectively to navigate the next 12 months.
Intelligence will be a central theme
Humans have always valued intelligence, arguably the one thing that differentiates us from all other creatures on Earth. In 2025, ‘intelligence’ will become a central theme rather than artificial intelligence specifically. The metric will be whether something is intelligent, not where the intelligence came from. People will value intelligence, whether artificial, human, organic, or a combination of these.
We’ll distinguish between the different kinds of intelligence.
The best intelligence will come from combining artificial and human intelligence. Currently, the technology producing artificial intelligence is not as good as human intelligence, but the combination of the two will be the killer application. A good example is using Legal AI to draft and review contracts.
In FY 2025, Microsoft is on track to invest approximately $80 billion to build AI-enabled data centres to train AI models and deploy AI and cloud-based applications worldwide. The private sector in the US is investing up to $500 billion to fund infrastructure for AI. Nobody would invest so much money in something that wouldn’t provide a good return. Maybe tech companies know more than the general public at the moment.
Intelligence training (including AI training) will be important.
There will be a tech race
It’s like the arms race, but the build-up is in technology rather than weapons.
In 2025, nations, tech companies, and startups are competing fiercely to lead in innovation. This ‘tech race’ focuses on breakthroughs in AI, quantum computing, space exploration, and clean energy. These advances promise significant benefits but also bring risks, such as ethical dilemmas, geopolitical tensions, and unequal access to technology. Governments and businesses must balance rapid innovation with fair and sustainable practices to make progress responsibly.
Innovation will trump safety
Countries worldwide, not just the United States, will prioritise innovation over safety when it comes to technology and the use of data. President Trump will follow an approach that deregulates several industries, and it will be full steam ahead. The EU will be forced to follow a more innovative approach to keep up with others (like the UK and the US). For those of us responsible for governance, risk and compliance (GRC), it will be a challenging year of trying to keep hold of the reins. Implementing effective and practical compliance programmes will be more critical than ever. So will ensuring that your staff use AI in an acceptable and responsible way.
Parliaments will not enact many laws
Globally, there has been a flurry of laws regulating AI, data protection and cybersecurity, such as the EU AI Act, the Digital Operational Resilience Act (DORA) and NIS2. Within South Africa, proposed amendments to the Consumer Protection Act (CPA) Regulations and guidance from the regulator on direct marketing are also shaping legislation. With these potential changes in mind, it is unlikely that Parliament will enact new laws on these issues, and organisations should take a cautious approach and wait until these developments become law. We are unlikely to see the finalisation of amendments within 2025.
Courts will play a bigger role in tech regulation
Courts are stepping in to address tech issues faster than lawmakers can keep up. Because technology evolves so quickly, there’s often a lag in passing new laws. In the absence of laws and regulations, people are turning to the courts for answers. While the court process is costly and lengthy, it is still generally faster than the legislative process.
The judiciary will keep the legislature and executive in check.
Political instability in many countries is making it even harder to get legislation passed, and as the legislative process slows, courts are taking a bigger role in shaping tech regulation. As organisations focus on innovation instead of compliance, especially regarding AI, the courts will decide the key issues that will shape the regulation of AI in the future. Issues like intellectual property ownership and data protection compliance are likely to be decided in the courts rather than in Parliament.
The cracks in AI will begin to show
As artificial intelligence becomes more pervasive, its flaws and limitations are becoming harder to ignore. From biased algorithms to accountability gaps and technical errors, the cracks in AI systems pose significant challenges for businesses, individuals, and regulators alike.
There will be a content crisis
In the wake of generative AI, creating content has never been easier. However, the proliferation of digital platforms has led to an overwhelming surge in content creation, resulting in challenges related to content quality, misinformation, and information overload. The content crisis is twofold.
- An information overload: between the rapid adoption of generative AI and the increasing accessibility of social media, there has been a content boom. People are being bombarded with content on all platforms at all times, making it difficult to meaningfully engage with content and filter through what is relevant and accurate.
- A rise of misinformation: there has been a shift toward the democratisation of media, leading many people away from traditional news sources and toward individual online content creators. Everyone with an internet connection can now publish content. This is great when you’re seeking different perspectives, but when it comes to the facts, you want content you can trust. Not all content creators fact-check, conduct proper research, and present accurate and fair information in the same way that one would expect those in traditional media to do.
From a legal perspective, this can create a host of issues with far-reaching implications. Knowledge is power, and incorrect information could steer us into unintended waters, especially when the content is global.
People will lose trust in social media
User trust in social media platforms is eroding due to privacy scandals, misinformation, and concerns over algorithmic bias. In 2025, these issues will significantly reshape the digital landscape, leading to stricter regulations and changing user behaviours. We are already seeing signs of this in the wake of the TikTok ban and Australia’s move to ban social media for children under 16.
There will be a polarisation of privacy regulatory regimes
Privacy regulations are taking two distinct paths in 2025. In the United States, laws focus on encouraging innovation and flexibility, adapting to the needs of specific industries. Meanwhile, the European Union prioritises strict protections like the General Data Protection Regulation (GDPR), which upholds personal rights and product safety. This divide challenges global businesses, especially when transferring data across borders. Companies must develop strategies that meet the varying demands of these contrasting approaches while maintaining trust and legal compliance.
Countries will close borders for data – not just people
Cross-border data transfers will continue to be topical in 2025 but from a different perspective. More countries will restrict data transfers, including personal data, outside their borders. Data sovereignty, which ensures that data is subject to a particular country’s laws regardless of where it resides, will become more important. Tariffs, sanctions, and trade wars will profoundly impact the flow of data, leading to an increase in data localisation measures worldwide.
People will claim damages
Data breaches are now seen as inevitable in today’s information age. Data protection laws also provide data subjects with various rights in relation to their personal data. In addition to this, most data protection laws allow affected data subjects to claim damages when their personal data is not protected in line with the law.
We will see more claims for damages related to a variety of breaches. These will include breaches involving direct attacks on data, such as hacking, but also breaches arising from the incorrect and often negligent use, handling and storage of personal data, and from general non-compliance with data protection laws.
Everyone needs to be AI literate
All organisations will work hard to improve the AI literacy of their staff through AI training. First, organisations will want a return on the investment that they have made in AI tools. Secondly, the EU AI Act requires all staff to be AI literate. This is easier said than done, and many organisations will struggle to do it effectively. AI training needs to be practical, hands-on and relevant to the AI tools staff actually use in their work. Many organisations will fail to realise that it is largely a change management process. People will resist changing how they do things to incorporate AI.
AI will change your organisation’s strategy
The impact of AI adoption goes beyond the realm of compliance. It will result in both opportunities and risks that may fundamentally change the approach of the business. Strategy is the responsibility of the governing bodies (those that fulfil their duties properly). Governing bodies will need to embrace the changes that AI will bring and adapt accordingly.
They will need to determine AI strategy and, once they have decided how they will use AI, address three important questions:
- Is the technology available adequate to adopt AI?
- Is the data required available and does it have the integrity necessary to achieve correct output?
- Does the business have the skills to adopt AI?
If not, how will the identified deficiencies be rectified?
Some organisations will leapfrog others by leveraging Generative AI
Organisations that strategically embrace Generative AI will gain a significant competitive edge. By leveraging AI to optimise workflows, personalise customer experiences, and drive innovation, these companies will outpace those slower to adopt the technology. Industries like marketing, legal, and customer service will see transformative efficiencies, while early adopters will redefine standards in their sectors. However, organisations must balance this advantage with ethical considerations and robust governance to ensure responsible AI use.
There will be many commercial opportunities
With deregulation and a pro-innovation approach, there will be many commercial opportunities in 2025, both to commercialise data and to leverage AI. Many organisations will make a lot of money doing both of these things. The focus will be on making money rather than on ensuring that these opportunities are pursued lawfully. The intelligent organisation will both make money and do it lawfully, because it knows that trust is crucial to sustainability.
There will be sector-specific AI regulation
AI regulations will become industry-specific in 2025. Healthcare faces strict rules to protect patient privacy and ensure accurate algorithms. Financial services must prevent bias in lending and strengthen fraud detection. The transportation sector deals with safety standards for AI in areas like autonomous vehicles. Governments are adapting regulations to the unique challenges of each field. To keep up, businesses must deeply understand and follow the compliance rules for their industries.
AI compliance risks: AI in finance, healthcare, and insurance faces scrutiny over bias, transparency, and intellectual property rights.
Authorities will struggle to enforce data laws
Despite their best efforts, the outcomes of enforcement action taken by authorities in 2025 will be limited. Organisations against whom authorities take enforcement action will do everything in their power to avoid or delay the negative consequences. Large organisations will have the financial muscle to appeal decisions in the courts. In the South African context, the law simply does not give the Information Regulator the teeth it needs. With legal proceedings in court being lengthy and costly, the Information Regulator will struggle to get organisations or infringers to actually pay fines.
AI will revolutionise legal and compliance
Artificial intelligence is changing the way legal and compliance teams operate in 2025. AI tools handle repetitive tasks like contract analysis and regulatory monitoring, freeing professionals to focus on strategy. Compliance teams use AI to track risks and stay updated on changing laws. This improves efficiency and accuracy but raises concerns about bias and accountability. Organisations need clear ethical guidelines and robust oversight to use AI responsibly.
Cybersecurity will become even more critical in the face of AI-powered cyber threats
Last year saw an increase in the integration of AI in many facets of life. This trend will continue, and the adoption and use of AI will open up new attack vectors for bad actors. ‘Safety’ is a critical AI principle that will need to be addressed by cybersecurity specialists, who will need to ensure that AI models, and the data used to train them, are appropriately safeguarded. Where personal data is used in AI, the data protection principles will apply, and those processing personal information will have to review AI risks and respond accordingly.
The development and enactment of laws to address these threats will not keep pace with AI development and novel uses. It will be up to business to protect itself, its clients and other stakeholders.
The EU AI Act will start to have an impact
The EU AI Act will begin reshaping the AI landscape, influencing how businesses develop and deploy AI technologies. Its risk-based approach will impose stricter requirements on high-risk AI systems, including transparency, accountability, and data protection standards. Companies operating in or with the EU will need to align with these rules, potentially driving changes in global AI practices. This regulatory framework will encourage safer and more ethical AI use but may also pose compliance challenges for organisations.
We hope to empower you to navigate the law in 2025 and stay ahead of the rest by reading our 2025 predictions.