Using AI systems without telling anyone is tempting, but their undisclosed use is problematic under relevant data protection laws. It infringes the principle of transparency by preventing data subjects from understanding how their personal data is processed, which in turn stops them from objecting to that processing. Data protection authorities are increasingly cracking down on the undisclosed use of AI. Let’s discuss the issue through a case that hinges on it.

Why hide the use of AI systems?

Artificial intelligence (AI) is the ability of computer programs or other machines to process, analyse, and understand information in a way that mimics human perception, synthesis, and inference. It has the potential to benefit organisations across all industries. AI can make operations more efficient: in the retail sector, for example, it can automate repetitive tasks such as inventory management or customer service through chatbots. AI can also improve decision-making: in financial services, for example, it can analyse large amounts of data quickly to support better lending decisions. The potential advantages of using AI are extensive. However, deploying it poses significant dangers for data protection, particularly when it is done secretly, without telling people.

Using undisclosed AI systems is tempting for many organisations. AI is an emerging technology and people are wary of it, yet the business benefits are high. This wariness encourages organisations to use AI behind the scenes, processing personal data without telling their customers, employees or other organisations about that processing. An organisation may worry that data subjects will object to the processing of their personal data by an AI system, and those data subjects may well have the right to do so.

How are hidden AI systems dangerous under data protection laws?

A clandestine attitude towards using AI prevents data subjects from understanding how the processing organisation handles their personal data. This lack of understanding, in turn, makes it difficult, or even impossible, for them to assert their rights under relevant data protection laws. It also stops data subjects from understanding how the processing organisation makes decisions about them, which takes away their power to interrogate those decisions if they think they are wrong.

Most AI systems are provided by third parties. Using such a system therefore involves sharing personal data with a third party, who may even transfer it across an international border. This data sharing means organisations need data processing agreements with those providers, and it requires them to identify a legal basis for any cross-border transfer of personal data under relevant data protection laws. Ultimately, the undisclosed use of AI contradicts the principle of openness and transparency that underpins most data protection laws.

Therefore, supervisory authorities increasingly demand that organisations give AI the attention it deserves when processing personal data. They want organisations to stop hiding AI systems from data subjects and to disclose their use in privacy policies or notices. Article 12.1 of the GDPR, for example, requires the controller to take appropriate measures to provide certain information, including information relating to the processing, to the data subject, and to do so in writing (which can be electronic).

What action have relevant supervisory authorities taken against the use of undisclosed AI systems?

In February 2022, the Hungarian Data Protection Authority (NAIH) fined Budapest Bank more than six hundred thousand euros for the use of a hidden AI system. The bank had used a speech evaluation system in AI-driven software to profile customers’ emotional responses and to decide which customers to call back based on those responses. The system had been in use for three and a half years, from May 2018 until the date of the decision.

Benefits of using AI

The benefits of using AI for the bank were clear. The bank employed almost 180 people whose job it was to phone customers, and another twenty whose job it was to listen to calls. The AI system processed recordings of all calls and, based on the emotional responses in the voice recordings, ranked which customers it was most necessary for the bank to call back. It tried to determine which customers were most upset or frustrated, based on the tone of their voices, what they said and how they said it. The bank had no insight into how the system ranked those customers or which specific criteria it used.
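To make the transparency problem concrete, here is a minimal Python sketch of how such an emotion-based call-back ranking might work. Everything in it is invented for illustration: the field names, scores and weights are assumptions, not details from the NAIH decision, which notes that even the bank did not know the real system’s criteria.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    """One analysed call. The emotion scores stand in for the output of
    a hypothetical speech-emotion model; they are not from the real system."""
    customer_id: str
    anger_score: float        # 0.0 to 1.0, invented for illustration
    frustration_score: float  # 0.0 to 1.0, invented for illustration

def callback_priority(call: CallRecord) -> float:
    """Combine the emotion scores into a single ranking value.
    The 0.6/0.4 weighting is arbitrary and purely illustrative."""
    return 0.6 * call.anger_score + 0.4 * call.frustration_score

calls = [
    CallRecord("cust-001", anger_score=0.82, frustration_score=0.64),
    CallRecord("cust-002", anger_score=0.15, frustration_score=0.22),
    CallRecord("cust-003", anger_score=0.47, frustration_score=0.91),
]

# Rank customers from most to least urgent to call back.
for call in sorted(calls, key=callback_priority, reverse=True):
    print(call.customer_id, round(callback_priority(call), 2))
```

Even this toy version illustrates the point: unless the organisation discloses the processing, no customer can know that numbers like these decide who gets a call back, let alone challenge them.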

Infringement of transparency

According to the bank, the purpose of using this software was customer retention and complaint prevention; the benefit to the customer was supposed to be an improved customer experience. The system did not store any personal information capable of identifying the customer after ranking them. However, the data protection authority took a dim view of the bank not telling its data subjects that it was using an AI system for this purpose, because doing so deprived them of the ability to object to the processing. The authority therefore held that the bank had infringed its data subjects’ rights to be informed of how the bank was processing their personal data and to object to that processing.

Legitimate interest

The data protection authority was also critical of the bank’s reliance on legitimate interest as its grounds for processing personal data. It held that the bank had not sufficiently weighed the data subjects’ interests against its own, meaning the bank was processing unlawfully, without a valid legal basis.

The moral of the story: don’t be like Budapest Bank. Disclose your organisation’s use of any AI systems in your privacy policy and, while you’re at it, make sure you have a legal basis under relevant data protection laws to process personal data using those systems.

Actions you can take

  • Understand how undisclosed AI systems impact your data protection compliance efforts by asking us to draft a legal opinion.
  • Explore how your organisation could comply by consulting with us regarding artificial intelligence law and data protection law.
  • Take steps to disclose your use of AI systems by engaging us to help you draft a new privacy policy or update your existing one.