
Using AI Chatbots Could Lead to Data Breaches, Warns Dutch Authority

The Dutch Data Protection Authority (AP) has recently flagged a growing issue: data breaches caused by employees sharing personal data with AI-powered chatbots such as ChatGPT or Copilot. These incidents often involve sensitive information, such as patient or customer details, being fed into a chatbot, which gives the companies behind these tools unauthorized access to that data.

It’s becoming increasingly common for employees to use digital assistants at work, whether to answer customer queries or summarize large documents. While these tools can save time and reduce mundane tasks, they also come with significant risks.

AI Chatbots and Data Breaches

A data breach occurs when personal data is accessed without proper authorization, or is exposed unintentionally. Employees often use these chatbots on their own initiative, without their employer's approval. When personal data is involved, this constitutes a data breach. In other cases, the use of AI chatbots is part of the organization's policy; while that is technically not a data breach, it may still be unlawful under data protection law. Organizations should avoid both scenarios.

Most AI chatbot providers store everything entered into their systems, meaning the information ends up on those companies' servers. The employees entering the data are often unaware of this and may not fully understand what the provider can do with it. Crucially, the individuals whose data is shared typically have no idea it is happening.

The Dutch Data Protection Authority's opinion

In one instance reported to the AP, an employee at a general practice entered patients' medical data into an AI chatbot, against company policy. Medical data is highly sensitive and protected by strict privacy laws, making this a severe violation of patient privacy. Another case involved a telecom company, where an employee uploaded a file containing customer addresses into an AI chatbot.

Organizations need to set clear guidelines for their employees on the use of AI chatbots. Are employees allowed to use them at all? If so, it must be made clear which types of data are off-limits. Companies should also consider negotiating with chatbot providers to ensure that any data entered is not stored.
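One way to enforce such an "off-limits data" guideline is to filter prompts before they ever leave the organization. Below is a minimal sketch in Python, assuming a simple regex-based detector with hypothetical patterns; a real deployment would need far more robust personal-data detection (for example, named-entity recognition) and should be treated as a starting point, not a complete safeguard:

```python
import re

# Hypothetical, illustrative patterns for common personal data.
# Real detection needs much broader coverage (names, addresses, IDs, ...).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b0\d[\d -]{6,}\d\b"),  # Dutch-style numbers starting with 0
}

def redact(prompt: str) -> str:
    """Replace detected personal data with placeholders before the
    prompt is sent to any external chatbot service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Customer jan@example.nl called from 06-12345678 about his invoice."))
# → Customer [EMAIL REDACTED] called from [PHONE REDACTED] about his invoice.
```

Routing all chatbot traffic through a filter like this gives the organization a single enforcement point for its policy, rather than relying on each employee to remember which data may not be shared.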


If a data breach does occur through improper use of an AI chatbot, the organization is often required to report it to the AP and to notify the people affected. Clear communication and preventive measures are essential to mitigate the risks of using AI chatbots in the workplace.