Algorithms, discrimination and privacy monitoring

In 2023, the childcare benefits scandal (Toeslagenaffaire) developed further with hearings led by the parliamentary committee on Fraud Policy and Services. Despite these sessions, it became evident that little had changed. This lack of progress was unexpected, especially given how often the committee had summoned the Dutch Data Protection Authority (AP) to address earlier shortcomings. Meanwhile, the AP continued to confront challenges related to algorithms and discrimination, as outlined in its 2023 annual report.

Following its investigation into the benefits scandal, the AP found that the Tax Authority’s approach to childcare benefits was unlawful and discriminatory and violated privacy law. Even after that finding, the AP remains heavily occupied with overseeing algorithms and the risk of discrimination.

In 2023, it became clear that several government agencies continued to use poorly designed algorithms. Examples include:

 

  • The Education Executive Agency (DUO) used a discriminatory algorithm, without adequate justification, to detect scholarship fraud.
  • The Employee Insurance Agency (UWV) unlawfully used an algorithm to identify unemployment benefit fraud.
  • Concerns were raised about the police’s use of facial recognition technology and the operations of the Public Order Intelligence Team (TOOI).

AP Chairman Aleid Wolfsen remarked, “This likely represents only a small fraction of the issue. The government’s insatiable appetite for data appears unchecked. While algorithms and AI can offer significant benefits, such as increased efficiency in government processes, we must remain vigilant about their risks, including discrimination. It’s essential to ensure that government institutions do not once again devastate lives and that we protect our legal system and fundamental rights.”

Supervising algorithms and artificial intelligence

Monitoring the use of algorithms and AI plays a crucial role in protecting people from the risks they can pose. In the Netherlands, this responsibility is shared among various bodies, such as sector regulators and national inspectorates. To strengthen this oversight, the AP was appointed as the central supervisor for algorithms and AI in January 2023. In this role, it identifies and assesses risks that cut across sectors and domains and fosters collaboration among the relevant oversight organizations.

Regulation of major technology firms

Cooperation also strengthens oversight of major technology companies. The AP works closely with other European privacy regulators to ensure tech companies are held accountable for privacy violations. In 2023, for example, the Irish regulator fined TikTok 345 million euros; the case built on an investigation initiated by the AP, which the Irish authority took further after TikTok established its European headquarters in Ireland.

Conclusion

Organizations should approach algorithms with care. Make sure to carry out an assessment that clearly identifies the risks involved, so that mitigating measures can be put in place.
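Purely as an illustration of one check such an assessment might include, and not a method prescribed by the AP or by law, the sketch below compares how often a hypothetical decision algorithm flags cases from different groups for extra scrutiny. The group labels, the sample data, and the 1.25 disparity threshold are assumptions made for this example.

```python
# Illustrative sketch only (not an AP-prescribed method): compare how often a
# hypothetical algorithm flags cases per group and report the disparity.
from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group_label, flagged) pairs -> flag rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, is_flagged in decisions:
        totals[group] += 1
        if is_flagged:
            flagged[group] += 1
    return {group: flagged[group] / totals[group] for group in totals}

def disparity_ratio(rates):
    """Ratio between the highest and lowest non-zero group flag rates."""
    nonzero = [rate for rate in rates.values() if rate > 0]
    return max(rates.values()) / min(nonzero)

# Hypothetical example data: (group, flagged for extra scrutiny)
sample = [
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", False),
]
rates = flag_rates(sample)
ratio = disparity_ratio(rates)
print(rates, round(ratio, 2))

# The 1.25 threshold is an assumed rule of thumb for this example, not a
# legal standard; large disparities call for investigation and documentation.
if ratio > 1.25:
    print("Flag rates differ substantially between groups; investigate and document why.")
```

In a real assessment, a disparity like this would be only one signal among many: the reasons behind it, the proportionality of the data use, and the mitigating measures taken would all need to be examined and documented.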

 

If your organization has questions about the privacy implications of an algorithm it wishes to implement, feel free to contact us via info@dpoconsultancy.nl.