The critical role of judges in algorithm use

Judges and appeals committees must be vigilant regarding government decisions in which algorithms have played a role. In addition, the government should be proactively transparent about its use of algorithms. This was advocated by Aleid Wolfsen, chairman of the Dutch Data Protection Authority (AP), at a meeting on artificial intelligence (AI) in the judiciary, held during the Week of the Rule of Law.


Wolfsen addressed the challenges that algorithms and AI pose for judicial review of government decisions. Why, for instance, were the problems with fraud risk assessment algorithms, such as those used by the Tax Authority and DUO, discovered so late? Wolfsen stated, “Much suffering could have been avoided if these organizations had been more transparent about these algorithms. And if judges had been more active and inquisitive.”


A first step towards greater transparency is for the government to voluntarily disclose when algorithms have played a role in making a decision or in the lead-up to it, along with a clear explanation. This can be included in the explanatory note accompanying the decision. “Judges must be able to scrutinize government decisions for issues like discrimination, even when algorithms are involved. Therefore, active transparency about the use of algorithms is crucial. This way, citizens, lawyers, and judges can be vigilant,” Wolfsen emphasized.

Looking beyond the dispute: the impact of algorithms on government decisions

However, transparency alone is not sufficient. It is also necessary for judges to actively examine the impact of algorithms on government decisions. Judges should always look beyond the specific dispute that brought someone to court.


Wolfsen explained, “The government often uses algorithms to improve efficiency by deploying them on a large scale. As a result, the potential risks—such as incorrect outcomes and discrimination—can affect a large group of people. We saw this in the Childcare Benefits scandal.”


“Judges should be especially vigilant whenever there is any suspicion that an algorithm has been used in a decision or in the lead-up to it, for example for risk selection. They should examine this even if lawyers or citizens do not raise the issue. If there has been discrimination in the lead-up to a decision, it almost always affects the legality of the subsequent decision.”


Wolfsen’s call for judges to actively investigate the use and impact of algorithms is therefore crucial. This proactive stance is needed to catch and correct potential biases and inaccuracies early, before they harm citizens. Given the increasing reliance on AI and algorithms in public administration, the judiciary’s role as a watchdog is more important than ever. Judges must address not only the immediate dispute before them but also the systemic issues that can arise from the widespread use of such technologies.


In conclusion, greater transparency and judicial vigilance are vital steps toward a more just and equitable use of AI and algorithms in government decision-making. The principles outlined by Wolfsen provide a strong foundation for improving oversight and protecting citizens’ rights.