
An EU member state with a Kafkaesque, discriminatory algorithm targeting its own citizens. The story of how a lack of oversight of new technologies brought down the government of the Netherlands.

Technology used by governments to deter social security fraud and improve the efficiency of public administration can lead to unwarranted exclusion and violations of privacy rights. The Prime Minister of the Netherlands learned this first-hand when his government resigned as a consequence of a malfunctioning algorithm.

There has been a lot of public discussion recently about the benefits and risks of using advanced algorithms and artificial intelligence. However, these debates tend to be quite theoretical in nature, at least until something like the recent case in the Netherlands happens.

Consequences of “Unprecedented Injustice”

In December 2020, the Dutch Parliamentary Commission of Inquiry issued a report entitled “Unprecedented Injustice.” The authors pointed out that since 2012, approximately 20,000 families had been wrongly accused of fraudulently collecting child welfare benefits. The accused parents were subjected to immediate bailiff enforcement and forced to return the collected funds. As the tax authority did not provide reasons for the withdrawal of the right to the allowance, the investigative committee concluded that the authorities’ practices were “violations of the basic principles of the rule of law.”

Mark Rutte, Prime Minister of the Netherlands, publicly apologized for the unfair accusations and set aside 500 million euros to compensate families affected by wrongful executions. However, this did not quell the public outcry and in order to avoid a scheduled vote of no confidence in the government, Mark Rutte tendered the resignation of his government to the King on January 15, 2021.

The problematic algorithm

The Dutch tax authority issued decisions terminating the right to benefits based on indications from Systeem Risico Indicatie (SyRI), a risk-detection system introduced by a 2014 law. The system was used by government organizations to prevent fraud in the areas of social benefits and taxes. SyRI used an algorithm that analyzed data provided by public institutions and generated risk reports based on the profiles of individuals previously found to have committed social security fraud. The system then located people with similar profiles in the records, identified them as potential fraudsters, and placed them on a list for further investigation. The individuals involved were not informed when their names were placed on the risk registry.
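The profiling mechanism described above can be illustrated with a minimal sketch. SyRI's actual model and indicators were kept secret, so the feature names, the similarity rule, and the threshold below are purely illustrative assumptions, not the real system:

```python
# Hypothetical sketch of profile-similarity risk flagging, as described in
# public accounts of SyRI. Attribute names, the similarity measure, and the
# threshold are illustrative assumptions; the real model was never disclosed.

def similarity(profile_a: dict, profile_b: dict) -> float:
    """Fraction of shared attributes on which two citizen profiles agree."""
    keys = profile_a.keys() & profile_b.keys()
    if not keys:
        return 0.0
    matches = sum(profile_a[k] == profile_b[k] for k in keys)
    return matches / len(keys)

def flag_for_investigation(citizens, known_fraud_profiles, threshold=0.8):
    """Place citizens on a risk list if they resemble past fraud cases.

    This captures the core problem: proxy attributes (e.g. dual
    citizenship or neighbourhood) leak into the profile, so "similarity
    to a fraudster" can silently encode discrimination.
    """
    risk_list = []
    for person in citizens:
        score = max(similarity(person, p) for p in known_fraud_profiles)
        if score >= threshold:
            risk_list.append(person)  # the person is NOT notified
    return risk_list
```

Note that, as in the real system, the flagged individuals never learn why, or even that, they were listed.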

The system used data on jobs, fines, penalties, taxes, property, education, pensions, debts, benefits, permits, and exemptions. In 2014, the Council of State, in its negative opinion on SyRI, stated that “there is almost no personal data that cannot be processed.” Moreover, SyRI was deployed primarily in low-income communities.

Chris van Dam, chairman of the commission of inquiry, pointed out that the system was non-transparent and that the government had kept its rules strictly secret. Neither the model, the risk indicators, nor the underlying data were publicly available. Nevertheless, the Dutch tax authority admitted that 11,000 families with dual citizenship had been subject to special scrutiny.

The path to banning SyRI

These features of the system, and the impact its indications had on the lives of those wrongly accused of fraud, led Dutch civil rights organizations to file a lawsuit against the Dutch state in 2018, demanding that it discontinue the use of SyRI. Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, also expressed his concerns about SyRI in a letter to the Dutch court.

On top of that, on November 29, 2019, SyRI won the Big Brother Award from Bits of Freedom, a Dutch digital rights organization. The award is given for the largest invasion of privacy in a given year. Director General Carsten Herstel accepted it on behalf of the Ministry of Social Affairs and Employment and stated that he “finds it logical that the government is alerted when someone receives benefits for renting an apartment and also owns a house.”

Infringement of the ECHR

On February 5, 2020, a court in The Hague ordered the immediate cessation of SyRI, finding that the system violated the right to respect for private and family life protected by Article 8 of the European Convention on Human Rights. The court held that the lack of transparency and verifiability of SyRI’s reports made the interference with citizens’ private lives disproportionate, and that it could not be justified by the protection of the public interest. It emphasized that the government bears a special responsibility to strike a proper balance between private and public interests when it decides to use new technologies.

The aftermath of the Dutch scandal

The Dutch government, which will be formed after an early parliamentary election in March 2021, will have to develop a new system to effectively prevent social security fraud while ensuring an adequate level of privacy protection for citizens. Nevertheless, it is conceivable that the consequences of the Dutch court’s ruling will not be limited to the Netherlands.

Several factors proved crucial to the failure of this Dutch technology project. First, the lack of transparency in the operation of the algorithm. The government defended itself by arguing that the mechanics and the criteria taken into account had to remain confidential, because otherwise the system would be easy to circumvent. However, there is a difference between total openness about the mechanism and transparency about its assumptions. Second, accountability and oversight. One can, and sometimes should, trust technology, but that trust must be limited. Especially in such a sensitive application, one cannot act unreflectively while making it difficult, at various levels, to verify the performance of the algorithm. Third, the possibility of effectively appealing a decision and, even before that, the right of citizens to be informed that they are subject to automated profiling also failed here. Finally, according to at least some sources, the very process of training the algorithm (“feeding it with data”) was far from ideal, because it lacked built-in safeguards against discrimination (in this case concerning, among other things, foreign-sounding first names).
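The “built-in safeguards” mentioned above can take a very simple technical form: refusing to train or score on protected attributes at all. The attribute names below are assumptions chosen for illustration; they do not reflect SyRI’s actual (undisclosed) feature set, and excluding direct attributes does not by itself remove proxy discrimination:

```python
# Illustrative "fuse" against direct discrimination: reject any training or
# scoring pipeline whose features include protected attributes. The attribute
# names are hypothetical; real systems would also need proxy-variable audits.

PROTECTED_ATTRIBUTES = {
    "nationality",
    "dual_citizenship",
    "ethnicity",
    "foreign_sounding_name",
}

def validate_features(feature_names):
    """Raise if any protected attribute leaked into the feature set."""
    leaked = PROTECTED_ATTRIBUTES & set(feature_names)
    if leaked:
        raise ValueError(
            f"Protected attributes must not be used: {sorted(leaked)}"
        )
    return list(feature_names)
```

A guard like this is only a first line of defense; it blocks direct use of protected attributes but cannot, on its own, detect correlated proxies such as postal codes.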

When deciding to use new technologies to improve the functioning of the state, other governments will also need to take fundamental human rights into account, particularly the right to the protection of private life, and ensure that these values are not violated. Given the complexity of technologies such as artificial intelligence, ensuring a sufficient level of transparency can be challenging. It is to be hoped that cases such as the Dutch one will only intensify the EU’s work on regulating artificial intelligence and advanced technologies, and that the resulting sanctions and requirements will be aimed as much at governments as at business.

#artificial intelligence #new technologies #privacy #report #SyRI #technology #EU
