Automated Decision Making

On Wednesday, February 12, 2020, the European Parliament adopted a resolution on Automated Decision Making (ADM) processes. In light of the upcoming Commission strategy on Artificial Intelligence (AI), the European Parliament sent a clear political message to the European Commission: even though we strongly support the development of new technologies, the Parliament’s top priority has always been, and will remain, the protection of consumers and the fair treatment of all individuals.

What it means for you

Technologies based on ADM processes, more popularly known as Artificial Intelligence (AI), are present everywhere in our lives: in the software that screens your job application, in the chatbots dealing with your requests, in the virtual assistants handling your complaints on food-ordering websites. While they have the potential to speed up certain services, their application in dynamic pricing can have discriminatory effects. One clear case of price discrimination happened a few years ago, when a hotel-booking platform implemented a dynamic pricing system that actively offered pricier hotel deals to users of more expensive mobile devices, predicting their stronger purchasing power. Another example is job-search platforms. If a biased, poorly designed algorithm is used, one that favors men over women or national candidates over other nationalities regardless of their qualifications, thousands of CVs may be excluded from the selection before they ever reach an HR agent.
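
To make the mechanism concrete, here is a minimal sketch in Python of the kind of device-based dynamic pricing described above. The device list, the 20% markup, and every name are illustrative assumptions, not details of any real platform.

```python
# Hypothetical sketch of device-based dynamic pricing.
# PREMIUM_DEVICES and the 20% markup are assumptions for illustration only.

PREMIUM_DEVICES = {"iPhone", "iPad", "Galaxy S"}

def quote_price(base_price: float, device_model: str) -> float:
    """Return a price quote, marked up for users of expensive devices."""
    if device_model in PREMIUM_DEVICES:
        # Predicted stronger purchasing power -> pricier deal.
        return round(base_price * 1.20, 2)
    return base_price

# Two users see different prices for the same hotel room:
print(quote_price(100.0, "iPhone"))        # 120.0
print(quote_price(100.0, "budget-phone"))  # 100.0
```

A few lines of code are enough to produce the discriminatory outcome, which is precisely why the practice is so hard for an individual consumer to detect.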

Why it is important to support such a message

I believe that transparency of the system is the key to securing a high level of consumer protection. Very often, visitors to websites do not realize that ADM systems are in use and are not alert to potential harms. Private entities and public authorities that implement such systems should proactively inform customers in order to prevent mistreatment by inappropriate and unfair automated decisions. They should facilitate, to the greatest possible extent, access to a human with decision-making powers who verifies the results and can investigate seemingly wrong evaluations. Furthermore, given the complexity of ADM systems, the burden of proving that a decision, or the system as a whole, is harmful should not be imposed on the customer. Instead, it must be the providers’ responsibility to demonstrate that their systems operate as intended and function in a non-discriminatory manner.

What should happen next?

Even though developers and users of ADM should be obliged to thoroughly document the software and the datasets used, we cannot rely only on the information that companies provide. At a minimum, authorities should be empowered to audit the algorithmic decision-making processes put in place. Nevertheless, I would like to see a more ambitious proposal from the Commission, one advocating the use of Free and Open Source Software. That would, by design, grant access to other stakeholders, such as researchers and civil society, as well.
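
As an illustration of what such an audit could look like in its simplest form, here is a sketch of the "four-fifths rule" check for disparate impact, computed over a log of automated decisions. The log format, group labels, and the 0.8 threshold are assumptions for illustration; a real regulatory audit would be far more thorough.

```python
# Minimal sketch of a disparate impact check an auditor could run
# on an ADM system's decision log. Illustrative assumptions throughout.

from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs from a decision log."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical CV-screening log: 100 men, 100 women.
log = [("men", True)] * 80 + [("men", False)] * 20 \
    + [("women", True)] * 50 + [("women", False)] * 50

ratio = disparate_impact_ratio(log)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62, below the 0.8 threshold
if ratio < 0.8:
    print("potential discrimination: flag for human review")
```

The point of the sketch is that meaningful checks are technically simple once auditors have access to the system's outputs; the obstacle is legal and institutional, not computational.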

Another aspect to take into consideration is the possible chilling effect on fundamental rights online when ADM systems are used for content filtering and moderation. The development of new technologies and platforms redefines the way citizens access knowledge and impart information online. Even though Pirate Parties have systematically fought against filtering technologies, filters in various forms are voluntarily set up by private companies. I seek to introduce a fundamental rights impact assessment into proposals on automated content moderation.

The discussion on the relationship between AI and human rights has been in the spotlight at various levels. The UN Special Rapporteur presented a report to the General Assembly in 2018 on AI and its impact on freedom of opinion and expression, and the Council of Europe Committee of Ministers issued recommendations on the impact of algorithms on human rights. That said, I am convinced that we must look for ways to increase transparency in the functioning of such tools and develop a framework for human rights audits, which I systematically support in my policy proposals.

Photo credits

Franck V. on Unsplash