The Commission's leaked plan on artificial intelligence

Four significant gaps

The proposal for the Regulation on the European Approach for Artificial Intelligence was not due to be published until April 21. Last week, however, Politico published a leaked draft of the regulation. We can therefore scrutinize it now and analyze the key points that need to change.

What is going on?

Artificial intelligence (AI) is about to get a new legal framework. While its use brings many benefits to society, it also brings risks and threats. In February 2020, the European Commission presented a White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.

As a follow-up to the Commission’s White Paper and as the next step in the legislative process, Margrethe Vestager, Executive Vice-President of the European Commission for a Europe fit for the Digital Age, is due to present a formal proposal for a regulation on April 21. What are the four most significant gaps in the proposal?

1. What “artificial intelligence” are we talking about?

In the current form of the regulation, the Commission divides artificial intelligence systems into two categories: high-risk and everything else. It is clear that autonomous school buses pose a different level of risk than spam filters. However, since artificial intelligence can expose us to many different levels and types of risk, drawing a single line between these two types of use is not sufficient.

There are many applications that may not be as serious as high-risk ones, with their potential impact on people's physical well-being, but that can nevertheless have an important impact on people's lives.

Let's take another example. Companies use techniques that the regulation considers artificial intelligence, such as hash-matching, keyword filters, and natural language processing, for content moderation on their platforms, in order to remove content that violates their terms and conditions or that is illegal. As a side effect, these technologies often discriminate against vulnerable groups, whose content gets de-prioritized or removed. This can be due to the difficulty algorithms have in assessing context, or to the use of data sets that incorporate discriminatory assumptions. For now, the proposal does not qualify this as high-risk, even though it can play a crucial role in censoring the content of certain groups of people or certain types of content.
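To see why context matters here, consider a minimal sketch of a keyword filter (the blocklist and the example posts are hypothetical, not taken from any actual platform). The filter only sees words, not intent, so it flags abuse and a victim's testimony alike:

```python
# Minimal sketch of a context-blind keyword filter (hypothetical blocklist and posts).
# It flags any post containing a blocked term, regardless of who says it or why.

BLOCKED_TERMS = {"slur"}  # placeholder standing in for a real blocklist

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

posts = [
    "You are a slur!",                       # abuse: correctly flagged
    "Being called a slur hurt me deeply.",   # victim's testimony: wrongly flagged
    "Our community reclaims the word slur.", # self-reference: wrongly flagged
]

for post in posts:
    print(flag_post(post), "-", post)
```

All three posts are flagged, even though only the first is abusive. This context-blindness is exactly how automated moderation ends up silencing the very groups it is supposed to protect.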

We should adjust the division of categories, because artificial intelligence systems cannot be divided only into high-risk and "the rest". What matters is the use and its possible consequences, not the technology itself.

2. Practices endangering fundamental rights must be banned, without exceptions

The aim of the regulation is to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. In its current form, the regulation forbids a number of uses of artificial intelligence that really should be banned. These include AI systems that manipulate human behavior and predictive AI systems that target vulnerabilities. That is the right call; however, there is a catch: the exceptions.

These prohibitions do not apply to EU governments and public authorities if the systems are deployed in order to safeguard public security and provided that they are in line with EU law. If this condition is met, public authorities would be allowed, for instance, to monitor users' location, private communication, and activity on social networks, as well as all the records and traces that users leave behind in the digital world. This can easily be used for mass surveillance of citizens, especially in countries where the independence of the judiciary is at stake. For example, the Hungarian government persecuted journalists in the so-called interest of national security for questioning the government's actions amid the pandemic. Even the Chinese monitoring system is justified by the alleged purpose of ensuring security. That is why we must set rules that cannot be turned against us, even if (and especially if) democracy is not in its best shape.

These concerns have been raised repeatedly: last month, an open letter signed by 116 Members of the European Parliament called on the European Commission to address the risks posed by high-risk AI applications that may threaten fundamental rights, and citizens raised the same request during the public consultation.

3. High-risk applications should go through a third-party conformity assessment

The proposal includes a definition of so-called high-risk artificial intelligence systems. One example is the Dutch welfare surveillance system, which aimed to predict the likelihood of an individual committing benefit or tax fraud or violating labor laws, and which was already halted because of human rights violations. The category also includes HR tools that filter job applications, banking systems that evaluate our creditworthiness (i.e. our ability to repay debts), and predictive policing systems, which run an extreme risk of reproducing bias and deepening disparities. In short, these tools can have a serious impact on people's lives.

According to the proposal, many of them would be subject to mere self-assessment, i.e. the provider's own risk assessment. Although I am glad that, in accordance with the opinion of the European Pirates, the draft regulation classifies certain artificial intelligence systems as high-risk, self-assessment of conformity is not sufficient verification; it should be carried out by a third party.

Under self-assessment of conformity, the risk assessment of the product would be the direct responsibility of the provider of the AI system and would not have to be officially approved by a competent authority. After preparing the technical documentation, providers would perform the conformity assessment themselves. If they conclude that their high-risk artificial intelligence system complies with the requirements of this regulation, they simply declare the system compliant.

4. Openness of the system helps prevent mistakes

Users, academics, everyone should have the right to understand the underlying logic of the artificial intelligence systems they use. Currently, only non-technical documentation needs to be published. According to the current form of the regulation, the commercially confidential information and trade secrets of all parties, including so-called intellectual property rights, must be protected unless their disclosure is in the public interest.

This level of confidentiality goes against the call to give access to anyone who wants to know how the system works. Companies should be encouraged to release the training code and data sets under a free and open license, and to design such systems in a transparent manner. This would give more insight into how the system works and help address many of the problems. It makes no sense to publish only technical and business documentation, for example, when its accuracy cannot then be verified against the actual code and its operation. Data sets should be auditable by authorities and civil society, which is especially needed in the case of a system that is constantly learning. Free and Open Source Software is best suited for that.
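As an illustration of what such an audit could look like, here is a minimal sketch (the file name, column names, and label are hypothetical): with an open data set, any authority or watchdog can check how groups are represented and how outcome labels are distributed across them, which is a first step in spotting discriminatory assumptions baked into the data.

```python
# Minimal sketch of a data-set audit (hypothetical file, columns, and label).
# With an open data set, anyone can check how groups are represented and how
# outcome labels are distributed across them.

import csv
from collections import Counter

def audit(path: str, group_column: str, label_column: str) -> None:
    groups, positives = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row[group_column]] += 1
            if row[label_column] == "1":
                positives[row[group_column]] += 1
    total = sum(groups.values())
    for group, count in groups.items():
        share = count / total
        rate = positives[group] / count
        print(f"{group}: {share:.1%} of records, positive-label rate {rate:.1%}")

# Hypothetical usage: a credit-scoring training set with a 'gender' column
# and a 'defaulted' label.
audit("training_data.csv", group_column="gender", label_column="defaulted")
```

A skewed group share or a sharply different label rate between groups does not prove discrimination by itself, but it tells auditors exactly where to look; without access to the data set, even this basic check is impossible.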

What are the next steps?

My aim in the European Parliament is to promote innovation while ensuring the security and privacy of citizens. A legislative framework for the use of artificial intelligence is undoubtedly needed and is a good step forward. We need to set clear rules that will not be easy to get around, and avoid any loopholes that could be abused. Technological progress must not mean a step backwards for fundamental rights. Following the official presentation of the proposal, the legislation will be transferred to the European Parliament and the Council. As soon as the proposal comes to "our table", I will work to bring the necessary adjustments to the regulation.
