Russia demonstrates how AI can be abused

Three significant gaps in the AI Act

The forthcoming Artificial Intelligence Act (AI Act) is one step closer to adoption. In mid-March, as the opinion rapporteur, I presented the draft opinion of the Committee on Culture and Education (CULT). Remote biometric recognition, e-proctoring, and artificial intelligence in the media are the priorities for the upcoming negotiations that must not be forgotten in the proposal.

A ban on facial recognition without exceptions

We can see right now how the Russian regime abuses facial recognition systems to identify protesters. In Moscow alone, at least 180,000 cameras were installed two years ago, and the authorities now use them to identify and persecute participants in anti-war demonstrations. Even earlier, as Amnesty International reported, Russian authorities used the same technology to monitor and detain activists and journalists involved in rallies in support of Alexei Navalny.

The European Commission proposed a ban on remote biometric identification, with exceptions for court or emergency authorization. However, we must not give a blank cheque to any European government that might want to abuse such technology to track and persecute its citizens. The recent revelations of the Pegasus scandal provided clear proof of the appetite of the Polish and Hungarian governments to spy on journalists and opposition politicians.

In light of the danger that deploying remote biometric identification systems in publicly accessible places poses to citizens’ fundamental rights, to the freedom of assembly, and to the work of investigative journalists, activists, and political representatives, I propose banning the deployment of such technologies without exception.

We cannot give free rein to governments on the edge of democracy to abuse technology and spy on the opposition, journalists, and ordinary people. That, after all, is not the Europe we want to live in.

Extension of the definition of high-risk AI applications

I also focused on the definition of high-risk AI applications in the areas of education, media, and culture, and on modifying certain provisions related to banned practices. The reason is the increasing deployment of AI technologies in education and training facilities.

It is important not to forget e-proctoring systems, which are used to monitor students during tests, and applications used to determine which field or programme a student should study. For example, if a student takes a remote test in a place such as a student dormitory, where both audio and video are recorded, background noise may disrupt the monitoring and be misinterpreted as an attempt to cheat.

Regarding the media, I propose adding to the high-risk list the creation and dissemination of machine-generated news articles, as well as recommendation and ranking algorithms for audio-visual content. A misused AI system can contribute to the spread of disinformation.

No social scoring by companies

Another dangerous loophole in the Commission’s proposal is the absence of a ban on social scoring by private companies. The very concept of social scoring is alien to European values, and we should therefore say a clear no to it.

Social scoring poses a clear threat of discrimination against, and exclusion of, certain groups or individuals. We must therefore extend the ban on deploying social scoring systems to cover use by both public and private entities.

Moving closer to adoption

I am glad that we are getting closer to the adoption of the AI Act. The Committee on Culture and Education will vote on the proposal in April, and the final vote in the plenary will take place in the fall of this year, during the Czech presidency. Now, let’s keep fighting to close the loopholes in the proposal!
