Last month, the European Parliament voted on several reports regarding the future legislation on Artificial Intelligence (AI). Namely, these were reports on the framework of ethical aspects of AI, robotics and related technologies, on the civil liability regime for AI, and on so-called intellectual property rights for the development of AI technologies. I voted against all these reports for numerous reasons. Let’s take a closer look at them.
In many respects, these reports reacted to the white paper on AI: a European approach to excellence and trust, which the Commission published in February. You can read my analysis of the white paper in one of my previous blog posts.
Ethical aspects of AI
I am fully aware of the great potential of AI, and I support the report in defining safeguards for the development of AI. Defining ethical standards for the use of new technologies is equally important. We need human-centered technologies.
The future legislation must safeguard the protection of personal data and guarantee supervision of these technologies by public authorities, in close cooperation with civil society, at every stage of development. We also need to make sure that the use of AI does not lead to discrimination. Transparency of algorithms and training data is therefore key, and Free and Open Source Software is best positioned to meet such criteria.
Conversely, I cannot support the report's acceptance of remote biometric surveillance technologies, such as facial recognition, in public spaces. The report clearly fails to acknowledge the severity of the risks that such use inevitably carries. No safeguards can make indiscriminate mass surveillance, and the chilling effect that comes with it, acceptable. Unfortunately, the report fails to clearly reject these practices.
Civil liability regime for AI
We need a future-oriented civil liability framework that provides confidence in the safety and reliability of AI products and services. Hence, I support the report's effort to clarify the definition of AI, as well as its call to include both material and immaterial harm in the scope of future legislation.
In addition to efficiently and fairly protecting potential victims of harm or damage, however, the future framework must also provide legal certainty for the development of new technologies. Free and Open Source technologies, which underpin innovation, require special attention. They often start as non-commercial projects with many volunteer contributions and are only later turned into a product or a service by commercial entities. Therefore, I insist on enabling all affected persons to bring forward liability claims throughout the commercial chain of producers. For the same reason, I oppose the report's vague definition of backend operators, which does not clearly exclude non-commercial backend operators. Compensation provisions must be limited to the commercial chain.
So-called intellectual property rights (IPR) for the development of AI
In order to unlock the potential of AI technologies, it is necessary to remove unnecessary legal barriers. We must foster, not hamper, innovation in the Union. Therefore, I support the call for an impact assessment with regard to so-called IPR protection in the context of the development of AI technologies.
Having said that, the objective should not be to add new layers of intellectual monopolies, and we should not call for additional restrictions. Only in this way can AI made in Europe, as called for by the Commission, really happen. Intellectual monopolies should not be granted to the detriment of open innovation and knowledge sharing.
Therefore, I cannot support the report, which overemphasizes the patent system as the primary way to incentivize AI inventions and promote their dissemination, as well as the key role of standard-essential patents. Aggressive patent litigation by patent trolls is on the rise and constitutes a threat, especially to European SMEs and Free and Open Source projects. Finally, calling for mandatory intellectual monopolies for works generated by AI is not just outside the scope of the report; it also contradicts the purpose of this concept, which is to provide an incentive to innovate.
AI is a strategic technology that will change all aspects of our lives: how businesses work, how kids learn, how the police and judicial authorities search for criminals, and so on. Therefore, we have to be extremely cautious about the legal framework in which the technology will operate. At the same time, the regulatory structure and environment must not hinder future development.
It is our shared goal to create a digital, but also informed and educated, society which perceives AI as a unique opportunity to improve the quality of our lives. However, we also have to recognize and react to the potential threats.