Last week, digital rights and consumer protection organizations around the world called for a global ban on facial recognition technologies. As I wrote in a previous article, these systems pose a risk to fundamental rights because they enable mass and discriminatory surveillance by both governments and corporations. The pandemic, in particular, has opened the door to data collection and tracking on an unprecedented scale. How?
NGOs raise their voices in favor of a ban
The use of facial recognition technology is becoming widespread. Alongside everyday applications like unlocking phones, however, it is increasingly being used by governments and companies to surveil people, whether by law enforcement scanning public places for criminals or by grocery stores claiming to use it to count customers or catch shoplifters. Although I am in favor of the digitization of our society, technological progress must not come at the cost of civil rights. These systems risk crossing that line.
A major coalition of digital rights and consumer protection groups from across the globe, including Latin America, Africa, and Asia, is warning us about this threat. It calls for a global ban on biometric recognition technologies that enable mass and discriminatory surveillance by both governments and corporations. In an open letter, 170 signatories from 55 countries argue that the use of technologies like facial recognition in public places goes against human rights and civil liberties.
In part, this is a response to the European Commission's proposal for a Regulation on Artificial Intelligence, published earlier this year. Although the bill restricts the practice, it does not prohibit it outright. Under certain circumstances, use by authorities is allowed, but the underlying conditions are vaguely defined. Activists fear that such loopholes in the bloc's Artificial Intelligence bill could allow widespread facial recognition to take hold both in Europe and beyond its borders.
So how do facial recognition systems work, and why is using them a risk?
A facial recognition system is a computer program that identifies people from photos or videos. It extracts geometric characteristics of faces and compares them with database records. Unlike a system that simply counts people in a store, this more advanced technology can recognize not only whether a person is wearing a mask, but also glasses or a beard, and can even estimate a person's age or gender. It collects biometric data that can be used to identify people in much the same way as DNA or fingerprints.
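The comparison step described above can be sketched in a few lines of Python. This is a minimal illustration, assuming an upstream model has already reduced each face to a numeric feature vector (an "embedding"); the names, vectors, and threshold below are all hypothetical, not taken from any real system.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embeddings; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the name of the closest database record, or None if no
    record is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = euclidean_distance(probe, embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy 4-dimensional embeddings (real systems use 128 or more dimensions).
database = {
    "alice": [0.1, 0.9, 0.3, 0.5],
    "bob":   [0.8, 0.2, 0.7, 0.1],
}
probe = [0.12, 0.88, 0.31, 0.52]  # a face captured by a camera
print(identify(probe, database))  # closest match: "alice"
```

Two design points matter here: the threshold decides the trade-off between false matches and missed matches, and scanning a crowd means running this lookup against every captured face, which is exactly why error rates that seem small per comparison add up to many misidentifications in practice.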
In the European Union, almost every country has used some version of facial recognition technology. For instance, the Dutch police use it to match photos of suspects against a criminal database. The French government is fond of using Artificial Intelligence to track "suspicious behavior". Its use is also documented in the United Kingdom, where the London Metropolitan Police uses live facial recognition to match faces against a database of criminals. During the pandemic, its use increased further: companies set up cameras equipped with Artificial Intelligence to check whether workers and customers comply with social distancing rules or to monitor mask wearing. Snooping on people in public using automated systems is inadmissible.
If AI systems end up in the wrong hands, they can very easily restrict the freedom of all of us, as we can see in undemocratic China, where these systems are a main pillar of the social credit system. Imagine taking part in a demonstration: a camera links your face to your identity, and your social credit score automatically decreases. As a result, you are no longer eligible for a loan, and you can forget about a job promotion. Even though camera use by retailers seems harmless, it also poses a risk. Why?
Data is today's commodity, and we must protect it
Facial recognition systems can detect anyone who appears in front of them. One could argue that e-shops have been using our data for a long time, so why shouldn't companies collect it in the real world too? For several reasons.
When shopping online, we can remain anonymous to some extent. Thanks to the General Data Protection Regulation (GDPR), we can withhold consent to the collection of some of our personal data and refuse cookies, although protection in this form is often insufficient. We can also use private browsing or an ad blocker. And when it comes to collecting "anonymous demographic data" about customers, we must not forget that Cambridge Analytica made the same claim.
Informed consent is a cornerstone of the GDPR. In the real world, however, we are far from providing data voluntarily. If such a system monitors us, we cannot decide whether we consent to handing over our data, unlike, for example, fingerprint systems, where we must actively provide the sample ourselves.
Discrimination, whether intentional or due to system imperfections
Although these systems are still in development and the technology is leaping forward every day, they still have a high error rate. There have been cases where a person with lighter or darker skin was held up at passport control because the system could not recognize their facial features precisely enough to match them with the photo in the passport. Not to mention the risk of errors when comparing faces in crowds and then attributing them to a specific person.
Misuse of our data can also lead to discrimination based on race, gender, age, or ethnicity. These groups can be systematically monitored, and journalists or the political opposition can be persecuted, as we see in many undemocratic countries. In addition to China, Azerbaijan and Bahrain, for example, use social networks to identify opponents of the regime, political rivals, or members of the LGBT community, whom they then persecute. Authoritarian regimes in Europe are a danger as well: they could use these systems to persecute independent journalists and obstruct investigations.
Some manufacturers have also given facial recognition a clear no. Already in 2019, Axon, the world's largest corporate supplier of cameras to police forces, announced that none of its products would use facial recognition technology because it was too unreliable for law enforcement and "could exacerbate existing inequalities in policing, for example, by penalizing black or LGBTQ communities". Similarly, after last year's protests in the United States, Amazon declared a one-year moratorium on police use of the technology, along with calls for the necessary legislation. IBM, too, exited its facial recognition business, citing the risk of human rights violations.
We must set a clear line
In conclusion, yes, let's use new technologies and systems. But we must recognize that there is a line beyond which they do more harm than good, and we should not cross it at any cost.
Hopefully, our calls will be heard and the final version of the Regulation will ban the use of facial recognition systems completely. The legislation should set limits on the use of Artificial Intelligence technologies so that they do not jeopardize our rights. Facial recognition systems are far beyond this line, and their general use must be banned outright. Widespread snooping, whether online or offline, is unjustifiable. We should look for ways to protect our data, not for tools to collect even more of it.