Thierry Spanjaard

Biometrics and ethics… a long way to go

Ever since biometrics appeared, there have been discussions about how ethical biometrics can be defined and designed. The case of facial recognition is even more significant.

We all know what unethical facial recognition is. It is clear to most of us that mass surveillance programs or citizen scoring, as practiced in China, do not qualify as ethical facial recognition.

One of the first steps in defining what is ethical is drawing the distinction between face recognition, i.e. finding one face in a large database, and face authentication, which verifies that a presented face matches the one (and only one) stored on a device or document. Typically, many of us use face authentication such as Apple's Face ID to unlock our phones and see no issue with this. Likewise, eGates installed at immigration border posts, which compare the picture stored in a passport with the face of the person in the booth, pose no issue. At the other end of the spectrum come mass surveillance, law enforcement without oversight, and the like.
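The distinction can be sketched in a few lines of code. This is a toy illustration only: the random 128-dimensional vectors stand in for the embeddings a real face-recognition model would produce, and the similarity threshold is a hypothetical value, not one from any actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # hypothetical decision threshold

def authenticate(probe, enrolled):
    """1:1 verification: does the probe match the single enrolled template?"""
    return cosine(probe, enrolled) >= THRESHOLD

def identify(probe, database):
    """1:N identification: search a whole database for the best match."""
    scores = {name: cosine(probe, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESHOLD else None

# A stored template and a slightly noisy new capture of the same face.
enrolled = rng.normal(size=128)
probe = enrolled + rng.normal(scale=0.1, size=128)
database = {"alice": enrolled, "bob": rng.normal(size=128)}

print(authenticate(probe, enrolled))   # prints True
print(identify(probe, database))       # prints alice
```

The ethical weight differs with the shape of the operation: `authenticate` compares against one template the user chose to enroll, while `identify` searches everyone in the database, which is exactly the capability that mass-surveillance scenarios rely on.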


The use of facial recognition as a means of payment, as implemented on a large scale in the Moscow metro, is more controversial: while those who feel comfortable with the system should be free to enroll in and use it, there should always be an option allowing riders to use a mass transit system without being identified.

Discussions abound on how to define ethical face recognition, and many authorities feel a need to regulate the topic. The European Parliament calls for a ban on police use of facial recognition technology in public places, and especially on predictive policing, the technologies that combine facial recognition and AI to profile potential criminals even before a crime is committed. The Parliament also calls for a ban on facial databases like the one used by Clearview. Unsurprisingly, the Parliament pushes for a ban on social scoring, like what is in use in China.


At the same time, the European Commission is working on the Artificial Intelligence Act (AI Act), which aims at striking the right balance between EU values on fundamental rights protection, including privacy, data protection and data sovereignty, on one side, and public security on the other. In particular, the proposed Act will completely ban systems that “serve for general purposes of social scoring or are used for running real time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.” The proposed Act also defines high-risk AI systems: systems used in toys or medical devices, and systems used to assess creditworthiness or in recruitment processes. These high-risk AI systems will require an evaluation of the technology before being allowed onto the EU market.

Many analysts consider that Europe has lost ground in AI development, especially in comparison with China or North America. The Commission therefore also aims at promoting Artificial Intelligence and supporting its development. This has taken the form of an action plan backing the development of AI in the climate and environment sector, healthcare, robotics, mobility, agriculture, etc.


The balance between regulation and technology development is hard to strike. We all want our privacy protected, but we should not hinder our industry's development. Governments, and especially the European meta-government, and technology development never move at the same speed. There is more work to be done by European authorities before they reach an agreement and the proposed AI Act comes to fruition.
