A group of more than 20 civil rights organizations has signed a letter urging the European Commission to prioritize human rights in formulating the upcoming guidelines for implementing the AI Act, including better measures for biometric systems – one of the most controversial issues within the legislation.
The AI Act guidelines are designed to specify the practical implementation of the rulebook, which entered into force in August last year. Rights groups have highlighted guidelines related to remote biometric identification, biometric categorization according to race, gender, and other markers, scraping facial images from the internet, and emotion recognition.
All of these AI use cases are considered to pose an “unacceptable risk” to fundamental rights and are banned, according to the AI Act. The law, however, makes exceptions for specific circumstances, including for law enforcement purposes.
The organizations argue that the guidelines should specify that developing remote biometric identification systems for export falls under the ban. They also say it shouldn’t be enough for police forces to put up a sign or distribute flyers stating that an area is under surveillance to establish the legality of biometric surveillance. Finally, the groups call for a ban on retrospective remote biometric identification (RBI).
“While we continue to call for a full ban on retrospective RBI by private and public actors, we urge that the ‘significant delay’ clause should be at a minimum of 24 hours after capture,” the groups say.
The current ban on non-targeted scraping of facial images leaves room for problematic loopholes: systems like Clearview AI or PimEyes, which claim to store only biographical information or URLs rather than the facial images themselves, currently fall outside the prohibition. To close this gap, the groups argue, the Commission should consider deleting the proposed definition of a facial image database.
The biometric categorization ban should be expanded to include categories such as “ethnicity” and “gender identity.” The civil rights groups also expect that companies will try to masquerade emotion recognition products as health and safety tools to escape the ban and urge EU lawmakers to clearly define the difference between these systems.
These specifications should be included in the AI Act guidelines to prevent the weaponization of technology against marginalized groups and the unlawful use of mass biometric surveillance. The EU should also ensure that future consultations related to the implementation of the rulebook give a meaningful voice to civil society and impacted communities, the groups conclude.
The signatories include Privacy International, Access Now, European Digital Rights (EDRi), AlgorithmWatch, and Amnesty International, among others.
The European Commission’s guidelines for national authorities and for AI providers and deployers are set to be released in early 2025. In December, the European AI Office concluded a consultation aimed at defining AI systems and prohibited AI practices, the results of which will be used to formulate the guidelines.