The General Data Protection Regulation (GDPR) classifies biometric data as a special category of personal data. This means that, as a rule, you may not process biometric data. However, the GDPR allows you to process special categories of personal data if your processing falls within one of the lawful grounds for processing, for example processing with the explicit consent of the data subject, or processing that is necessary for reasons of substantial public interest.
The GDPR’s extraterritorial scope covers entities processing EU residents’ data, so even non-EU organizations handling the biometric data of these residents must comply. Entities handling biometric identifiers must establish a legal basis, prioritize transparency, and respect individuals’ rights. Naturally, all of this applies equally to EU organizations.
Biometric data refers to the unique physiological or behavioural characteristics of an individual, used to identify or authenticate that individual against a pre-existing template. Common types include:
- Fingerprint: Distinctive impressions and ridges present on fingertips.
- Facial recognition: Evaluation of facial attributes such as the distance between eyes, nasal configuration, and jawline contour.
- Iris recognition: Inspection of the iris’s exclusive patterns.
- Voiceprint: Assessment of vocal attributes such as tone, pitch, and enunciation.
- Retina recognition: Examination of vascular configurations at the posterior of the eye.
- Hand geometry: Analysis of hand dimensions, form, finger length, and breadth.
- DNA: Analysis of an individual’s genetic information.
Examples of GDPR cases involving biometric data
The Spanish Data Protection Authority fined supermarket chain Mercadona €2,520,000 for unlawfully using facial recognition software in 48 of its stores in Spain. The system was designed to identify individuals with criminal records or restraining orders, but it also captured images of all customers, including children and employees, without valid consent. The processing also failed to comply with core privacy principles, including necessity, transparency, and privacy by design.
The Swedish Data Protection Authority (DPA) fined a school for taking attendance using facial recognition technology, because the processing of biometric data did not fall within any of the permitted grounds under the GDPR. The school had obtained parental consent to use the technology, but the DPA found the consent invalid and effectively ‘forced’ because of the imbalance of power between the school and the parents. Under the GDPR, data should, where possible, be collected by less intrusive means.
The Dutch DPA issued a €750,000 fine for the unlawful processing of employees’ biometric data. The company used the data for employee attendance and time registration. These grounds for processing biometric data were disproportionate and did not qualify under the exceptions in the GDPR.
The French DPA fined Clearview AI €20 million and ordered it to stop collecting and using data on individuals in France without a legal basis, and to delete the data already collected. Clearview AI had collected over 20 billion photographs online, including images from social media, to build “biometric templates”, i.e. sensitive information about people’s physical characteristics. The vast majority of people whose images were fed into the search engine were unaware of this. The French DPA found that Clearview AI had breached several articles of the GDPR and gave it two months to comply, with a penalty of €100,000 per day for any delay beyond that deadline.
The rules and related cases clearly demonstrate that biometric data may be used only for very limited purposes and on strict legal bases. Businesses and public authorities must always assess whether such processing is absolutely necessary or whether there is another way to achieve the same outcome. The transparency principle must also never be forgotten.
Biometric data and AI in the EU – the near future
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. The European Commission has proposed the first EU regulatory framework for AI. Under the framework, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users; the different risk levels entail more or less regulation. Once approved, these will be the world’s first rules on AI, and they may strike a minor chord with parts of the digital industry.
EU Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes. Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.
AI Act: different rules for different risk levels
The new rules establish obligations for providers and users depending on the level of risk posed by the AI system. While many AI systems pose only minimal risk, they still need to be assessed.
Unacceptable risk
AI systems considered a threat to people are classed as unacceptable risk and will be banned. They include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI, like ChatGPT, would have to comply with transparency requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
Limited risk
Limited-risk AI systems need only comply with minimal transparency requirements that allow users to make informed decisions: users should be made aware when they are interacting with AI, and after interacting with an application they can decide whether they want to continue using it. This category includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
Next steps
Talks on the final form of the law will now begin with EU countries in the Council, with the aim of reaching an agreement by the end of this year. If your business provides or uses such services, you should keep a close eye on how they will be regulated in the near future in order to remain compliant.
Should you have further questions, please contact our Specialist Data Privacy Counsel, Andres Ojaver.