In a pivotal move, EU lawmakers have approved the EU Artificial Intelligence Act, a landmark regulation designed to safeguard fundamental human rights and ensure the safe development and application of AI across the union. This groundbreaking legislation paves the way for a future where AI serves humanity responsibly.
What is the AI Act?
The AI Act is the first comprehensive legislation covering the placing on the market, provision, and use of AI systems. The first draft of the AI Act was published in 2021, and in the second half of 2023 the EU institutions and member states commenced trilogues to agree on its final text. On 8 December 2023, after heated debates, a political deal was finally reached, covering all major aspects of the upcoming AI Act.
Key elements agreed upon
- General purpose AI systems
The AI Act introduces a new concept, General Purpose Artificial Intelligence (GPAI). These AI systems can perform a wide range of tasks, such as generating text, images or sounds. The best known examples of GPAI systems are ChatGPT, Bard, and DALL-E. GPAI systems have to comply with transparency requirements, such as the production of technical documentation, compliance with EU copyright requirements, and the publication of summaries of the data used during training.
The AI Act also regulates more powerful models that may pose a systemic risk (high-impact GPAI systems). Some well-known providers of high-impact GPAI systems are OpenAI (GPT-3, GPT-4), DeepMind (AlphaGo, AlphaFold), and IBM (IBM Watson). High-impact GPAI systems are trained on large amounts of data, are powerful and complex, and their use can lead to higher risks. They must meet additional requirements, such as model evaluations, assessment and mitigation of risks associated with the use of the system, and transparency and reporting obligations.
- Prohibited risk AI systems
Legislators agreed on the prohibited risk category, meaning that systems which pose a significant risk to citizens’ rights will be banned. The prohibited practices include:
- certain biometric categorisation systems that use sensitive characteristics;
- untargeted scraping of facial images to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people’s free will, and AI used to exploit people’s vulnerabilities.
- High risk AI systems
Systems deemed to be high risk (used in, among other areas, critical infrastructure, medical devices, law enforcement, the administration of justice, and democratic processes) will be subject to obligations such as establishing a risk management system, compiling and updating technical documentation, complying with transparency requirements, and ensuring human oversight.
AI systems interacting with persons must inform their users that they are interacting with a machine (for example, chatbots in customer service, Snapchat, and ChatGPT). Anyone using deepfakes that resemble real persons, places, etc. must label them as such. Additionally, users must be informed when an AI biometric categorisation or emotion recognition system is used.
Non-compliance with the regulation can lead to fines ranging from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of global turnover, depending on the infringement and the size of the company.
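The fine tiers above follow a "fixed amount or share of global turnover" pattern. As a minimal illustrative sketch of how such a ceiling could be computed (the tier names and the "whichever is higher" rule are assumptions for illustration, not quotes from the final text):

```python
# Illustrative sketch of the AI Act's fine ceilings, based on the two
# tiers mentioned above. Tier keys and the "whichever is higher" rule
# are assumptions here; the final text may set different amounts.

FINE_TIERS = {
    # infringement tier: (fixed cap in EUR, share of global annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "lesser_infringements": (7_500_000, 0.015),
}

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Return the assumed upper bound of the fine for a tier and turnover."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    # The ceiling is the higher of the fixed amount and the turnover share.
    return max(fixed_cap, turnover_share * global_turnover_eur)
```

Under this reading, a prohibited-practice infringement by a company with EUR 1 billion in global turnover would be capped at EUR 70 million (7% of turnover), since that exceeds the 35 million euro fixed amount.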
Timeline of the AI Act
The full text of the AI Act is expected to be published soon, with the Act slated to enter into force in Q1 2024. The AI Act will have an implementation period of 12 to 24 months, with obligations taking effect in stages.
How can companies prepare?
To align with the AI Act, companies should:
- Assess the AI solutions currently in use;
- Understand the obligations based on the risk level of their AI systems;
- Prepare to implement the necessary changes to their business activities to comply with the Act.
As the EU AI Act draws closer to implementation, Hedman Law Firm is here to assist businesses in navigating the complex regulatory landscape. We can help you understand the Act better, assess your current AI operations, identify potential compliance gaps, and develop a comprehensive compliance strategy. Contact us.