EU Legislatures Agree on the World’s First Comprehensive AI Regulation
10 December 2023
On 8 December 2023, the Council of the European Union and the European Parliament reached a provisional agreement on the Artificial Intelligence Act (the "Provisional Agreement" and the "AI Act", respectively), setting a precedent as the first comprehensive legal framework for AI in the world.
The Provisional Agreement follows marathon talks between the legislative bodies in recent weeks. Last year, the Council adopted its position on the draft AI Act, as initially presented by the EU Commission in 2021, and that position was negotiated before the agreement was reached.
This update provides a concise overview of the key aspects of the Provisional Agreement, comparing it to the earlier draft legislation and the Council's position, and outlines its implications for businesses and other stakeholders ahead of the publication of the final regulation.
Key Elements of the Provisional Agreement
Definition and Scope:
- The Provisional Agreement aligns the proposed AI Act’s definition of what would be regulated as “AI system” with the OECD’s approach, ensuring clear differentiation from simpler software systems.
- It clarifies that the AI Act does not apply to areas outside the scope of EU law and exempts military, national security, and research and innovation applications from its scope, as well as non-professional uses.
Classification of AI Systems:
- AI systems are classified based on their potential risk: high-risk systems are subject to more stringent rules, while low-risk AI systems face minimal obligations, primarily transparency-related, such as disclosing that content was AI-generated.
- The Provisional Agreement includes clarifications and adjustments to the requirements and obligations to which high-risk AI systems are subject, in order to make them more technically feasible and less burdensome. These adjustments include more lenient requirements for small and medium-sized enterprises (SMEs).
- The Provisional Agreement also provides further clarification regarding the allocation of responsibilities and roles among the various actors in the AI system value chain, particularly with respect to the responsibilities of AI system providers and users, and the interplay between the responsibilities under the AI Act and those under existing EU legislation.
Prohibited AI Practices:
- Certain AI uses deemed too dangerous are banned in the EU under the draft AI Act. Following the position adopted in the Provisional Agreement, the ban will cover, for example, cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, social scoring, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing of individuals.
Law Enforcement Exceptions:
- The Provisional Agreement introduces special exemptions for law enforcement authorities. For example, law enforcement authorities will be permitted to use high-risk AI tools in urgent situations, even if these tools have not undergone standard conformity assessments. In parallel, additional safeguards will be put in place to protect fundamental rights and prevent potential AI misuse in law enforcement contexts.
- Although generally prohibited, the use of real-time remote biometric identification systems in public spaces by law enforcement authorities will be allowed under strict regulation. The deployment of such systems will be restricted to exceptional circumstances, such as identifying serious crime victims, preventing imminent threats like terrorist acts, or locating suspects of major crimes.
General Purpose AI Systems and Foundation Models:
- The original draft of the AI Act did not address general purpose AI systems. After provisions addressing this aspect were included in the Council's position earlier this year, the Provisional Agreement includes new provisions governing the use of General Purpose AI (i.e., AI that can be used for many different purposes; "GPAI"), and the integration of GPAI into high-risk AI systems.
- Specific rules have also been agreed for foundation models, which will have to comply with specific transparency obligations. 'High impact' foundation models, meaning models trained on large amounts of data and with advanced complexity, capabilities, and performance well above the average (e.g., OpenAI's GPT, Google's Gemini), will have to follow a stricter regime, as such models can disseminate systemic risks along the value chain.
Governance and Enforcement:
- A new AI Office within the EU Commission will oversee advanced AI models such as GPAI and foundation models. The AI Office will contribute to fostering standards and testing practices, and will enforce the common rules across all EU member states. A scientific panel of independent experts will advise the AI Office.
- An AI Board, composed of member states' representatives, will act as a coordination platform and an advisory body on the implementation of the regulation. An advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
- Fines are set as a percentage of global annual turnover or a fixed amount. Depending on the nature of the violation, fines can reach up to €35 million or 7% of global annual turnover, with lower caps for SMEs and startups.
Transparency and Protection of Fundamental Rights:
- The Provisional Agreement provides for a requirement to conduct a fundamental rights impact assessment before deploying high-risk AI systems.
- Increased transparency will be required for the use of high-risk systems, especially by public entities, which will be obliged to register in an EU database in certain cases.
- Companies using emotion recognition systems will have to inform natural persons when they are being exposed to such a system.
Measures Supporting Innovation:
- The Provisional Agreement modifies certain provisions to foster innovation, including AI regulatory sandboxes and testing in real-world conditions, subject to specific conditions and safeguards. Smaller companies will enjoy some limited and clearly specified derogations.
An updated final draft of the AI Act reflecting the Provisional Agreement is expected to be submitted in the upcoming weeks. Then, the AI Act would be subject to finalization and formal adoption by both EU co-legislators.
The AI Act is set to apply 2 years after its entry into force, with exceptions for specific provisions. Considering the latest developments, we expect the AI Act to enter into force in the next few months and become applicable by the first half of 2026.
We recommend that businesses and other relevant stakeholders begin preparing for compliance by evaluating their AI systems and processes once the final draft of the AI Act, which is expected to pass based on the Provisional Agreement, is published.
Feel free to approach us with any further questions regarding the legal considerations and practical implications of the existing and upcoming regulatory frameworks that apply to the development and use of AI technologies. Our AI Practice experts possess deep legal, business, and technical understanding, as well as hands-on experience in the AI domain, and would be glad to assist.