The Council of the European Union Adopts a General Approach to the AI Act


2 January 2023

Following the recently increased focus of legislators and regulators globally on artificial intelligence (“AI”), the Council of the European Union (the “Council”) has adopted its position on the proposal for a comprehensive European regulation on artificial intelligence, also known as the “Artificial Intelligence Act” (the “AI Act”).

The Council’s position proposes a wide array of amendments to the original text of the proposed AI Act. Below, we briefly present the key changes included in the revised text of the AI Act under the Council’s approach (hereinafter: the “compromise text”):

  1. The Scope of the Act and Law Enforcement – the compromise text clarifies that national security, defense and military purposes fall outside the scope of the AI Act. It further clarifies that the AI Act applies neither to AI systems (and their outputs) used solely for research and development, nor to the obligations of persons using AI systems for non-professional purposes, with the exception of the transparency obligations. The Council has also proposed additional changes to the provisions concerning the use of AI systems for law enforcement purposes.


  2. Definition of “AI System” – the Council seeks to draw a clearer distinction between AI and more classical software systems, and proposes to narrow the definition of an “AI system” to systems developed through machine learning approaches as well as logic- and knowledge-based approaches. As part of this effort, the Council has also clarified the meaning of both categories of approaches.


  3. Prohibited AI Practices – the compromise text extends the prohibition on the use of AI for social scoring beyond the public sector to private actors. It also expands the prohibition on the use of AI systems that exploit the vulnerabilities of a specific group of persons to cover vulnerabilities arising from social or economic situations. Furthermore, the compromise text clarifies the situations in which the generally prohibited use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities may exceptionally be permitted.


  4. General Purpose AI Systems – the compromise text seeks to apply certain requirements that the original text imposed on high-risk AI systems also to general purpose AI systems (namely, systems which may be used for many different purposes). However, under the compromise text these requirements will not apply directly but will instead be specified in an implementing act.


  5. Classification as High-risk AI Systems – under the compromise text, the classification of AI systems as high-risk includes an additional horizontal layer, requiring that the significance of the AI system’s output with regard to the specific action or decision also be taken into account. In this way, the Council intends to ensure that only AI systems likely to cause serious violations of fundamental rights or other significant risks are classified as high-risk. The Council also seeks to amend the list of high-risk AI use cases: “deep fake” detection by law enforcement authorities, crime analytics and verification of the authenticity of travel documents have been removed from the list, while the use of AI systems in critical digital infrastructure as well as in life and health insurance has been added. Additional changes and clarifications have been made with regard to certain other use cases.


  6. Requirements for High-risk AI Systems – the compromise text clarifies and adjusts the requirements for high-risk AI systems in order to increase their technical feasibility and make compliance less burdensome for the relevant actors (e.g., with regard to data quality and the technical documentation demonstrating compliance). In addition, it aims to clarify the allocation of roles and responsibilities among the relevant actors, as well as the relationship with other applicable legislation (such as data protection or financial services regulations).


  7. Transparency – the Council has also introduced several changes aimed at increasing transparency concerning the use of high-risk AI systems, including, among others, an obligation on users of emotion recognition systems to inform natural persons of their exposure to such systems.


  8. Additional Changes – the compromise text also includes substantial amendments and additions to the measures in support of innovation and to the unsupervised real-world testing of AI systems. The Council further seeks to amend the provisions on penalties for infringements of the AI Act (with a particular focus on the treatment of start-ups and small and medium-sized enterprises) and on the AI Board. It has also provided additional clarifications and simplifications of the provisions concerning conformity assessments and market surveillance, and has clarified that a person may file a complaint concerning non-compliance with the AI Act with the applicable market surveillance authority and expect it to be handled properly.


Once the European Parliament adopts its own position on the AI Act (which is expected to occur in the coming months), the Council may enter negotiations with the European Parliament and seek to reach an agreement on the proposed AI Act. We are closely monitoring this and other global regulatory developments in this rapidly developing area of law, especially in light of the increased public and regulatory interest in, and caution around, AI technology following the public release of the ChatGPT chatbot, for example, and other generative AI use cases.

Feel free to approach us with any further questions regarding the legal considerations and practical implications of the existing and upcoming regulatory frameworks that apply to the development and use of AI technologies. Our AI Practice experts possess deep legal, business and technical understanding, as well as hands-on experience in the AI domain, and would be glad to assist.
