
The European Commission Published Draft Regulation on Artificial Intelligence


25 April 2021

Technology & eCommerce Regulation in the Spotlight

The European Commission has released a Proposal for a Regulation on a European approach for Artificial Intelligence (the “Proposal”), aiming to “improve the functioning of the internal market by creating the conditions for an ecosystem of trust” for the putting into service and use of artificial intelligence in the Union. The proposed regulation, which will have extraterritorial applicability similar to that of the General Data Protection Regulation (“GDPR”), will impose different obligations on providers and users of Artificial Intelligence (“AI”) systems.

Following the regulatory framework outlined in the White Paper published by the EU Commission in February 2020 (see our previous update), the Proposal is the first attempt to provide a comprehensive regulatory framework for AI, and it defines AI systems in the broadest terms. The definition covers any software developed with machine learning approaches, logic- and knowledge-based approaches, or statistical approaches that, “for a given set of human-defined objectives,” can “generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments.”

According to the Proposal, AI systems considered a clear threat to people’s safety, livelihoods and rights (e.g. general social scoring, systems designed to manipulate human behavior) will be prohibited. Other practices categorized as ‘high-risk’ based on their intended purpose and potential harms (e.g. employee management, law enforcement) will have to comply with the standards detailed in the Proposal. Other, lower-risk uses, which represent the majority of applications, will face no additional regulatory constraints beyond existing ones.

One of the main objectives of the Proposal is to ensure that people can trust that AI technologies are used appropriately. The Commission has therefore adopted a “cradle-to-grave” approach: AI systems will be subject to scrutiny before they are placed on the market and throughout their lifecycle, with particular emphasis on risk assessment and transparency. Among other requirements, AI systems classified as ‘high-risk’ must be assessed for full compliance with the regulatory requirements before being distributed. Moreover, providers of such systems must establish a risk management system and carry out post-market assessments, as well as ensure an adequate level of security and accuracy, including by reporting “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority.

To assist startups and small and medium-sized enterprises in developing AI systems, the Proposal also encourages Member States to establish support measures (such as regulatory sandboxes) so that providers can pre-assess their systems’ compliance with the regulation. Non-compliance, however, could lead to fines of up to €20M or 4% of annual turnover, whichever is higher, similar to the GDPR.

In the same context and with similar timing, the US Federal Trade Commission (FTC) published its recommendations for organizations using AI in the US, building on its 2020 guidance (see our update). Among other things, the FTC warns providers against using data in ways that may yield inequitable or discriminatory results. Additionally, the FTC encourages companies to embrace transparency and independence and to provide truthful and non-deceptive statements regarding their use of AI.

Like the EU Commission, the FTC acknowledges that despite their potential benefits, AI tools are not without risks, including operational vulnerabilities, cyber threats, heightened consumer protection issues and privacy concerns. Companies deploying AI systems are generally expected to operate following the “do more good than harm” principle.

The Proposal and the FTC’s recommendations represent important regulatory developments for entities developing, distributing or using AI technologies. Companies should examine how they use AI and assess their potential exposure.

Feel free to contact us if you have any questions regarding these developments and their potential effects on your company’s compliance efforts.


Kind regards,

Ariel Yosefi, Partner 
Head of Technology & eCommerce Regulation

