
New Regulatory Guidance in the UK on Explaining AI-Based Decision-Making

7 June 2020

Technology & Regulation in the Spotlight

The Information Commissioner's Office (“ICO”) and The Alan Turing Institute recently published detailed practical guidance for organizations on explaining the processes, services and decisions delivered or assisted by Artificial Intelligence (“AI”) to the individuals affected by such decision-making. The guidance was published in response to the UK government's AI Sector Deal policy paper, and focuses on how organizations can give individuals a better understanding of how AI systems work and how decisions are made, based on the requirements of the General Data Protection Regulation (“GDPR”) and the Data Protection Act 2018 (“DPA”). The guidance represents regulatory “good practice” and is not a binding statutory code.

The guidance is split into three parts, each aimed at different personnel within the organization:

  • The first part is aimed at the organization's Data Protection Officer and compliance teams, outlining key terms and concepts associated with AI and the laws applicable to automated decision-making. The guidance distinguishes between solely automated and assisted decision-making. With regard to solely automated decisions, the guidance reiterates the requirements under the GDPR, requiring organizations to proactively provide information explaining the logic behind the decision-making process and to enable individuals to exercise their rights to access meaningful information and to object to automated decision-making. The guidance addresses industry concerns that explainability and transparency requirements may force the disclosure of commercially sensitive material about AI systems, stating that disclosure of in-depth information, such as code or algorithms, is not required for compliance.


  • The second part is aimed mainly at technical teams, considering how organizations can choose appropriate models for explaining their AI-based decision-making processes. The guidance sets out six tasks that may help in designing explainable AI systems and delivering appropriate explanations to individuals, according to the needs of the individuals, the sector, the personal data involved, and the nature and effect of the decision in question.


  • The third part is aimed at senior management in the organization. The guidance covers the roles, policies, procedures and documentation required to ensure organizations are able to provide meaningful explanations to individuals subject to AI-based automated decision-making. The guidance acknowledges that many organizations will not build their own AI systems and will instead rely on systems designed by third parties. According to the guidance, this does not affect their obligations: such organizations must be able to explain the decision-making process of the systems they use in order to meet their obligations as controllers under the GDPR and DPA.


Aligning AI-based products and services with rapidly developing regulatory frameworks is currently a focus of governments and regulators. In that context, see for example our recent updates on the guidance issued by the Spanish data protection authority on the application of the GDPR to products and services that embed AI, the FTC's guidance on AI, the White Paper of the European Commission, and the proposed regulatory principles published by the White House. Feel free to contact us if you have any questions on the legal implications of implementing AI systems in your organization.


Kind regards,

Ariel Yosefi, Partner

Co-Head | Technology & Regulation Department

Herzog Fox & Neeman
