
New Regulatory Guidance in the UK on AI and Data Protection


2 August 2020

Technology & Regulation in the Spotlight

Following two years of research and consultation, the UK’s Information Commissioner’s Office (“ICO”) has published regulatory guidance on artificial intelligence (“AI”) and data protection. This document complements other ICO resources, including its report on Big Data, AI and Machine Learning (“ML”) and the recently published guidelines on explaining decisions made with AI.

While the guidance is not a statutory code, it seeks to provide a practical framework setting out the ICO’s view of best practices for data protection compliance when designing or implementing AI systems. Although the guidance focuses on the data protection challenges presented by ML-based AI, it also acknowledges challenges that may arise from other kinds of AI.

The guidance focuses on four key areas: the first part covers accountability and governance; the second part addresses fair, lawful and transparent processing; the third part addresses data minimization and security; and the fourth part covers compliance with data subjects’ rights.

  • Accountability and governance: According to the guidance, AI increases the importance of embedding data protection by design and by default into all processes. In addition, decisions should be documented in order to allow companies to demonstrate that they have assessed the potential risks and acted to mitigate them. The documentation should include, inter alia, an explanation of how conflicting interests and values were balanced (e.g. explainability versus commercial secrecy). These requirements also apply when procuring AI systems from third parties. Data protection impact assessments are required for any AI system that involves processing of personal data. Given the complexity of AI systems and the fact that multiple organizations may be involved in them, the guidance also emphasizes the need to clearly determine the controller-processor relationships.

 

  • Fair, lawful and transparent processing: As AI systems involve processing personal data in different ways and for various purposes, each processing operation and its lawful basis must be examined separately. In addition, to comply with the fairness principle, sufficient statistical accuracy must be ensured in systems that make inferences about people. Companies should also take measures to mitigate discrimination risks and regularly test the system’s performance. Such measures may require processing special categories of data, in which case an appropriate lawful basis must be identified.

 

  • Data minimization and security: The guidance warns that AI systems may exacerbate known security risks. For example, the large amounts of personal data often involved increase the potential impact of data loss or misuse. The guidance also describes certain privacy attacks that may exploit AI systems’ inherent vulnerabilities, and advises companies to implement various security measures, such as subscribing to security advisories in order to be notified of new vulnerabilities. In addition, while AI systems may present difficulties in this context, the guidance emphasizes that the principle of data minimization applies and must be complied with: processing must be adequate, relevant and limited to what is necessary. The ICO offers several techniques in this regard, including removing features that are not relevant to the purpose from a training data set, or making inferences locally on the user’s own device.

 

  • Compliance with data subjects’ rights: The guidance addresses particular challenges that AI systems may raise in the context of rights relating to personal data. The ICO highlights that companies must not treat data subjects’ requests in this context as manifestly unfounded or excessive merely because they may be harder to fulfil in such systems. In addition, training data may constitute personal data, and therefore be subject to such requests, where it can be used to ‘single out’ the data subject to whom it relates. Regarding the right to rectification, the guidance clarifies that predictions intended as prediction scores rather than statements of fact will not be considered inaccurate, and, provided the underlying personal data is accurate, this right will not apply to them. Similarly, personal data resulting from further analysis of provided data is not subject to the right to data portability, nor is personal data that has been significantly transformed in the process. The guidance also emphasizes that steps must be taken to fulfil rights related to automated decision-making; these may include designing effective user interfaces to support human review, together with appropriate training and support for reviewers.

 

Aligning AI-based products and services with rapidly developing regulatory frameworks is a focus of various governments and regulators. In that context, see for example our updates on the recently published guidelines of the Federal Trade Commission, the European Commission’s White Paper, and the regulatory principles proposed by the White House.

Please feel free to approach us with any further questions regarding the legal considerations and practical implications of the new regulatory frameworks that apply to using AI technologies.

Kind regards,

Ariel Yosefi, Partner

Co-Head | Technology & Regulation Department

Herzog Fox & Neeman
