UK Regulator Publishes Guidance on the Use of AI

21 November 2022

Following the recently increased focus of legislators and regulators globally on the creation of regulatory frameworks for artificial intelligence (“AI”), on 8 November 2022 the UK Information Commissioner’s Office (“ICO”) published guidance on the appropriate and lawful use of AI and the protection of personal data by organizations (the “Guidance”).

The Guidance clarifies the key steps organizations should take to improve their handling of AI and personal information under existing UK personal data protection law. In addition, the Guidance sets out the ICO’s responses to frequently asked questions (“FAQs”) concerning the intersection of AI and personal information.

Methods for Improving AI and Personal Information Handling

The Guidance presents eight key methods for improving the handling of AI and personal information to the standard required by the UK personal data protection law:

  1. Taking a risk-based approach when developing and deploying AI – at the initial stage, organizations should assess whether the use of AI is necessary for the purpose for which it is being deployed. Once the decision to use AI is made, the organization should carry out a data protection impact assessment (“DPIA”) and consult groups that could potentially be affected by the AI system.

  2. Carefully considering how to explain the AI system’s decisions to affected individuals – the ICO has emphasized the importance of data subjects’ right to obtain a meaningful explanation of AI decisions. The Guidance provides that organizations should: (a) be clear, open and honest about their use of personal data; (b) consider which explanation is needed, depending on the context in which the AI is used; (c) assess data subjects’ expectations concerning such explanation; (d) evaluate the potential impact of the AI’s decisions in order to determine how elaborate the explanation should be; (e) consider how individual rights requests will be handled.

  3. Collecting only the data needed to develop the specific AI system – the ICO guides organizations to: (a) ensure that the personal data used is accurate, adequate, relevant and limited to what is necessary, depending on the context; (b) consider privacy-preserving techniques appropriate to the context (e.g. perturbation, synthetic data or federated learning).

  4. Addressing risks of bias and discrimination at an early stage – according to the Guidance, organizations must address such risks at an early stage, and specifically: (a) evaluate whether the data is accurate, representative, reliable, relevant and up to date with regard to the population, or the different sets of people, to which the AI system will be applied; (b) map the consequences and effects of the AI system’s decisions for different groups and evaluate whether these are acceptable.

  5. Investing time and dedicating resources to appropriate preparation of the data – the Guidance emphasizes that in any case the data must be accurate, up to date and relevant. At the data preparation stage, the ICO guides organizations to: (a) set clear criteria and lines of accountability for the labelling of data that involves special category data or protected characteristics; (b) define the labelling criteria after consulting members of protected groups; (c) involve multiple human labellers.

  6. Ensuring the security of the AI system – under the applicable data protection law, organizations are required to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. The ICO has proposed several techniques that could assist in complying with these legal requirements in the context of AI systems: (a) conducting a security risk assessment; (b) conducting model debugging on a regular basis; (c) conducting proactive system monitoring and investigating any anomalies.

  7. Ensuring the meaningfulness of human review of decisions made by AI – at the initial stage, the organization should decide whether the AI will be used to make solely automated decisions or will only support a human decision-maker. Under the applicable data protection laws, data subjects can request a human review of a decision made about them where the decision has legal or similarly significant effects, and the ICO provides guidelines for ensuring that such review is meaningful: (a) the reviewer should have adequate training to interpret and challenge the AI system’s outputs; (b) the reviewer should have the authority to override the AI system’s decision; (c) the reviewer should take into account additional factors that were not part of the initial input data.

  8. Working with external suppliers to ensure the appropriate use of AI – the ICO assumes that when an organization procures an AI system from a third party, the organization will likely be considered a ‘controller’ and will therefore be required to demonstrate the AI system’s compliance with the applicable data protection legislation. The regulator hence guides organizations to: (a) conduct due diligence on AI system suppliers ahead of any procurement; (b) collaborate with the supplier to carry out a DPIA prior to deployment of the AI system; (c) set and document roles and responsibilities with the supplier with regard to compliance with data protection requirements; (d) request that the supplier demonstrate it has implemented a data protection by design approach; (e) ensure that international transfers of personal data comply with applicable data protection laws.

FAQs Concerning AI and Personal Information

The FAQs presented within the Guidance concern the following matters:

  1. the need to carry out a DPIA when planning to use AI;
  2. the applicability of the accuracy principle under data protection law to AI systems;
  3. steps to be taken in order to avoid bias and discrimination in organizations’ use of AI;
  4. ways to comply with the data minimization principle while developing an AI system;
  5. the interpretation of the term ‘solely automated decision with legal or similarly significant effects’ in the AI context;
  6. the legality of the use of AI;
  7. whether permission is required to analyze people’s data with AI;
  8. disclosures to individuals; and
  9. the use of third-party supplied AI systems.

We note that, in parallel, the EU’s legislative processes for the comprehensive pan-European AI Act, the AI Liability Directive and the AI Convention are gaining traction, with the AI Act expected to be further advanced by the end of the year.

Feel free to approach us with any further questions regarding the legal considerations and practical implications of the existing and upcoming regulatory frameworks that apply to the development and use of AI technologies. Our AI Practice experts possess deep legal, business and technical understanding of the AI domain, and constantly monitor global regulatory developments in this rapidly evolving area of legal expertise.