NIST Outlines a Set of Principles for Explainable AI

24 August 2020

Technology & Regulation in the Spotlight

The National Institute of Standards and Technology (“NIST”) has recently published draft guidance on explainable artificial intelligence (“AI”).

Drawing on how AI systems interact with their human recipients, the guidance highlights four fundamental principles of explainable AI: explanation, meaningful, accuracy and knowledge limits.

  • Explanation: AI systems must be capable of providing evidence, support or reasoning for their outputs. This principle does not define the quality of those explanations; the meaningful and accuracy principles provide the framework for evaluating it.


  • Meaningful: This principle requires that the explanation be understandable and meaningful to its recipients. Because there are different groups of users, and what is considered meaningful varies by context and across people, the explanation must be tailored to the characteristics and needs of each group. The principle also acknowledges that different users may perceive the same explanation differently, depending on variables such as their prior experience and knowledge, and AI systems and their explanations must account for these human factors.


  • Accuracy: The explanation should accurately reflect the system’s processes. Regardless of how accurate the system’s decisions are, the corresponding explanations might not accurately describe the process that led to them. Like the meaningful principle, the accuracy principle allows the required standard of accuracy to vary across contexts and audiences. The level of detail in an explanation must therefore be balanced against its meaningfulness to a particular audience. For example, a weather alert must be meaningful to the public even if it lacks an accurate explanation of how the system arrived at its output. An AI system that can generate several types of explanations for different audiences may be perceived as more explainable.


  • Knowledge Limits: According to this principle, AI systems should identify cases they were not designed or approved to operate in, as well as cases where their answers may not be reliable. In such cases, the systems should declare their limits and refrain from providing a decision. This safeguard, in turn, may increase the level of trust given to a system by preventing unjust, dangerous or misleading outputs and decisions. Knowledge limits may be reached in two ways: when the input falls outside the system’s domain, and when the system’s output does not meet a defined confidence threshold, as illustrated in the sketch below.
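To illustrate the second path, the following is a minimal, hypothetical sketch (in Python) of how a system might decline to answer when the input falls outside its intended domain or its confidence falls below a defined threshold. The domain list, threshold value and function name are illustrative assumptions and are not part of the NIST draft.

# Hypothetical illustration of the knowledge limits principle: the system
# abstains instead of answering when it is operating outside its limits.

KNOWN_DOMAINS = {"weather", "traffic"}   # domains the system was designed for (illustrative)
CONFIDENCE_THRESHOLD = 0.8               # minimum acceptable confidence (illustrative)

def answer_or_abstain(domain, prediction, confidence):
    """Return the prediction only when the system is within its knowledge limits."""
    if domain not in KNOWN_DOMAINS:
        return "No decision: the input falls outside the system's intended domain."
    if confidence < CONFIDENCE_THRESHOLD:
        return "No decision: the output does not meet the defined confidence threshold."
    return prediction

print(answer_or_abstain("weather", "Heavy rain expected", 0.93))  # within limits: returns the prediction
print(answer_or_abstain("weather", "Heavy rain expected", 0.55))  # abstains: confidence too low
print(answer_or_abstain("finance", "Approve the loan", 0.99))     # abstains: out-of-domain input

In both abstention cases the system declares its limits rather than producing a potentially misleading decision.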


This report is part of NIST’s broader effort to support the development of trustworthy AI, and it is open for public comment until 15 October 2020.

Aligning AI-based products and services with rapidly developing regulatory frameworks is a focus of various governments and regulators. In that context, see, for example, our updates on the recently published regulatory guidance of the UK’s Information Commissioner’s Office, the guidelines of the Federal Trade Commission and the White Paper of the European Commission.


Please feel free to approach us with any further questions regarding the legal considerations and practical implications of the new regulatory frameworks that apply to the use of AI technologies.

Kind regards,

Ariel Yosefi, Partner

Co-Head | Technology & Regulation Department

Herzog Fox & Neeman

