
The FTC Publishes Guidelines on Artificial Intelligence and Algorithms

12 April 2020

Technology & Regulation in the Spotlight

The Federal Trade Commission (“FTC”) has released its guidelines on the use of artificial intelligence (“AI”) and algorithms. The guidelines emphasize the potential benefits of AI for welfare and productivity, but also refer to the risks that may accompany it, such as unfair and discriminatory decisions.

To provide guidance that is widely applicable, comprehensive, and adapted to innovative uses of AI across various sectors, the guidance draws, inter alia, on the FTC’s experience and enforcement in the area of automated decision-making. In the guidelines, the FTC highlights five key elements that must be taken into consideration when designing AI-based products and services:

  • Transparency: Consumers should not be deceived by interactions that are based on automated tools, such as chatbots or fake identities. Collection of sensitive data, such as audio, in order to enhance an algorithm should also be transparent, and a failure to disclose it could lead to FTC action. Certain kinds of information obtained from a third party for automated decisions may require providing an “adverse action” notice, where the supplier is considered a “consumer reporting agency” under the Fair Credit Reporting Act (“FCRA”);
  • Explanation: When an algorithm leads to the denial of something of value (for example, the denial of credit), companies must be able to provide a clear and specific explanation of what data was used in the model and how it was used. The ability to provide such explanations should be kept in mind whenever AI is used to make decisions about consumers. Similar disclosures of the relevant factors are required when consumers are assigned risk scores or when the terms of a deal are changed based on automated tools;
  • Fairness: AI-based results must not discriminate against protected classes. It is advised that algorithms be tested before any use and periodically thereafter, to ensure that discriminatory results do not occur. The FTC emphasizes that in cases of suspected illegal discrimination, both the inputs and the outcomes will be examined; therefore, less discriminatory alternatives should always be sought. In high-stakes decisions, such as those regulated under the FCRA, consumers should be given access to the information and an opportunity to correct it;
  • Accuracy: Providing certain types of data for others to base important decisions on may create legal obligations. For example, FCRA compliance includes an obligation to implement reasonable procedures to ensure maximum possible accuracy. Such obligations also apply whenever data is provided to “consumer reporting agencies” for use in automated decision-making. In any case, AI models should always be based on accepted statistical principles and methodologies and be revalidated periodically;
  • Accountability: Mechanisms such as independent and objective testing should be implemented. Any entity operating an algorithm should consider the following: the extent to which the data set is representative; whether the data model accounts for biases; the accuracy of predictions based on big data; and whether reliance on big data raises ethical or fairness concerns. In addition, algorithms should be protected from unauthorized use. When selling AI to others, companies should consider whether it could be abused for purposes other than its original designation and whether technological measures could be added to prevent such abuse.


Aligning AI-based products and services with rapidly developing regulatory frameworks is currently a focus of governments and regulators. In that context, see, for example, our recent updates on the European Commission’s White Paper and the regulatory principles proposed by the White House.

**********************************************

Please feel free to approach us with any further questions regarding the legal considerations and practical implications of the new regulatory frameworks that apply to the use of AI technologies.

Kind regards,

Ariel Yosefi, Partner

Co-Head | Technology & Regulation Department

Herzog Fox & Neeman
