US FTC Could Find NIST an Ally to Push Its AI Agenda

The Federal Trade Commission (FTC) is the federal agency leading regulatory efforts on artificial intelligence (AI) in the U.S. Since November 2021, when it issued limited guidance on AI and machine learning, and later through enforcement actions in 2022, the FTC has made clear that it is poised to tackle algorithmic discrimination and bias, at least until new rules are enacted.

But now, the FTC may find additional support from the National Institute of Standards and Technology (NIST) in its quest to provide guidance on the use of artificial intelligence. NIST is developing a framework to better manage the risks that AI poses to individuals, organizations and society. The NIST Artificial Intelligence Risk Management Framework (AI RMF) aims to improve the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems.

While NIST is a non-regulatory federal agency that doesn't enact new rules, its opinions and research carry weight with other agencies that do have rulemaking powers, and even with lawmakers who may introduce new legislation. NIST published a draft of the AI RMF in March and is seeking comments until April 29, before publishing the final version of the framework.

The AI RMF aims to foster the development of AI that addresses accuracy, interpretability, privacy and safety, and that mitigates unintended and/or harmful bias.

The draft framework is not intended to be a checklist or a compliance mechanism to be used in isolation. Instead, it should be integrated within the organization and incorporated into enterprise risk management.

While the AI RMF is a rather general list of non-binding attributes that are desirable in an AI system, the FTC may use it to push its agenda to fight algorithmic bias in AI.

The report dedicates one section to “Managing Bias,” explaining the three categories of bias in AI (systemic, computational and human) and how AI systems should account for all three. This report, along with other similar reports NIST published in March, provides recommendations on how to deal with this problem, which the FTC could use in future rulemaking.

Read more: FTC Mulls New Artificial Intelligence Regulation to Protect Consumers

In December 2021, FTC Chair Lina Khan, in a letter to Senator Richard Blumenthal (D-Conn.), outlined her goals to “protect Americans from unfair or deceptive practices online.” In particular, Khan said the FTC is considering rulemaking to address “lax security practices, data privacy abuses and algorithmic decision-making that may result in unlawful discrimination.”

In addition to its rulemaking authority, the FTC has used its enforcement powers to tackle concerns related to the misuse of algorithms. For instance, on March 3, the FTC ordered WW International and Kurbo to destroy all personal information collected from children under 13, as well as any algorithm derived from that data, and to pay a $1.5 million penalty. This new remedy, requiring the destruction of the algorithm, was evidence of how far the regulator is willing to go in this space.

See also: FTC Chair Wants to Step up Privacy Protection With New Rules

Another aspect of AI is privacy, and the FTC also has on its agenda proposing new rules to fill the void left by the lack of a federal privacy law. But for Khan to propose new rules and move them forward, she first needs a Democratic majority at the FTC, which she doesn't yet have, although that could change as early as this week.

According to an April 25 tweet from Senate Majority Leader Chuck Schumer, Alvaro Bedoya could be confirmed by the Senate this week. Bedoya has experience in privacy, and his confirmation would mean a third Democratic seat at the FTC.