Cost of Proposed US AI Bill May Outweigh Its Benefits


Senator Ron Wyden (D-Ore.), along with Senator Cory Booker (D-N.J.) and Representative Yvette Clarke (D-N.Y.), introduced the Algorithmic Accountability Act of 2022 in early February. The bill aims to bring transparency and oversight to the software, algorithms and other automated systems used to make automated decisions.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalized communities,” said Sen. Booker.

The bill requires companies to conduct impact assessments for bias, effectiveness and other factors when they use automated decision systems to make critical decisions. It also gives the Federal Trade Commission (FTC) the authority to require companies to comply with the bill and to create a public repository of these automated systems.

This bill would apply only to companies with more than $50 million in average annual gross receipts that make automated decisions concerning more than a million consumers.

The FTC seems to be the right choice to enforce this bill, as a violation of the bill or of any regulation the FTC adopts under it could be treated as an “unfair or deceptive act or practice” under Section 18 of the Federal Trade Commission Act. Furthermore, the FTC is already looking at the potentially harmful effects that biased and unexplainable algorithmic decisions have on consumers. The agency is also considering new regulations to ban certain artificial intelligence (AI) practices and to offer more guidance to companies on the use of AI and automated decision systems.

Read more: FTC Mulls New Artificial Intelligence Regulation to Protect Consumers

The U.S. doesn't have a specific law regulating AI, and the FTC appears to be the only agency trying to curb algorithmic discrimination and privacy abuses. The same lawmakers introduced a similar bill to regulate AI in 2019, but the initiative didn't gain enough traction. It is not clear whether the updated Algorithmic Accountability Act will find more support now, even though it adds more detail on how impact assessments should be conducted and which types of algorithms are covered.

“The 2022 legislation shares the goals of the earlier bill, but includes numerous technical improvements, including clarifying what types of algorithms and companies are covered, ensuring assessments put consumer impacts at the forefront, and providing more details about how reports should be structured,” said Wyden, Booker and Clarke in a statement.

The proposed bill is not particularly intrusive in how companies operate and use their automated decision systems (ADS), but it adds a number of reporting and disclosure requirements whose benefits may not clearly outweigh their costs. For instance, companies would be required to conduct, and share with the FTC, an impact assessment for every ADS, covering a description of the system, documentation of the data and other inputs it uses, an evaluation of its privacy risks and privacy-enhancing measures, an evaluation of its performance, consumers' rights to contest, correct or appeal its decisions, and its likely negative impacts on consumers.

The FTC would use this information to publish an annual report with trends and lessons from the impact assessments, in addition to the public repository where people could find anonymized information on ADS.

While the bill is an important first step in AI regulation, aiming to identify algorithmic bias and make companies more transparent about how they use ADS, it may impose significant regulatory costs on companies and still fail to end the most damaging practices, since the FTC would need to investigate any potential violation of the FTC Act.

In Europe, policymakers are debating the Artificial Intelligence Act, which takes a different approach: it has a broader scope in terms of potentially harmful practices, but it is limited to a relatively small number of companies providing “high risk” AI systems.

Read also: EU Parliament Committee Urges Member States to Design a Roadmap for AI

 
