FTC Chair: Immediate AI Regulation Needed to Safely Develop Industry

The ongoing digital revolution is continually translating frontier technologies into commercial applications.

If a task or process feels routine today, it will likely be the target of an algorithm tomorrow.

Nowhere is this more true than in the field of generative artificial intelligence (AI), which has already led to wide-ranging disruptions across industries and has caused businesses like IBM to pause or slow hiring for up to 26,000 back-office roles that the company believes AI could someday perform.

Lina Khan, chair of the Federal Trade Commission (FTC), penned an op-ed in The New York Times (NYT) Wednesday (May 3) calling for AI to be regulated.

“Can [the U.S.] continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices,” Khan wrote.

Groundbreaking innovations frequently require effective regulation to ensure that their disruptions don’t go unchecked and wreak havoc.

While the full extent of generative AI’s potential remains to be seen, there is little doubt among industry observers and participants that it will be highly disruptive, even dangerous.

“As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices,” Khan said.

Read more: Generative AI Tools Center of New Regulation-Innovation Tug of War

Responsible AI Requires Algorithmic Rigor

Because AI algorithms, despite their “intelligent” moniker, have no inherent ability to understand consequences beyond their immediate objectives, their responsible use requires transparency and accountability around both model inputs (training data) and outputs.
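As a minimal sketch of what that kind of transparency can look like in practice, an application could record each model input and output so automated decisions can be reviewed later. The function name, record fields and file path below are purely illustrative assumptions, not an FTC requirement or an existing library API.

```python
import hashlib
import json
import time


def log_model_call(model_name, model_version, prompt, output, log_path="model_audit.jsonl"):
    """Append one model call to a JSON Lines audit log (illustrative sketch only)."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        "version": model_version,
        # Hash the raw prompt so the log can show what was submitted
        # without storing potentially sensitive text verbatim.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example usage with placeholder values.
log_model_call(
    model_name="example-llm",
    model_version="2024-05",
    prompt="Summarize this loan application ...",
    output="Summary: ...",
)
```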

As PYMNTS reported, the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission and the U.S. Equal Employment Opportunity Commission (EEOC) last Tuesday (April 25) released a joint statement underscoring that any decisions made by AI tools must follow U.S. laws.

“Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices,” read the joint release.

In her NYT opinion piece, the FTC chair highlighted how the rise of the current crop of tech giants, including Meta, Google parent Alphabet, Amazon and others, “began as a revolutionary set of technologies [but] ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.”

“As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” Khan added.

See also: Schumer Unveils Regulatory Framework for American AI Leadership

Benefits of Regulation Should Be Greater Than Costs

Technology has historically moved faster than the regulations it is subject to, and PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.

Given the rapid pace of their advancement, AI tools will increasingly figure centrally across nearly every element of daily life, as well as impact most facets of business. As ongoing digitization transforms the world’s key experiential touch points into ever-more data-rich environments, this situation will only compound.

That’s why lawmakers in Europe want to give regulators more authority over AI companies, and even Microsoft’s chief economist believes the technology has its dangers and illegal use is a matter of “when,” not “if.”

In a further blow to unfettered corporate use of the technology, AI pioneer Geoffrey Hinton resigned from Google on Monday (May 1) in order to “speak freely” about potential risks that widespread integration of AI could “pose to humanity.”

See also: AI Regulations Need to Target Data Provenance and Protect Privacy

“When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them,” the FTC’s Khan wrote.

So what are the main threats that have even former sector pioneers worried enough to turn their backs on their life’s work?

The primary concerns center on the ability of AI models to create fake news and impersonate real individuals, as well as their potential to amplify the biases or inaccuracies present in the data they were trained on. That combination both creates and perpetuates inequitable outcomes while handing bad actors ample ammunition for malicious cyberattacks at an entirely new scale.

In a way, it all boils down to data.

“The algorithm is only as good as the data that it’s trained on,” Erik Duhaime, co-founder and CEO of data annotation provider Centaur Labs, told PYMNTS earlier this week.

PYMNTS has written in the past, when discussing the perils and potential of AI, that governments and regulators can protect consumer privacy without hampering private sector innovation and growth by enacting guardrails around the provenance of the data used to train large language models (LLMs) and other models, and by making it obvious when an AI model is generating synthetic content, whether text, images or even voice, and flagging its source.
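As a minimal illustrative sketch of that kind of disclosure, generated content could carry a machine-readable provenance label. The class, field and function names below are hypothetical assumptions for illustration and do not reflect any existing standard or regulatory requirement.

```python
import json
import time
from dataclasses import asdict, dataclass


@dataclass
class SyntheticContentLabel:
    """Hypothetical provenance label for AI-generated content (illustrative schema)."""
    generator: str             # which model produced the content
    generated_at: float        # creation timestamp (Unix time)
    content_type: str          # "text", "image", "audio", ...
    training_data_source: str  # provenance of the training data, if known


def label_output(content: str, label: SyntheticContentLabel) -> dict:
    """Bundle generated content with a machine-readable flag and provenance label."""
    return {"content": content, "synthetic": True, "label": asdict(label)}


labeled = label_output(
    "This paragraph was generated by a language model.",
    SyntheticContentLabel(
        generator="example-llm",
        generated_at=time.time(),
        content_type="text",
        training_data_source="licensed-corpus-v1",
    ),
)
print(json.dumps(labeled, indent=2))
```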

Past periods of technological disruption have created great opportunity for all, including spurring the software industries of today, and it is important not to miss the forest for the trees. Because when, or if, the trees are all cut down, the forest might, in fact, end up being missed.