FTC Investigates OpenAI Over Role in Spreading AI-Generated Misinformation


Artificial intelligence (AI) is becoming a non-artificial threat.

That’s because while the apocalyptic doomerism surrounding the technology’s threat to humanity as a whole may be overblown, there are immediate risks to individuals stemming from the novel tech’s ability to generate deepfakes and spread nefarious misinformation, whether by design or inadvertently.

This, as the Federal Trade Commission (FTC) is now looking into whether OpenAI, the maker of market-defining chatbot ChatGPT, has harmed individuals via its AI system by publishing false information about them.

The agency has sent OpenAI a civil investigative demand focusing on areas that include the company’s practices for training AI models, handling of users’ personal information, and marketing efforts.

“Describe in detail the extent to which you have taken steps to address or mitigate risks that your large language model products could generate statements about real individuals that are false, misleading or disparaging,” one question in the letter reportedly said.

Reached by PYMNTS, an FTC spokesperson declined to comment on the report, while a representative of OpenAI didn’t immediately respond.

Read more: FTC Chair: Immediate AI Regulation Needed to Safely Develop Industry

A New Generation of Audience Manipulation Capabilities

The FTC, led by Chair Lina Khan, has broad authority to police unfair and deceptive business practices, and Khan previously said that she would “look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them” when enforcing FTC prohibitions on deceptive practices as they relate to the AI landscape.

The rise of generative AI has reshaped the information ecosystem, opening the door to more threats than ever before. Misinformation is older than the internet, of course, but at least until now the best defense has generally been relying on one’s own innate common sense.

The ability of generative AI to craft entirely lifelike and believable content is rapidly transforming that paradigm — and regulators in charge of consumer safety are starting to prep their battle plans.

“I think if this technology goes wrong, it can go quite wrong,” OpenAI CEO Sam Altman said during a U.S. Senate Hearing earlier this year. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

Until regulation does appear, “Pandora’s box has been opened. AI is really powerful … The tail wags the dog now: Things happen online first, and then trickle down to real life,” Wasim Khaled, CEO and co-founder of intelligence platform Blackbird.AI, told PYMNTS at the end of last month.

“Misinformation and disinformation … can be a killer,” Khaled added.

“There’s a beautiful upside [to generative AI],” Gerhard Oosthuizen, CTO of Entersekt, told PYMNTS in February. “Unfortunately, there is also a darker side. People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs.”

Read also: How AI Regulation Could Shape Three Digital Empires

Innovation Shouldn’t Stall at the Government’s Snail’s Pace

AI models within the U.S. are currently scaling and commercializing absent any dedicated regulation or policy guardrails.

In previous discussions with PYMNTS, industry insiders have compared the future purpose of AI regulation to a car’s airbags and brakes, as well as to the role of a restaurant health inspector. But while platforms continue generating content without any rules of the road, the stakes around misinformation are growing as AI gets better at its fabrications.

OpenAI itself is attempting to curry favor with regulators on both sides of the Atlantic, favoring a more centralized and specialized approach in the U.S. while pushing back against certain EU data-privacy regulations the firm views as needlessly restrictive, even briefly threatening to leave the bloc.

In a May blog post, OpenAI President Greg Brockman, CEO Sam Altman and Chief Scientist Ilya Sutskever together suggested coordinating the leading AI development efforts to limit the annual rate of growth in AI capability, forming an international authority to monitor development efforts and restrict those above a certain capability threshold, and building the technical capacity to make superintelligence safe.

“If you make it difficult for models to be trained in the EU versus the U.S., well, where will the technology gravitate? Where it can grow the best — just like water, it will flow to wherever is most easily accessible,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS.

And as it becomes increasingly difficult for everyday consumers to spot misinformation in a crowded, real-time content landscape, the clock is already ticking for regulatory agencies.