ChatGPT Threatens ‘Privacy and Public Safety,’ Nonprofit Says


A nonprofit group wants federal regulators to suspend development of OpenAI’s ChatGPT tool.

The Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission (FTC) Thursday (March 30) asking it to investigate the artificial intelligence (AI) company and put a halt to its development of large language models for commercial purposes.

“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability.’ OpenAI’s product GPT-4 satisfies none of these requirements,” the complaint says. “It is time for the FTC to act.”

The complaint accuses OpenAI of violating Section 5 of the FTC Act, which bars unfair and deceptive business practices, as well as the agency’s guidance for AI products, and calls the latest iteration of ChatGPT “biased, deceptive, and a risk to privacy and public safety.”

Earlier this week, CAIDP President Marc Rotenberg joined a group of high-profile figures, including Elon Musk and Apple co-founder Steve Wozniak, in signing an open letter calling for a temporary halt to the development of AI systems more powerful than GPT-4.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” reads the letter, issued by the Future of Life Institute. “Should we automate away all the jobs, including the fulfilling ones?”

The letter came within days of a report from Goldman Sachs, which found that advances in AI could eventually affect 300 million jobs worldwide, and a quarter of all jobs in Europe and the U.S., with attorneys and administrative workers at the highest risk of job loss.

According to the report, roughly two-thirds of jobs in Europe and the U.S. are exposed to some level of AI automation.

The Goldman Sachs report follows recent findings from OpenAI projecting that generative pre-trained transformer (GPT) models, and software tools built atop them, could affect up to 50% of the tasks necessary for nearly a fifth of U.S. jobs.

PYMNTS looked at the impact AI could have on the workforce in a recent interview with Matthew Tillman, chief executive of automated accounts payable solution OpenEnvoy.

“[AI] no longer just has an implication as a use case, it has an implication on their business,” Tillman told PYMNTS CEO Karen Webster.

The implication, Tillman added, is that businesses can now determine almost immediately how many workers they will need to staff a department in the near future, leading to a world where more CFOs begin to ask, “How can we run a department of one?”