OpenAI CEO: If AI Goes Wrong, ‘It Can Go Quite Wrong’

The head of ChatGPT parent OpenAI is pleading with Congress to regulate his company’s technology.


    Testifying before a Senate subcommittee Tuesday (May 16), Sam Altman likened artificial intelligence’s (AI) potential to that of the printing press, but said the technology needs proper oversight to prevent possible harm.

    “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said in widely reported testimony. “We want to work with the government to prevent that from happening.”

    Senators reportedly spent much of the hearing underscoring the threats posed by AI. For example, Sen. Richard Blumenthal, D-Conn., opened the session by playing a phony recording of his own voice, made with comments written by ChatGPT and actual audio from his speeches.

    Blumenthal argued that although ChatGPT produced an accurate reflection of his views, it could just as easily have produced "an endorsement of Ukraine's surrendering or Vladimir Putin's leadership," something he called "really frightening."

    Several news outlets noted Altman’s testimony was unique in that — unlike other hearings involving tech gurus — the session wasn’t combative, with the CEO mainly agreeing with the senators that his technology needed regulation.


    As reported here Tuesday, the hearing arrived at a crucial moment, as policymakers worldwide struggle to both understand AI and put in place rules to police it.

    “The last time the United States passed meaningful regulation impacting the tech sector was in the late ’90s during Microsoft’s antitrust case,” PYMNTS wrote. “Now, the U.S. risks falling behind its global peers.”

    Lawmakers in Europe voted last week to adopt a draft of AI regulations, which included restrictions on chatbots such as ChatGPT along with a ban on the use of facial recognition in public and on predictive policing tools.

    "This vote is a milestone in regulating AI, and a clear signal from the Parliament that fundamental rights should be a cornerstone of that," Kim van Sparrentak, a member of the Dutch Greens party, told Reuters. "AI should serve people, society, and the environment, not the other way around."

    Also testifying Tuesday was Christina Montgomery, IBM's vice president and chief privacy and trust officer, who cautioned against viewing the industry with the "move fast and break things" ethos of Silicon Valley firms.

    “The era of AI cannot be another era of ‘move fast and break things,’” Montgomery testified, noting, “We don’t have to slam the brakes on innovation either.”