OpenAI Takes Aim at ‘Hallucinations’ as More Businesses Integrate AI

Humanity has always tended to over-trust the computer.

Simply put, we have a long history of misplaced, often naïve faith in next-big-thing technologies.

This should give firms pause as they race to integrate next-generation generative artificial intelligence (AI) capabilities into their products and services.

That’s because the widespread use of relatively early-stage AI will introduce new ways of making mistakes. Generative AI, unlike its cousin predictive AI, can “generate” or create new content such as text, speech, images, music, video and code — but it can also fabricate information entirely, something referred to as a “hallucination.”

This, as Microsoft-backed OpenAI released a new research paper Wednesday (May 31) titled “Improving mathematical reasoning with process supervision,” revealing a new strategy for fighting hallucinations.

“Even state-of-the-art models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI (artificial general intelligence),” states the report.

“These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution,” the researchers added.

The firm’s new strategy, called “process supervision,” entails training its AI models by rewarding each correct step of reasoning on the way to an answer. That differs from the current approach of “outcome supervision,” which, as the name implies, rewards the model only for a (supposedly) correct final conclusion.
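As a rough illustration of the distinction, consider the minimal sketch below. It is not OpenAI’s training code; the verify_step and verify_answer checkers, the toy arithmetic “solution” and the reward values are all invented simply to show where each supervision scheme attaches its reward signal.

    # Hypothetical sketch contrasting the two reward schemes.
    # verify_step and verify_answer stand in for whatever human or
    # automated checker a real training pipeline would use.

    def outcome_supervision(steps, final_answer, verify_answer):
        """Reward only the final answer; the reasoning steps get no signal."""
        return [0.0] * len(steps) + [1.0 if verify_answer(final_answer) else 0.0]

    def process_supervision(steps, final_answer, verify_step, verify_answer):
        """Reward each individual reasoning step, then the final answer."""
        rewards = [1.0 if verify_step(step) else 0.0 for step in steps]
        rewards.append(1.0 if verify_answer(final_answer) else 0.0)
        return rewards

    # Toy multi-step "solution" to (2 + 2) * 3 in which the first step is wrong.
    steps = ["2 + 2 = 5", "5 * 3 = 15"]
    final_answer = "15"
    verify_step = lambda s: eval(s.split("=")[0]) == float(s.split("=")[1])
    verify_answer = lambda a: a == "12"

    print(outcome_supervision(steps, final_answer, verify_answer))
    # [0.0, 0.0, 0.0] -- the model only learns that the end result was wrong
    print(process_supervision(steps, final_answer, verify_step, verify_answer))
    # [0.0, 1.0, 0.0] -- the model learns exactly which step went wrong

The second output pinpoints the faulty first step, which is the property OpenAI’s researchers argue matters in multi-step reasoning, where “a single logical error is enough to derail a much larger solution.”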

The research comes at a time when more and more companies are adding generative AI features to their products amid increased public interest.

Read more: Generative AI Fabrications Are Already Spreading Misinformation

Overconfident Generalization From Limited Historical Data

Some experts and industry observers have already expressed doubts about how effective OpenAI’s proposed methods for combating hallucinatory tendencies will be.

OpenAI warns users against trusting its generative AI chatbot, ChatGPT, prominently displaying a disclaimer that states, “ChatGPT may produce inaccurate information about people, places, or facts.”

As PYMNTS reported, MIT researchers have released a new paper, separate from OpenAI’s, that also aims to improve the reasoning and factual accuracy of large language models (LLMs).

Still, when the businesses behind generative AI platforms boast about their models’ capabilities, such as passing medical and legal exams, and even warn that the technology could lead to an “extinction event,” the average user can be forgiven for believing that AI is smarter than they are and that the answers it provides are correct.

See also: Is Generative AI in 2023 as Transformational as Indoor Plumbing Was in 1920?

That’s because the content generative AI produces can be highly fallible while remaining extremely plausible — the AI models are designed to produce the most statistically probable answer, after all, yet they do so without any concept of meaning.
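To make that point concrete, here is a deliberately toy snippet; the candidate continuations and their probabilities are invented, and no real model works from a three-entry table, but it shows how “most statistically probable” and “true” can come apart.

    # Invented probabilities for three candidate continuations of a prompt.
    # A real model scores tokens, not whole sentences, but the principle holds.
    candidates = {
        "The Great Wall of China is visible from space with the naked eye.": 0.52,
        "The Great Wall of China is not visible from space with the naked eye.": 0.33,
        "I am not sure whether the wall is visible from space.": 0.15,
    }

    # Greedy decoding: pick the highest-probability continuation.
    best = max(candidates, key=candidates.get)
    print(best)
    # Fluent and statistically likely -- but factually wrong.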

Hallucination is not unique to OpenAI’s ChatGPT; it affects all chatbots, and AI more broadly, because these systems are trained on historical data (the only data available in our linear world).

AI hallucinations follow the same pattern as humanity’s classic mistakes: overconfident generalization from limited historical data.

In a promotional video for Google’s own AI model, Bard, earlier this year, the chatbot made a false claim about the James Webb Space Telescope, an error that saw the tech giant shed $100 billion in market value, a very real consequence.

And as PYMNTS reported more recently, two attorneys may face professional sanctions after citing cases in court that were entirely hallucinated by ChatGPT.

Read also: AI Regulations Need to Target Data Provenance and Protect Privacy

Despite Its Glitches, AI Is Here To Stay

According to “Preparing for a Generative AI World,” the May edition of the “Generative AI Tracker®,” a PYMNTS and AI-ID collaboration, there is currently no straightforward way to differentiate AI-generated content from authentic materials.

But while hallucination remains an endemic problem, it is clear that generative AI models aren’t going anywhere anytime soon and will have a massive impact across most, if not all, business sectors and industries.

That makes effective regulation of the technology of the utmost importance.

AI-ID founder and CEO Shaunt Sarkissian told PYMNTS in an interview posted Tuesday (May 30) that, from a regulatory standpoint, there is a need to create an arena in which everyone can compete but with some level of control to “ensure that the technology is not destructive.”

That’s because generative AI has the potential to create a new data layer, much as HTTP gave rise to the web beginning in the 1990s, and as with any new data layer or protocol, governance, rules and standards must apply, Sarkissian explained.

Across the pond, the European Union’s top technology officials are already crafting rules.

As reported by PYMNTS, European lawmakers could be looking at an AI code of conduct draft “in weeks.”

EU officials have said that the U.S. and Europe together should back a voluntary code of conduct until new laws are crafted.