
Anthropic CEO Claims AI Models Hallucinate Less Than Humans

May 26, 2025

By: Maxwell Zeff (TechCrunch)


In this blog post, author Maxwell Zeff (TechCrunch) looks at Anthropic CEO Dario Amodei’s assertion that current AI models may hallucinate — or fabricate information — less frequently than humans. Speaking at Anthropic’s first developer event, Code with Claude, in San Francisco, Amodei emphasized that hallucinations should not be seen as a fundamental barrier to achieving artificial general intelligence (AGI). While acknowledging that AI sometimes makes surprising mistakes, he argued that its error rate could be lower than that of humans, depending on how one measures it.

Amodei, who is known for his optimistic projections about AGI, reiterated his belief that AGI could arrive as soon as 2026. He claimed that progress is consistent and unimpeded, rejecting the idea that hallucinations or other flaws are major roadblocks. This position stands in contrast to others in the field, such as Google DeepMind CEO Demis Hassabis, who recently pointed to hallucinations and factual errors as significant limitations. Zeff highlights a recent incident involving Anthropic’s own AI, Claude, which fabricated incorrect citations in a legal document, illustrating how the issue can have real-world consequences.

Zeff notes that verifying Amodei’s claims is challenging, since most hallucination benchmarks evaluate AI models against one another rather than comparing them to humans. While tools like web search integration and improvements in model design have helped reduce hallucination rates in some systems — notably OpenAI’s GPT-4.5 — newer models like OpenAI’s o3 and o4-mini have actually seen hallucination rates increase. Amodei drew a comparison between AI errors and the frequent mistakes made by humans in various fields, though he did concede that the confident tone AI models use when presenting false information could be problematic…
