
Large Language Models Aren’t People. Let’s Stop Testing Them As If They Were

September 11, 2023

By: Will Douglas Heaven (MIT Technology Review)


When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI's large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text – a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it – the kind of thing you'd find in an IQ test. "I was really shocked by its ability to solve these problems," he says. "It completely upended everything I would have predicted."

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on…
