Large Language Models Aren’t People. Let’s Stop Testing Them As If They Were

By: Will Douglas Heaven (MIT Technology Review)
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI's large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text – a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it – the kind of thing you'd find in an IQ test. "I was really shocked by its ability to solve these problems," he says. "It completely upended everything I would have predicted."
Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on…