PYMNTS MonitorEdge May 2024

iRobot Co-Founder: Generative AI May Be Overhyped

MIT robotics professor and iRobot co-founder Rodney Brooks thinks artificial intelligence (AI) is impressive.

Just not as impressive as many of its proponents have argued, Brooks told TechCrunch in an interview published Saturday (June 29).

“I’m not saying LLMs are not important, but we have to be careful how we evaluate them,” Brooks said, referring to large language models like OpenAI’s ChatGPT.

The trouble with generative AI, he added, is that while it can capably perform some tasks, it can’t do everything humans can, and people tend to overestimate its abilities.

“When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very overoptimistic, and that’s because they use a model of a person’s performance on a task.”

This phenomenon was covered here last week following a Bloomberg News report that some users are exchanging a high volume of messages with AI chatbots and, in some cases, attributing humanlike qualities to them.

“One of the ethical concerns is that while users may feel listened to, understood and loved, this emotional attachment can actually exacerbate their isolation,” Giada Pistilli, principal ethicist at AI startup Hugging Face, said in the report.

Aside from those ethical concerns, Brooks said that trying to assign human capabilities to AI is a mistake, because it leads people to want to use the technology for things that don’t make sense. For example, Brooks founded Robust.ai, a warehouse robotics company. Someone recently suggested adding an LLM to his robots, but Brooks argues this would slow things down.

“When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And that’s how we get the orders completed fast.”

Meanwhile, PYMNTS recently spoke with experts about efforts to train AI to recognize humor. A number of strategies have emerged on this front, Pedro Domingos, professor emeritus of computer science at the University of Washington, told PYMNTS. 

“Fine-tuning the models on collections of jokes, cartoons, humorous essays, and books, etc., available on the web. Explaining to the models what’s funny and appropriate vs. not, and prompting them in various ways until they produce something to our liking. Training the models to produce funnier and more appropriate humor by having humans rate their output accordingly.”

He cautioned, however: “None of these are a guarantee of success, though, and humor is still one of the harder things for AI models to do successfully.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.