Microsoft’s ‘Artificial General Intelligence’ Claims Spark Debate

Microsoft says its artificial intelligence (AI) can reason the way humans do.

Some AI experts, however, are dubious.

That’s according to a report Tuesday (May 16) by The New York Times examining “The Sparks of AGI,” a paper Microsoft published in March arguing that the company has an AI system demonstrating “artificial general intelligence” (AGI), meaning a machine that can do anything the human brain can do.

It’s a debate happening at a time when, as PYMNTS wrote earlier this week, AI capabilities have become “the future-fit infrastructure integration du jour, as generative solutions enter the marketplace and promise to transform business operations with next generation efficiencies.”

Microsoft and rival Google have been at the center of this phenomenon, with the former becoming the first Big Tech firm to release research making this sort of claim about AGI, the Times report notes.

It has led to a debate about whether the industry has made a giant leap or whether researchers have become victims of their own imaginations.

“I started off being very skeptical — and that evolved into a sense of frustration, annoyance, maybe even fear,” Peter Lee, who leads research at Microsoft, told the Times. “You think: Where the heck is this coming from?”

The Times report goes on to say that some AI experts think the paper was Microsoft’s attempt to make a bold claim about a technology that’s not well understood. General intelligence, they say, requires knowledge of the physical world that AI hasn’t achieved.

“The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches,” said Maarten Sap, a researcher and professor at Carnegie Mellon University. “They literally acknowledge in their paper’s introduction that their approach is subjective and informal and may not satisfy the rigorous standards of scientific evaluation.”

Meanwhile, other concerns about the use of AI persist, PYMNTS wrote recently, centered on questions about the data and information fed to AI models, along with protections around that data’s provenance and security.

The large language models (LLMs) underpinning this technology are prone to hallucination, returning information that is at best inaccurate and at worst misleading. Concerns like these have reportedly led Microsoft to offer a privacy-focused version of OpenAI’s ChatGPT chatbot to business clients worried about regulatory compliance and data leaks.

“As enterprise integrations of the novel solution continue to spread, more care will have to be taken around its applications to ensure that its potential for revolutionary growth is grounded in an auditable and valid foundation,” PYMNTS wrote.