What Superintelligent Sentience (AGI) Means for the AI Ecosystem

Last century, space was widely considered to be the final frontier.

Now, the boundaries of human consciousness and the limits of our intellectual capability are widely viewed as the richest frontiers left to explore.

OpenAI CEO Sam Altman is hoping to continue working with Microsoft — his company’s largest financial backer — to push forward on the next frontier of artificial intelligence (AI): building artificial general intelligence (AGI), or computer software that is as intelligent as humans.

“I think we have the best partnership in tech, excited to build AGI together,” Altman said on stage with Microsoft CEO Satya Nadella at OpenAI’s Developer Day conference Nov. 6.

“There’s a long way to go, and a lot of compute to build out between here and AGI … training expenses are just huge,” Altman said, adding that ChatGPT and OpenAI’s newly launched GPT store aren’t “really our products … those are channels into our one single product, which is intelligence, magic intelligence in the sky. I think that’s what we’re about.”

“We are committed to ensuring OpenAI has the best possible systems to train the most advanced models for our mutual customers,” said Nadella on X, formerly known as Twitter.

Increasingly, the goalposts for those “best possible” systems are coming to resemble the cognitive and behavioral capacity of an average living person.

Read also: What’s Next for AI? Experts Say Going More Multimodal

AGI: The Final Frontier of Computing Capability

AGI refers to AI systems that possess human-like intelligence and the ability to understand, learn and perform a wide range of tasks that require general reasoning and problem-solving abilities. AGI is still a theoretical concept and hasn’t been fully realized yet — but it is considered to be the next great advancement in the field of AI.

OpenAI’s own internal documents define AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

The company’s charter states: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

It adds: “The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interests of humanity throughout its development.”

OpenAI also notes that its own six-member board will determine “when we’ve attained AGI,” adding that “such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

Maybe those terms will be rewritten, however, depending on how much Altman can get Microsoft to pitch in for its AGI experiments.

Still, AGI is far from a given, no matter the number of billions of dollars that are poured into its development.

In a yet-to-be-peer-reviewed paper, a team of Stanford scientists argued that any current hints of AGI capabilities in today’s AI systems are just an illusion, while others in the industry have criticized the hubbub around AGI development as no more than a “sci-fi marketing ploy.”

The challenge with AGI is that, by definition, the system must be able to perform tasks across many domains and solve novel problems, not just those already represented in its training data.

This is a far cry from even the most advanced AI models available today, which are better described as narrow or specialized AI: systems designed for specific tasks, such as providing content recommendations, playing chess, or detecting fraud and building transaction risk models.

See also: Amazon Is Building an LLM Twice the Size of OpenAI’s GPT-4

How Close Is AGI?

SoftBank CEO Masayoshi Son has said AGI will arrive and surpass human intelligence by 2030. OpenAI executives have said AGI will be reached within the next 10 years and will be able to do any job a human can. Microsoft claimed — and received pushback for claiming — that its AI systems are already showing hints of AGI.

The development of AGI will have a transformative effect on society and create opportunities and threats, particularly around regulation.

As PYMNTS CEO Karen Webster wrote at the beginning of this year, AI’s greatest potential is in creating the knowledge base needed to equip the workforce — any worker in any industry — with the tools to deliver a consistent, high-quality level of service.

“Simply put, it takes about 40 years from birth through training to create an experienced doctor, and there’s a limited number of people born with the ability and the interest to become one,” Webster wrote. “It will take far less time to impart training via a bot with much of those skills and knowledge. Once that happens, it will be scalable.”

Intelligence is a continuum, and any future AGI systems will exist on that same continuum.

The development of AGI will be a global endeavor, and collaboration among researchers, policymakers and industry stakeholders is crucial. Once AGI reaches a certain level of intelligence, its evolution and development might become difficult to predict, and it could potentially surpass human intelligence in various domains.

Establishing frameworks for international cooperation and governance is important to address challenges related to safety, ethics and the potential global impact of AGI.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.