OpenAI and Microsoft Plan $100 Billion AI ‘Stargate’

OpenAI and Microsoft are reportedly working on a $100 billion data center project.

The center, the subject of a Friday (March 29) report by The Information, would involve an artificial intelligence (AI) supercomputer dubbed “Stargate” that is scheduled to launch in 2028.

According to the report — which cites sources involved in discussions of the project — Microsoft would likely finance the effort, which is expected to dwarf the cost of even the largest existing data centers. Stargate would be the largest in a series of supercomputers the two tech firms hope to build.

The report said that the development of Stargate will depend largely on OpenAI’s ability to roll out the next major upgrade, due out sometime early next year.

PYMNTS has contacted OpenAI for comment but has not yet received a reply. A spokesperson for Microsoft offered this statement:

“Microsoft has demonstrated its ability to build pioneering AI infrastructure used to train and deploy the world’s leading AI models. We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability.”

Last week, PYMNTS examined the “battle for generative AI” that kicked off when OpenAI released its ChatGPT model. Among the possible competitors for the crown is Google, John Licato, an assistant professor of computer science and engineering at the University of South Florida, told PYMNTS in an interview.

He said this was down to the company’s “institutional expertise” and “access to compute power and data,” noting that Google’s Gemini models, particularly Gemini 1.5 Pro, offer a context window of up to a million tokens, allowing for far longer contexts than GPT-4’s 128,000-token limit.

Licato added that Google also holds vast experience with transformers — the technology at the heart of ChatGPT — alongside access to data that few entities can rival. Other strong contenders, he said, include Meta and Anthropic.

“At this point, perhaps the most significant factor is access to a tremendous amount of computing power,” Licato said. “Companies like Google and OpenAI have millions (perhaps billions) of dollars of GPU processors, as well as more advanced computing technologies like TPUs (tensor processing units).”

Elsewhere in the AI space, PYMNTS last week examined Openstream.ai’s newly patented software, which aims to help businesses interact with customers via its enterprise virtual assistant (EVA) platform. The platform creates AI avatars, virtual assistants and voice agents that can engage in human-like conversations without the need for back-end complexity, scripts, or the risk of hallucinations.

“Our unscripted approach to dialogue management is different from other vendors in that we do not rely on a designed dialogue script,” Raj Tumuluri, the CEO and founder of Openstream, told PYMNTS in an interview. “By combining the ability to understand these actions, conditions and entities in dialogue (that’s the neuro part using LLMs) with the ability to reason and plan over these (that’s the symbolic part), we get extremely powerful capabilities.”