It’s a ’90s Browser War Redux as Musk and Meta Enter AI Race

It’s a large language model (LLM) landscape, and we’re just living in it.

There are now more generative artificial intelligence (AI) platforms on the market than there are months in the emergent technology's commercial lifespan.

This, as both Meta and Elon Musk have announced the launch of their own AI platforms, tossing their hats into the innovation ring alongside peer competitors like Microsoft and OpenAI, as well as well-funded upstarts like Anthropic.

But don’t let the crowded and increasingly competitive AI field fool you — significant capital investment, industry-leading technical expertise and, above all, enormously expensive large-scale compute infrastructure built atop rows of increasingly scarce GPUs are all needed to establish and maintain generative AI models, much less to jockey for position with some of the largest and most valuable businesses in history.

Because, of course, once built, new AI models still need to be commercialized and scaled.


Generative AI is a powerful tool for streamlining workflows across industries, and its use cases are as varied as the billions, or even trillions, of parameters powering its models.

But what does all that investment, and the energy cost of doing AI business, actually get the tech firms burning billions to build their own platforms?

An AI model that typically looks and acts just like all the others. Think of Meta’s Threads next to Twitter.

Ideally, their AI platform is a little bit bigger, a little bit quicker and, depending on the firm’s mission, a little bit safer or more compliant.

After all, while generative AI is a powerful tool, the technology’s applications boil down to the same broad buckets: research co-pilot, writing assistant or task-focused enterprise enhancement solution.

The Patterns of Technological Evolution and Adoption

But the crucial underlying similarity is that AI represents an entirely new way of accessing, producing and engaging with information.

As Shaunt Sarkissian, founder and CEO of AI-ID, told PYMNTS, generative AI, at a high level, has the potential to create a new data layer, much as HTTP did when it gave rise to the World Wide Web in the 1990s. As with any new data layer or protocol, governance, rules and standards must apply.

Accordingly, one way to view today’s many AI platforms is to frame them as a contemporary parallel to the explosion of web browsers built to help users access the internet and spur its growth in the 1990s and 2000s.

Browsers like Firefox, Chrome and Safari — or, more historically apt, Mosaic, Netscape, Internet Explorer, Opera and others — all offered a somewhat commoditized experience, an on-ramp to the information economy of the time, and so had to distinguish themselves through varied user experiences, brand positioning and other features such as open-source architectures.

That’s why, as Meta, Musk and many others launch their own commercial AI models, it will be interesting to watch which competitive edge each player looks to carve out through its foundational architecture.

Already, OpenAI’s first-to-market platform ChatGPT is seeing its user numbers dip.


Who Will Be the Chrome, Firefox, Safari or Internet Explorer (RIP) of AI?

Today’s AI platforms typically differ around their transparency, quality of training data, and ease of integration — as well as their adherence to regulations.

All these variations are born from the architecture of an AI platform’s foundational model. For example, Meta’s newly announced platform is choosing an open-source architecture to stand out. At the same time, Musk’s xAI has the stated — and lofty — goal of “understanding the true nature of the universe,” which is buzzy in a different way than promising operational efficiencies in accounting and marketing.

OpenAI’s ChatGPT uses reinforcement learning from human feedback (RLHF), in which human AI trainers provide the model with conversations where they played both parts, user and assistant. As a result, industry observers view it as the most capable of the current platforms at generating and summarizing text.
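
For readers curious what that training loop looks like in broad strokes, here is a minimal, purely illustrative Python sketch of the three RLHF stages — trainer-written conversations, a learned reward signal and a reinforcement step. Every name, data point and update rule in it is a hypothetical placeholder, not OpenAI’s actual code.

```python
# Purely illustrative sketch of the RLHF stages described above; the data,
# "model" and update rule are toy placeholders, not OpenAI's implementation.

# Stage 1: supervised fine-tuning on conversations where human trainers
# wrote both the user and the assistant turns.
demonstrations = [
    {"prompt": "Summarize this memo.", "response": "The memo covers Q3 results."},
    {"prompt": "Draft a follow-up email.", "response": "Hi team, just checking in."},
]

def supervised_fine_tune(model: dict, examples: list[dict]) -> dict:
    """Toy stand-in: nudge the 'model' toward trainer-written responses."""
    model["preferred_outputs"].extend(ex["response"] for ex in examples)
    return model

# Stage 2: a reward model trained on human rankings of candidate outputs.
def reward(response: str) -> float:
    """Toy stand-in for a learned reward model; here it simply prefers brevity."""
    return 1.0 / (1 + len(response.split()))

# Stage 3: reinforcement learning -- sample candidate responses, score them
# with the reward model, and reinforce the higher-scoring behavior.
def rl_step(model: dict, prompt: str) -> str:
    candidates = [f"{prompt} ({n}-word draft)" for n in (5, 20, 50)]
    best = max(candidates, key=reward)
    model["preferred_outputs"].append(best)  # stand-in for a policy-gradient update
    return best

model = {"preferred_outputs": []}
model = supervised_fine_tune(model, demonstrations)
print(rl_step(model, "Summarize RLHF in one line."))
```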

Anthropic’s platform is similar to OpenAI’s, but explicitly designed to be safer.

Alphabet’s Bard platform and Microsoft’s OpenAI-powered Bing can both draw real-time information from the internet, which often lets them answer questions with more current, relevant information than peer platforms.
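
Conceptually, that real-time grounding amounts to fetching fresh web content and folding it into the prompt before the model answers. The sketch below shows the general shape of the idea; the fetch and generate functions are hypothetical stand-ins, not Bard’s or Bing’s actual APIs.

```python
# Purely illustrative sketch of retrieval-grounded answering; fetch_live_snippets
# and generate are hypothetical stand-ins, not real Bard or Bing interfaces.
from datetime import datetime

def fetch_live_snippets(query: str) -> list[str]:
    """Stand-in for a live web search; a real system would query a search index."""
    return [f"[{datetime.utcnow():%Y-%m-%d}] Placeholder snippet about {query!r}."]

def generate(prompt: str) -> str:
    """Stand-in for the language model call."""
    return f"Answer drafted from: {prompt[:80]}..."

def answer_with_live_context(question: str) -> str:
    # Fold the freshest snippets into the prompt so the model can cite
    # information newer than its training data.
    context = "\n".join(fetch_live_snippets(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer_with_live_context("Which AI platforms launched this month?"))
```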

At the end of the day, what makes the AI platforms different from one another is the way they are used by the audience they are designed for. And because AI learns from each interaction, those differences will only compound over time.