Few markets have grown as fast, in as short a time, as artificial intelligence (AI).
And as the technology is increasingly deployed across industries ranging from marketing to payments to insurance, execution speed is only becoming more important.
Against that backdrop, Amazon is working on an ambitious new large language model (LLM), which it could announce as soon as December, as per a Tuesday (Nov. 7) report.
Code-named “Olympus,” the rumored LLM is set to be one of the largest foundation models ever trained, at an alleged 2 trillion parameters, double the roughly 1 trillion parameters reportedly behind its closest competitor, OpenAI’s state-of-the-art GPT-4 model.
Amazon CEO Andy Jassy has projected that generative AI will drive “tens of billions in revenues.”
But the company already sells LLMs built both by itself, including its other Greek-named Titan model, and by buzzy AI startups like Anthropic, in which Amazon took a minority stake of up to $4 billion.
AI models are notoriously capital and resource intensive to train and build. So why is Amazon undertaking the massive investment in an all-new LLM?
The answer, observers believe, is that the tech giant wants to control its own destiny in the AI space, as well as catch up to first-mover peers like OpenAI, Microsoft, Google and even Meta, which have rapidly pushed ahead with their own AI platforms.
Read also: OpenAI Scoops Big Tech With Launch of GPT App Store
Today’s landscape shows that, in order to compete and win with AI, companies increasingly believe they need to build their own models.
After all, Google has also invested heavily in Anthropic, and the Mountain View company has its own PaLM foundation model and is widely viewed as a pioneer and leader in the AI field.
With its Olympus effort, Amazon is showing that it ultimately doesn’t want to rely on other ecosystem players for the innovations it offers its own captive customer base, while at the same time acknowledging, with its Anthropic stake, that those customers may want access to cutting-edge AI solutions now rather than later.
Enterprise clients want access to top-performing models, and for Amazon to meet their future needs securely, seamlessly and competitively, it is better for the company to offer services built on its own models than services that connect to another provider’s LLM via an application programming interface (API).
The revelation of the Olympus initiative comes as OpenAI’s Developer Day on Monday (Nov. 6) lit a fire under other AI companies with the announcements of its “GPT App Store” and a new, turbo-charged GPT-4 model.
Increasingly, the opportunity areas in the AI landscape are shrinking as white space is captured by well-capitalized incumbents. Observers believe the marketplace is growing tougher to crack due to the scarcity of high-quality talent and the staggering up-front costs of building and commercializing a model from scratch.
Elon Musk’s launch on Saturday (Nov. 4) of a “spicy and rebellious” LLM named “Grok” failed to garner much excitement in the large enterprise sphere, despite its alleged capabilities being near-equal to Meta’s LLaMA 2 AI model and OpenAI’s GPT-3.5.
However, German AI startup Aleph Alpha, whose technology is centered around the concept of “data sovereignty” — which emphasizes that data stored in a specific country should be subject to that country’s law — recently raised $500 million in a funding round, showing that the marketplace hasn’t entirely softened.
See more: How Harnessing AI-Generated Code Can Accelerate Digital Transformations
Amazon’s Olympus model project is being spearheaded by Rohit Prasad, the former head of Amazon’s Alexa who now serves as head scientist for the company’s artificial general intelligence (AGI) effort, per the report.
Prasad’s prior role could point to Olympus being used to ramp up Alexa’s voice AI capabilities across the company’s connected device suite.
An Amazon representative declined to comment when reached by PYMNTS.
As revealed in the PYMNTS Intelligence report “Consumer Interest in Artificial Intelligence,” consumers interact with about five AI-enabled technologies every week on average, including browsing the web, using navigation apps and reviewing online product recommendations. Nearly two-thirds of Americans want an AI copilot to help them do things like book travel.
“Computers can now behave like humans. They can articulate, they can write and can communicate just like a human can,” Beerud Sheth, CEO at Gupshup, told PYMNTS as part of the AI Effect series. “[But] enterprise use of AI has to be accurate and relevant — and it has to be goal oriented. Consumers can have fun with AI, but in a business chat or within an enterprise workflow, the numbers have to be exact, and the answer has to be right.”