Amazon’s Anthropic Investment Highlights Big Tech’s Big Plans for AI


Amazon is taking the safe route when it comes to generative artificial intelligence (AI).

The Seattle-based tech giant on Monday (Sept. 25) announced that it was taking a minority stake in AI startup Anthropic, a public benefit corporation whose corporate mission is to develop “helpful, harmless, and honest” AI models.

Just last week (Sept. 19), Anthropic released its “Responsible Scaling Policy,” a public commitment by the AI company to managing what it calls the “increasingly severe” risks of AI. The policy is modeled loosely after the U.S. government’s biosafety level (BSL) standards for handling dangerous biological materials.

“We focus these commitments specifically on catastrophic risks, defined as large-scale devastation (for example, thousands of deaths or hundreds of billions of dollars in damage) that is directly caused by an AI model and wouldn’t have occurred without it,” the policy states. “AI systems are dual-use technologies, and so as they become more powerful, there is an increasing risk that they will be used to intentionally cause large-scale harm.”

Anthropic’s stated goal is not to prohibit development of innovative AI models, but instead to safely enable their use with appropriate precautions.

And Amazon isn’t the only tech giant pumping money into Anthropic’s novel mission. Google was one of the startup’s first backers and holds a 10% stake, while the venture arms of both Salesforce and Zoom participated in Anthropic’s $450 million Series C fundraising round this past spring. SK Telecom, South Korea’s largest operator, also invested $100 million into Anthropic this August with the goal of jointly developing a large language model (LLM) and AI platform for the global telecom sector.

But Amazon’s latest investment of $1.25 billion, with the option for another $2.75 billion injection, is the largest outlay Anthropic has received yet.

Read also: Anthropic Says the Only Way to Stop Bad AI Is With Good AI

Generative AI Is Running on Big Tech’s Billions

Incumbent tech giants like Microsoft, Amazon, Google and others have been both partnering with and investing in today’s leading-edge AI startups to provide them with the computing power and cash necessary to power and train their foundational models.

After all, the AI business is an increasingly pricey and competitive one, with some analysts estimating that Microsoft’s Bing AI chatbot, which is powered by an OpenAI LLM, needs at least $4 billion of infrastructure just to do its job.

Google’s own researchers have estimated that the firm’s AI operations consume roughly as much energy each year as the city of Atlanta.

As part of Amazon’s multi-billion-dollar investment into Anthropic, the AI startup will use Trainium and Inferentia chips from Amazon Web Services (AWS) to “build, train, and deploy its future foundation models, benefitting from the price, performance, scale and security of AWS.”

AWS will also become Anthropic’s primary cloud provider, with the startup running a majority of its workloads on AWS servers while using its data centers for storage.

Amazon rival Google had previously claimed that “70% of generative AI unicorns are Google Cloud customers.”

See also: This Week in AI: Interactive Evolutions, Enterprise Wins, Biometric Payments

Big Tech’s Big Plans

While substantial, Amazon’s promised $4 billion for Anthropic pales in comparison to the more than $10 billion Microsoft has invested in AI startup OpenAI, the maker of the popular ChatGPT and DALL-E AI products.

And both innovation and integration are accelerating within the AI ecosystem.

Last Tuesday (Sept. 19), Google announced a sweeping set of upgrades to its Bard chatbot that are meant to further integrate the AI tool into end users’ lives by providing more sophisticated and streamlined capabilities.

This Tuesday (Sept. 26), Microsoft will start to roll out over 150 new AI-powered features as part of its “everyday AI companion, Copilot.”

Microsoft’s Copilot will run on an OpenAI-provided foundational LLM. The rollout comes after the company’s AI research team accidentally exposed 38 terabytes of private data while publishing open-source AI training data to cloud-based code hosting platform GitHub.

Google’s AI products run on the tech giant’s own proprietary foundational model, PaLM 2. As PYMNTS has written, today’s AI platforms typically differ in their transparency, quality of training data, and ease of integration — as well as their adherence to regulations.

For example, Anthropic’s foundational model, which Amazon is planning to leverage, is explicitly designed (using constitutional AI) to be as safe as possible in the absence of any overarching regulation, while OpenAI’s models are trained using reinforcement learning from human feedback (RLHF).

Google’s AI model, for its part, can draw from real-time information on the web, providing users with the most up-to-date information during engagements.