Anthropic Raises $450 Million for ‘Honest’ AI Systems

Artificial intelligence (AI) firm Anthropic has raised $450 million to develop its AI assistant.

The Series C round, announced Tuesday (May 23), comes amid a wave of funding for generative AI companies, and as governments grapple with ways to regulate the industry.

Anthropic’s round was led by Spark Capital with participation from Salesforce Ventures, Sound Ventures, Zoom Ventures and Google, which had invested $300 million in the company earlier this year.

“The funding will support our continued work developing helpful, harmless, and honest AI systems—including Claude, an AI assistant that can perform a wide variety of conversational and text processing tasks,” Anthropic said in a news release.

AI companies have remained something of a bright spot in an otherwise gloomy period for startup funding, with firms in the sector taking in $1.7 billion during the first quarter of the year, according to PitchBook.

But as noted here recently, the hype around the technology could quickly fade if AI can’t be used for long-term, practical applications across industries that showcase its value.

“Given how many enterprise operations, as well as day-to-day consumer touchpoints, have significant software components, generative AI will impact, at least in some manner, how businesses engage with their customers, and how they compete with each other, particularly in marketplaces where speed to discovery can give a firm an edge,” PYMNTS wrote.

The most immediate impact of AI, that report added, has been to augment knowledge-based work by reducing the time cost of various operations.

For example, OpenAI CEO Sam Altman, testifying before a U.S. Senate subcommittee last week, offered the real-life example of a small business owner with dyslexia who used an AI tool to automate the drafting of professional emails, an effort that led to several hundred thousand dollars in new business.

Altman also used that testimony to urge lawmakers to regulate his company’s technology.

“I think if this technology goes wrong, it can go quite wrong,” he said. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

An example of how AI can go wrong was on display this week, when an AI-generated image of black smoke rising from the Pentagon went viral on social media, temporarily sending stocks lower and stoking fears of an attack on the Department of Defense.

“Misinformation is older than the internet, but for years the best defense has been simply relying on one’s own common sense,” PYMNTS wrote. “The ability of generative AI to craft entirely lifelike and believable content is rapidly changing that paradigm.”