Anthropic Says the Only Way to Stop Bad AI Is With Good AI


Artificial intelligence (AI) doomerism is having a moment.

It’s less “I, Robot” and more “humanity is going extinct.”

If the industry’s negative hype men are to be believed, there’s a non-zero chance that misuse of AI could lead to the end of humanity in just a handful of years (5 to 10, by many observers’ estimates).

Yes, industry insiders are lining up to say the disruptive new models are simply that powerful and really are evolving that fast.

Well-funded AI startup Anthropic launched Claude 2 on Tuesday (July 11), the latest version of the firm’s answer to OpenAI’s buzzy ChatGPT chatbot and Alphabet’s Bard product.

Anthropic’s AI bot is designed to be “helpful, harmless and honest.”

The company — a public benefit corporation — was founded in 2021 by former OpenAI executives who split off to start their own venture over concerns that OpenAI was growing too commercial.

CEO Dario Amodei led the teams that built OpenAI’s GPT-2 and GPT-3, and his sister Daniela Amodei — who formerly oversaw OpenAI’s policy and safety teams — is now Anthropic’s president.

“I think if this technology goes wrong, it can go quite wrong,” OpenAI’s CEO Sam Altman previously said.

But is a good guy with a better AI chatbot the only way to stop a so-called bad guy with a harmful AI chatbot?

See also: Generative AI Is Eating the World — How to Avoid Indigestion

Kindness as a Competitive Advantage

Anthropic has raised over $1 billion in funding from investors including Google and Salesforce, although $500 million of that capital came from the failed, and allegedly criminally operated, crypto exchange FTX.

The startup, which has only around 160 employees, per The New York Times, needs that much capital because developing AI is a notoriously expensive endeavor, requiring massive data centers with computing power that would have been inconceivable just a few years ago.

Despite its small size, Anthropic is seen as a leader within the industry and considered a rival to much larger tech giants — in part due to its executive team’s pedigree, per The New York Times report.

The startup’s most recent chatbot is built atop a large language model (LLM), just like its competitors’. However, it remains disconnected from the internet — unlike Alphabet’s Bard — and is trained only on data through December 2022, The Times of India reported.

The limitations, however, are entirely by design.

Claude 2, for all intents and purposes, works much like other available chatbots, but Anthropic claims its product is less likely to cause harm than those trained and commercialized by its competitors.

The reason? Claude has been trained to be nicer through reinforcement learning and built with a training method known as Constitutional AI, which gives an AI model a written list of principles and instructs it to follow them. A second model then polices the first based on its adherence to those principles.
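For illustration only, here is a minimal, hypothetical Python sketch of that critique-and-revision loop. The CONSTITUTION list, the model_generate placeholder and the prompt wording are assumptions made for explanatory purposes, not Anthropic’s actual implementation.

```python
# Hypothetical sketch of a Constitutional AI-style critique-and-revision loop.
# Names and prompts are illustrative placeholders, not Anthropic's code.

CONSTITUTION = [
    "Choose the response that is most helpful, honest and harmless.",
    "Avoid responses that are toxic, dangerous or illegal.",
]

def model_generate(prompt: str) -> str:
    """Placeholder for a call to an LLM; swap in a real model client here."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    # Step 1: a first model drafts an answer.
    draft = model_generate(user_prompt)

    # Step 2: a second pass "polices" the draft, critiquing and revising it
    # against each written principle in turn.
    for principle in CONSTITUTION:
        critique = model_generate(
            f"Critique this response for violations of the principle "
            f"'{principle}':\n{draft}"
        )
        draft = model_generate(
            f"Rewrite the response to address this critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft
```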

Businesses can access Claude 2 via an application programming interface (API), and individual users can try it out the same way they can ChatGPT, Bard and other AI products.
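As a rough sketch of what that API access looks like, the snippet below follows the pattern Anthropic’s Python SDK documented around Claude 2’s launch; exact model names and parameters may have changed since, so treat it as illustrative rather than authoritative.

```python
# Illustrative only: calling Claude 2 via Anthropic's Python SDK as documented
# at launch (pip install anthropic). Check current docs before relying on this.
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # placeholder key

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    # The SDK's HUMAN_PROMPT / AI_PROMPT constants wrap the conversation turns.
    prompt=f"{anthropic.HUMAN_PROMPT} Summarize Constitutional AI in one "
           f"sentence.{anthropic.AI_PROMPT}",
)
print(completion.completion)
```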

Read also: Enterprise AI’s Biggest Benefits Take Firms Down a Two-Way Street

Is Becoming the Safest AI Juggernaut an Oxymoron?

Anthropic’s mission of safety-first AI has helped burnish the company’s image and endear it to federal regulators. But more cynical industry observers have started to worry about what’s referred to as mission drift, suggesting that AI firms are stoking public doomerism as a backdoor marketing tactic, according to The New York Times report.

After all, OpenAI was once a nonprofit founded with a mission similar to Anthropic’s.

But as companies grow, commercialization eventually becomes a necessity.

Anthropic is looking to raise up to $5 billion over the next two years to build an AI model up to 10 times as powerful as Claude 2 is today, TechCrunch reported.

So, is the firm just another AI business talking out of both sides of its mouth by sounding the alarm around AI while working intently to fuel the same AI arms race it is warning of?

Given that no one in the AI field seems too keen on simply ceasing to build the models they claim to be worried about, only time will tell — assuming we humans are still around when the proverbial clock strikes midnight.

If it ever does.