The companies developing the newest generative artificial intelligence (AI) tools are hoping to change the world.
Some of them are even trying to change the way that businesses are governed.
One of those companies, OpenAI, is controlled by a nonprofit that oversees an increasingly for-profit arm valued at $86 billion and backed by Microsoft, the world’s second-largest business by market capitalization.
“We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance,” Satya Nadella, chairman and CEO of Microsoft, posted on X (formerly Twitter) on Nov. 22, 2023.
OpenAI isn’t the only AI company with an unusual organizational structure. In fact, unconventional setups are common across the sector’s companies and internal teams.
That’s because AI models are notoriously expensive to build and develop. Large language models (LLMs) and other vast, data-driven AI operations frequently require tens of thousands of GPUs running complex, resource-hungry workloads around the clock for weeks or even months in purpose-built data centers.
And while firms also want to be mindful of the societal-scale risks many believe AI brings with it, the reality remains that it would be impossible for a pure-play nonprofit to deploy the amount of capital necessary to train and run an AI model.
That’s why Anthropic draws on both Google’s and Amazon’s cloud computing resources, Meta has folded its AI division into the department developing the metaverse, and Google has consolidated its AI groups into a single team.
Since 2019, Microsoft has invested $13 billion in OpenAI. But it hasn’t even been a full calendar year since Microsoft announced the bulk of that sum, the $10 billion investment that started Big Tech’s ongoing generative AI race.
To compete with the sector’s top players, AI companies need to raise billions of dollars to pay for vast amounts of computing power and to test their models.
One of the more interesting players in the space from a governance standpoint is the startup Anthropic, which is a Delaware Public Benefit Corporation, or PBC.
The AI firm was launched in 2021 by a team that formerly ran the safety and policy efforts at OpenAI, including the lead engineer on GPT-3, Tom Brown.
It wasn’t just OpenAI’s employees they took with them — they also borrowed from OpenAI’s corporate structure.
“At Anthropic, our perspective is that the capacity of corporate governance to produce socially beneficial outcomes depends strongly on non-market externalities…our new governance structure called the Long-Term Benefit Trust (LTBT)…is our attempt to fine-tune our corporate governance to address the unique challenges and long-term opportunities we believe transformative AI will present,” the company stated on its website.
“The LTBT can ensure that the organizational leadership is incentivized to carefully evaluate future models for catastrophic risks or ensure they have nation-state level security, rather than prioritizing being the first to market above all other objectives,” added Anthropic.
Anthropic did not immediately respond to PYMNTS’ request for comment.
The Trust is an independent body of five “financially disinterested members” with the authority to select and remove a portion of Anthropic’s board, a portion that is set to grow over time.
The current Trustees are: Jason Matheny, CEO of the RAND Corporation; Kanika Bahl, CEO and president of Evidence Action; Neil Buddy Shah, CEO of the Clinton Health Access Initiative; Paul Christiano, founder of the Alignment Research Center; and Zach Robinson, interim CEO of Effective Ventures US.
The Trustees receive no equity in Anthropic, removing any incentive to prioritize maximizing the share price over safety.
But things aren’t so simple as good AI and bad AI, and a socially conscious board isn’t a business model.
Anthropic raised $580 million in a Series B round led by Sam Bankman-Fried, the former CEO of the now-defunct crypto exchange FTX. Google owns a 10% stake in Anthropic and has invested more than $2 billion in the company, while Amazon has committed an investment worth up to $4 billion and also taken a minority stake in the AI firm.
SK Telecom, South Korea’s largest telecommunications operator, also invested $100 million into Anthropic this August with the goal of jointly developing a large language model (LLM) and AI platform for the global telecom sector.
Anthropic’s Claude chatbot has been trained specifically to be “more steerable” and produce predictably non-harmful results. Developing and scaling responsible AI doesn’t come cheap — and it frequently requires scarce resources that only the world’s biggest tech companies can provide.
And other players are consolidating their resources in order to better compete.
Google, which has used AI since 2001 to improve its search engine, expanded its AI focus in 2017 with the launch of the Google AI division. Earlier this year, the company merged Google AI with its DeepMind team — acquired in 2014 — to develop a multi-modal AI model able to compete with OpenAI. No other company has Google’s access to computing power, and researchers at the tech company have estimated that their own AI ambitions take up around 10% to 15% of Alphabet’s entire energy usage each year, representing roughly the same amount as the city of Atlanta.
For its part, Meta is pouring billions into its own foundational LLM under the same division that is developing the social media giant’s metaverse project. The result is an open-source AI that could actually be competitive in real life, not just the metaverse.