“Techno-optimist.” “E/acc.” For those following the recent OpenAI drama, you’ve probably seen these words.
They adorn the X — formerly known as Twitter — bios of venture capitalists (VCs) and tech entrepreneurs, particularly those embedded and invested in the artificial intelligence (AI) space, and they are increasingly showing up in mainstream reporting.
But what do they mean?
At a high level, they signal that someone believes advances in technology are good for humanity. Techno-optimist, after all, is in the name. E/acc, shorthand for “effective accelerationism,” is a play on the “EA” (effective altruism) movement.
The EA movement, long a fixture at the frontiers of tech and other industries, was made famous to the general public by disgraced and convicted fraudster Sam Bankman-Fried.
The e/acc movement has its own Sam — Sam Altman, the 38-year-old face of the generative AI boom and former OpenAI CEO who was fired by the company’s board on Friday (Nov. 17), and given a new CEO position at a yet-to-be-named Microsoft AI division Sunday (Nov. 19).
On Monday (Nov. 20), more than 500 of OpenAI’s roughly 770 employees signed a letter threatening to leave the company unless the current board reinstates Altman as CEO and resigns.
OpenAI, for its part, has announced former Twitch CEO Emmett Shear as its newest CEO, the third in three days.
The messy — and ongoing — corporate divorce story has thrown a spotlight on the internal tensions between OpenAI’s nonprofit origin story and its increasingly profitable and powerful for-profit generative AI product line.
At the center of it all is a seemingly simple question that is seemingly impossible to answer: what does it mean to be good for humanity?
Read also: The Existential Threat That Microsoft Missed — and Could Put Its GenAI Future at Risk
When OpenAI was formed in 2015, the company’s structure was purpose-built around a specific goal: responsibly developing an artificial general intelligence (AGI) that would benefit humanity.
Altman’s departure is rumored to have sprung from a long-simmering disagreement over the speedy deployment and commercialization of the company’s public-facing ChatGPT and other generative AI products.
As PYMNTS CEO Karen Webster wrote, following Altman’s ouster and the resignation of co-founder Greg Brockman, the OpenAI board consisted of four people, including the company’s third co-founder. They were tech-heavy and public policy-focused, and remarkably light on the business skills essential to monetizing and scaling generative AI.
But none of that should be surprising given its stated mission: the nonprofit’s principal beneficiary is humanity, not OpenAI investors or customers. It’s not clear from reading the board manifesto what “humanity” as the principal beneficiary means, nor how the board would weigh lost jobs from AI against lives saved from medical advances, for example.
Generative AI is a transformative innovation, and its backers within the e/acc “movement” and elsewhere see the productivity gains from AI as the next great leap forward for capitalism.
SoftBank founder Masayoshi Son has argued that people who avoid AI and those who use it will be as different as apes and humans in intellectual abilities.
Marc Andreessen, the billionaire founder of VC firm Andreessen Horowitz, has even penned a “techno-optimist manifesto” where he claims: “We are being lied to. We are told that technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.”
“We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone,” Andreessen added, espousing a belief that innovation and capitalism should be maximally exploited to achieve radical social change, even at the expense of social stability in the present.
Emmett Shear, OpenAI’s new CEO, summed up the tech sector’s cultural rift in a Friday post on X, before his installation atop the AI company: people consider AI to be either roughly equivalent to the internet or a million times more powerful, and depending on which, they want either to accelerate the field or to slow it down.
wake up babe, AI faction compass just became more relevant pic.twitter.com/MwYOLedYxV
— Emmett Shear (@eshear) November 18, 2023
It remains to be seen where he will land on his own compass when he takes the reins at OpenAI.
Read also: Calls to Pause AI Development Miss the Point of Innovation
The OpenAI situation is a messy one, and cannot be cleanly broken down into boxes like “scientists want safety vs. investors want innovation,” or “accelerationists vs. decelerationists.”
But it raises the question: When it comes to advances like AI, who gets to decide what is “good” and what is “bad”? After all, that very question has plagued humanity since we first gained consciousness.
It is a question being played out in real time across the entire AI ecosystem. Meta, for example, has reassigned its Responsible AI team to other duties, while on Sunday (Nov. 19) Kyle Vogt resigned as CEO of autonomous driving company (and GM subsidiary) Cruise.
The resignation came after one of the company’s driverless cars hit a San Francisco pedestrian and dragged her about 20 feet, and the company failed to disclose the full extent of the incident. The situation put Cruise in regulators’ crosshairs.
Yet human drivers kill tens of thousands of people every year in America alone. The classic pitch for self-driving technology, that it could prevent many of those deaths, remains appealing, just as the broader benefits of AI do.
The moral and ethical considerations circling today’s most cutting-edge innovations are as important as they are thorny.
Each day we are faced with a new version of the famous Trolley Problem, a question with no answer, whose ongoing purpose is to provoke thought and discourse.