Stability AI CEO Resigns and Calls for ‘Transparent’ AI Governance

Stability AI’s founder and CEO has resigned to pursue decentralized artificial intelligence (AI).

Emad Mostaque has stepped down as the AI firm’s chief executive and from the company’s board, Stability announced Friday (March 22) on the company’s blog. 

“I am proud two years after bringing on our first developer to have led Stability to hundreds of millions of downloads and the best models across modalities,” Mostaque said. “I believe strongly in Stability AI’s mission and feel the company is in capable hands. It is now time to ensure AI remains open and decentralized.”

The company has appointed Chief Operating Officer Shan Shan Wong and Chief Technology Officer Christian Laforte as interim co-CEOs.

Writing on X, Mostaque argued that it is not possible to beat “centralized AI” with more “centralized AI,” a remark that TechCrunch, which first flagged his resignation, interpreted as a reference to the ownership structure of top AI companies such as OpenAI and Anthropic.

He added that the decision to step down was his own, noting that he held the largest number of controlling shares in Stability.

“We should have more transparent & distributed governance in AI as it becomes more and more important. It’s a hard problem, but I think we can fix it …” Mostaque said. “The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability & elsewhere.”

As the TechCrunch report noted, this was one of two major AI leadership shake-ups this week: Microsoft also hired DeepMind co-founder Mustafa Suleyman to head its consumer AI unit, bringing along most of the staff of Suleyman’s startup Inflection AI.

Elsewhere in the AI space, PYMNTS wrote last week about new research into a training method called “Quiet-STaR” that improves the reasoning abilities of AI systems. 

The approach involves having the model generate multiple internal rationales before responding to a conversational prompt, much as humans think before speaking.
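Quiet-STaR’s actual training procedure is considerably more involved, but the core idea of producing several internal rationales and answering from the best one can be loosely sketched as follows. Everything here is illustrative: the function names, the toy rationale generator, and the length-based scoring are invented for the example and are not part of the published method.

```python
# A loose, illustrative sketch of "think before answering":
# sample several candidate internal rationales, score each, and
# respond using the best one. This is NOT Quiet-STaR's training
# procedure; the names and scoring below are invented stand-ins.

def answer_with_rationales(question, generate_rationale, score, n=4):
    """Generate n internal rationales and keep the best-scoring one."""
    candidates = [generate_rationale(question, i) for i in range(n)]
    best = max(candidates, key=score)
    return best  # the response derived from the best internal rationale

# Toy stand-ins for a real model's sampling and scoring:
def toy_rationale(question, seed):
    # Pretend each seed yields a different chain of thought.
    return f"rationale {seed} for: {question}"

def toy_score(rationale):
    # Pretend longer rationales are more useful.
    return len(rationale)

print(answer_with_rationales("2+2?", toy_rationale, toy_score))
```

In a real system, the rationale generator would be the language model itself and the score would come from how much each rationale improves the model’s prediction of the correct answer; the sketch only shows the sample-then-select shape of the idea.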

“The performance of AI will significantly improve if it can think like a human,” Venky Yerrapotu, the CEO of 4CRisk, which develops AI products for the compliance and risk sector, said in an interview with PYMNTS.

“Human-like thinking is unique and complex and communicates with context, nuance and implied meanings. AI with the capability to seamlessly understand human intent (and we are seeing LLMs [large language models] getting to this stage) can execute complex queries.”