Jan Leike, who resigned from artificial intelligence (AI) company OpenAI earlier in May, has joined rival AI firm Anthropic.
Leike announced the move in a post on X (formerly Twitter) on Tuesday (May 28), noting that he will work on “scalable oversight, weak-to-strong generalization, and automated alignment research.”
Anthropic is backed by a $4 billion investment from Amazon and is behind the generative AI chatbot Claude, its answer to OpenAI’s ChatGPT.
Anthropic has drawn other high-profile personnel to its ranks lately, such as Instagram Co-Founder Mike Krieger, who joined the company earlier this month as its chief product officer. It also tapped Airbnb veteran Krishna Rao, who joined last week as its chief financial officer.
While at OpenAI, Leike co-led the superalignment team, which focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us,” the company wrote in a blog post last July when it introduced the group.
The team was dissolved May 17, PYMNTS reported at the time, after its leaders resigned from their posts. In a post on X that day, Leike cited disagreements with the company’s priorities.
OpenAI also lost its co-founder and chief scientist, Ilya Sutskever, that same week.
“After almost a decade, I have made the decision to leave OpenAI,” Sutskever wrote, per PYMNTS reporting at the time. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI [artificial general intelligence] that is both safe and beneficial under the leadership of [CEO Sam Altman, President Greg Brockman and Chief Technology Officer Mira Murati].”
Sutskever also led the superalignment team with Leike.
In response to their departures and the dissolution of the AI safety team, Altman and Brockman wrote in a joint message on X on May 18 that they were aware of both the risks and the potential of AGI. They said the company had called for international AGI standards and helped “pioneer” the practice of examining AI systems for catastrophic threats.
“Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems,” the executives wrote.
“Figuring out how to make a new technology safe for the first time isn’t easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.”