
Report: Meta Reassigns Responsible AI Team to New Duties

Meta is reportedly reassigning its Responsible AI team to other in-house artificial intelligence (AI) projects.

The team members will continue working to prevent AI-related harm, Reuters reported Saturday (Nov. 18), citing a statement from a Meta spokesperson.

The spokesperson said the tech giant aims to bring staff closer to the creation of core products, with most of the Responsible AI team moving to generative AI, where it “will continue to support relevant cross-Meta efforts on responsible AI development and use,” and others joining AI infrastructure.

“We continue to prioritize and invest in safe and responsible AI development and these changes will allow us to better scale to meet our future needs,” the spokesperson said.

Meta officials said during an earnings call last month that AI will be the company’s focal point in the year ahead. 

“In terms of investment priorities,” CEO Mark Zuckerberg said, “AI will be our biggest investment area in 2024 — in engineering and computing resources.”

That means de-prioritizing at least some non-AI projects in favor of those focused on the emerging and advanced technology — and hiring for AI roles in 2024.  

Speaking with analysts, CFO Susan Li fielded a question about AI’s continued use in advertising and in new use cases.

“You’ll see that we have been increasingly testing these in our AI sandbox,” said Li. “As they become more mature, we’ll incorporate them into our ads manager directly.”

The company’s latest move is happening at a moment when voluntary commitments around AI safety have become “all the rage,” as PYMNTS wrote last week, following the signing of such a commitment by a group of venture capital (VC) firms.

The commitment, announced last week, focuses on five main points: a commitment to responsible AI, appropriate transparency and documentation, risk and benefit forecasting, auditing and testing, and feedback cycles with ongoing improvements.

“The VC-signed voluntary agreement is meant to demonstrate leadership from the private sector around controlling for AI’s risks, but it has sparked a debate among AI founders, with some in the AI field even pulling out of scheduled meetings with VCs,” PYMNTS wrote.

These founders have spoken of “how public statements like RAI endanger open-source AI research and contribute to regulatory capture.”

Or as one Web3 CEO put it: “Thanks, very helpful list to not accept money from.”