Meta and OpenAI CEOs Back EU Artificial Intelligence Rules

The heads of Meta and OpenAI have shown support for government regulation of artificial intelligence (AI).

Meta CEO Mark Zuckerberg and Sam Altman, chief executive of OpenAI, voiced their support for state-sponsored AI guidance following discussions with European Commissioner Thierry Breton, Bloomberg News reported Friday (June 23).

Breton said he and Zuckerberg were “aligned” on the European Union’s (EU) AI regulations, with the two agreeing on the EU’s risk-based approach and on measures like watermarking.

Altman, meanwhile, said he looks forward to working with the EU on AI regulations. Bloomberg noted that the discussions were part of Breton’s tour of tech companies. Following his meetings, Breton said Meta seemed prepared to meet Europe’s new AI rules, though the company will undergo a stress test of its systems in July.

Breton also met earlier this year with Google CEO Sundar Pichai, who agreed on the need for voluntary rules around AI.

Earlier this month, the European Parliament approved a draft law known as the AI Act, considered the world’s first comprehensive set of AI rules. The final law is expected to be approved early next year, if not by the end of 2023.

As PYMNTS wrote, the EU’s proposed legislation would limit some uses of the technology and classify AI systems according to four levels of risk, from minimal to unacceptable. The approach would focus on applications that present the greatest potential for human harm, similar to the drug approval process.

AI systems in high-risk sectors, including critical infrastructure, education, human resources, public order and migration management, will face strict requirements such as transparency and accuracy in data usage.

Companies that violate the regulations could face fines of up to €30 million ($33 million) or 6% of their annual global revenue.

Last week brought reports that OpenAI successfully lobbied for changes to the act to reduce the regulatory burdens the company would have faced.

For example, the company reportedly argued successfully that its general-purpose AI systems should not be included in the AI Act’s high-risk category.

Meanwhile, PYMNTS recently looked at the potential for generative AI to further power mobile robots in a range of industries.

“The rapid pace of advancement has led many to believe that this particular moment in time represents the perfect intersection where a robot body with an AI mind could be a functional reality,” the report said. 

“With warehouse suppliers in the U.S. predicting they will run out of people to hire by 2024, the timing for bringing to market an intelligent, all-purpose robot couldn’t be better.”