
Sixteen prominent companies leading the charge in Artificial Intelligence (AI) development have made a resolute pledge to global leaders to prioritize the safe advancement of this transformative technology. The commitment comes amidst a backdrop of rapid innovation that outpaces regulatory frameworks, raising concerns about emerging risks.
According to a report by Reuters, the pledge was made during a global meeting, where industry giants such as Google, Meta, Microsoft and OpenAI, alongside firms from China, South Korea and the United Arab Emirates, joined forces.
This coalition was supported by a broader declaration from influential entities including the Group of Seven (G7) major economies, the European Union (EU), Singapore, Australia and South Korea. The virtual meeting, hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, served as a platform to underscore the importance of AI safety, innovation and inclusivity.
Emphasizing the urgency of the matter, President Yoon highlighted how AI safety is essential to societal wellbeing and democracy, citing concerns over risks such as deepfake technology. The agreement reached at the meeting prioritized AI safety, innovation and inclusivity, according to South Korea’s presidential office.
Participants stressed the significance of interoperability between governance frameworks, proposed the establishment of a network of safety institutes and advocated for engagement with international bodies to strengthen collective efforts in addressing AI-related risks effectively.
Among the companies committing to AI safety were notable names such as Zhipu.ai, backed by China’s tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, Amazon, IBM and Samsung Electronics, as reported by Reuters. These companies pledged to publish frameworks for assessing safety risks, to refrain from deploying models whose risks could not be adequately mitigated and to uphold principles of governance and transparency.
Commenting on the declaration, Beth Barnes, founder of METR, a group dedicated to promoting AI model safety, underscored the necessity of international consensus to define “red lines” beyond which AI development could pose unacceptable risks to public safety, according to Reuters.
Source: Reuters