UK’s AI Safety Institute to Open US Office Amid Growing Calls for Global Collaboration

Britain’s AI Safety Institute announced it will open an office in the U.S. The new branch, located in San Francisco, aims to foster deeper collaboration with American counterparts and enhance the institute’s efforts to manage the rapid development of AI technologies.
The U.S. office is set to open this summer and will recruit a team of technical staff to work alongside the organization’s London base. British government officials described the expansion as a strategic step to strengthen ties and facilitate greater international dialogue on AI safety.
The announcement, reported by Reuters, comes at a critical time. AI is advancing swiftly, with some experts warning it could pose existential threats on par with nuclear weapons or climate change. Such warnings underscore the urgent need for coordinated global regulation of AI technology.
The timing of the announcement is notable, as it precedes the second global AI safety summit. This summit, co-hosted by the British and South Korean governments, will take place in Seoul this week. The event aims to build on the discussions from the first AI safety summit held at the UK’s Bletchley Park in November 2023, where global leaders and top executives, including U.S. Vice President Kamala Harris and OpenAI’s Sam Altman, convened to address AI safety and regulation.
The initial summit saw significant international engagement, with China co-signing the “Bletchley Declaration” alongside the U.S. and other nations, signaling a rare moment of collaboration amidst geopolitical tensions.
Britain’s technology minister, Michelle Donelan, emphasized the importance of this transatlantic collaboration. “Opening our doors overseas and building on our alliance with the U.S. is central to my plan to set new, international standards on AI safety, which we will discuss at the Seoul summit this week,” Donelan stated.
The institute’s initiative comes in the wake of increasing public and professional scrutiny of AI. In the months after Microsoft-backed OpenAI released ChatGPT in November 2022, a wave of concern swept through the tech community. High-profile figures, including Tesla CEO Elon Musk, signed an open letter calling for a six-month pause in AI development, citing unpredictable and potentially dangerous consequences.
Source: Reuters