US-UK Alliance Pioneers AI Safety Tests, Earning Expert Praise

In a landmark move, the U.S. and U.K. have joined forces to develop safety tests for advanced artificial intelligence (AI), an initiative experts widely applaud as a critical step forward.

The agreement aims to align the two countries’ scientific approaches and accelerate the development of robust evaluation methods for AI models, systems and agents. It’s part of a growing global effort to address concerns about the safety of AI systems. 

“This new partnership will mean a lot more responsibility being put on companies to ensure their products are safe, trustworthy, and ethical,” AI ethics evangelist Andrew Pery of global intelligent automation company ABBYY told PYMNTS.

“The inclination by innovators of disruptive technologies is to release products with a ‘ship first and fix later’ mentality to gain first-mover advantage. For example, while OpenAI is somewhat transparent about the potential risks of ChatGPT, they released it for broad commercial use, its harmful impacts notwithstanding.”

Keeping AI Safe

The partnership follows commitments made at the AI Safety Summit in November 2023, where global leaders discussed the need for international cooperation in addressing the potential risks associated with AI technology. The summit, held in Bletchley Park, U.K., brought together representatives from governments, industry, academia, and civil society to discuss the challenges and opportunities presented by AI.

Under the terms of the agreement, the U.S. and U.K. AI Safety Institutes will collaborate to build a common approach to AI safety testing and share their capabilities to tackle the risks effectively. The institutes will conduct at least one joint testing exercise on a publicly accessible model and explore personnel exchanges to leverage collective expertise.

“AI is the defining technology of our generation,” U.S. Commerce Secretary Gina Raimondo said in a statement. “This partnership is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren’t running away from these concerns — we’re running at them. Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

AI Concerns

AI has been a growing concern in recent years, as the technology has advanced rapidly and become increasingly integrated into various aspects of society. While AI has the potential to bring significant benefits, such as improved healthcare, more efficient transportation, and personalized education, it also poses risks that must be carefully managed.

One of the primary concerns surrounding AI is the potential for bias and discrimination. AI systems are trained on data sets that may contain inherent biases, which can lead to unfair treatment of certain groups. For example, facial recognition systems have been shown to have higher error rates for people with darker skin tones, leading to potential misidentification and wrongful arrests.

Pery said that AI has been shown to amplify biases across facial recognition, employment, credit and criminal justice, profoundly impacting and marginalizing disadvantaged groups.

“Placing the burden on users and consumers who may be adversely impacted by AI results is unfair,” he added. “In many instances, AI harms may be uncontestable as consumers lack visibility to how AI systems work. A partnership between the U.S. and the U.K. that addresses this problem is extremely important in protecting the general public and promoting governance and best practices.”

Another concern is the potential for AI to be used for malicious purposes, such as cyberattacks, disinformation campaigns, and autonomous weapons. As AI becomes more sophisticated, it may be possible for bad actors to exploit the technology to cause harm on a large scale.

Nicky Watson, co-founder and chief architect at the AI firm Cassie, told PYMNTS that automated decisions by AIs also have potentially huge consequences for individuals. 

“It’s important that businesses can explain how these decisions were made, where the information came from, and provide avenues for individuals to request a review, where potentially incorrect data has been used,” she said. “This is already provisioned under Europe’s GDPR [General Data Protection Regulation], and U.S. businesses can expect similar customer expectations in the future.”

Global Effort to Make Safe AI

Governments and organizations worldwide have been working to develop guidelines and principles for responsible AI development and deployment to address these concerns. In 2019, the Organisation for Economic Co-operation and Development (OECD) released the OECD Principles on Artificial Intelligence, which provide a framework for the responsible development and use of AI. The principles emphasize the importance of transparency, accountability, and human-centered values in AI development.

The U.S. and U.K. have led this effort, investing heavily in AI research and development. The U.S. National AI Initiative, launched in 2020, aims to maintain American leadership in AI through increased funding for research and development, workforce training, and international cooperation. Similarly, the U.K.’s AI Sector Deal, announced in 2018, seeks to position the country as a global leader in AI by investing in skills, infrastructure and research.

While the new partnership marks a significant step forward in promoting ethical AI development, observers said its impact is not assured. The success of the agreement will depend on the implementation of robust safety protocols, regulatory frameworks, and ongoing collaboration, Zulfikar Ramzan, chief scientist and EVP of product and development at the AI firm Aura, told PYMNTS.

“By sharing expertise and best practices, the alliance has the potential to mitigate AI risks and ensure that emerging technologies align with human values and security,” he added.