
UK Lawmaker Introduces AI Regulation Bill in House of Lords

March 4, 2025

Lord Holmes of Richmond has introduced the Artificial Intelligence (Regulation) Bill in the UK House of Lords, urging immediate action to regulate AI and address the risks associated with the technology. Per a statement, the bill aims to fill the regulatory gaps that have emerged as AI continues to advance rapidly, with the government recently placing regulation on the back burner.

The bill proposes the creation of an AI Authority in the UK tasked with aligning sector-specific regulators and identifying gaps in existing oversight. This new authority would also be responsible for monitoring the economic risks tied to AI, conducting horizon scanning of developing technologies, and facilitating sandbox initiatives that allow for the testing of new AI models. Additionally, it would accredit AI auditors to ensure compliance with industry standards.

One of the central aspects of the bill is the establishment of regulatory principles that govern the development and use of AI. These principles include safety, security, transparency, fairness, accountability, governance, contestability, and redress. In addition, the bill calls for long-term public engagement on the risks and opportunities AI presents, particularly in terms of transparency around third-party data usage and ensuring informed consent regarding the use of intellectual property in AI training datasets.

Lord Holmes originally introduced the bill in late 2023, following the AI Safety Summit at Bletchley Park. However, it fell when Parliament was dissolved ahead of the general election. Since then, Lord Holmes has expressed concern over the government’s apparent shift away from its previous commitment to regulating AI in the UK. In a statement to Computing, he explained, “Weeks after the AI Safety Summit and declaration, I introduced my private members bill – the AI (Regulation) Bill to Parliament. I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it.”


He also stressed the urgency of addressing the pace at which AI is evolving, noting that it is advancing far faster than legislative processes can keep up. “The intention of my Bill is to do precisely what was promised in that declaration, to ensure the ‘human-centric, trustworthy and responsible’ use of AI,” Lord Holmes stated. He further criticized the government’s change in approach, suggesting that the current administration is now siding with the US and Big Tech, with no clear sign of the promised binding AI regulations.

Although the bill acknowledges the theoretical risks posed by AI, Lord Holmes’ focus remains on the real-world impact the technology is already having on people’s lives. Last week, he published a report detailing the current challenges faced by individuals as AI technology continues to evolve without sufficient regulation. The report highlights issues such as bias in AI algorithms, disinformation from synthetic imagery, scams utilizing voice-mimicking technology, copyright theft, and unethical chatbot responses.

Lord Holmes elaborated on the importance of these issues, explaining that the report considers the real-world experiences of individuals facing the consequences of under-regulated AI. He specifically cited the case of jobseekers, noting that AI is increasingly being used in recruitment processes, yet there are currently no specific laws to govern how AI is applied in employment decisions. According to a statement, the report highlights how the proposed AI Regulation Bill would address these challenges and provide the necessary protections for individuals.

Source: Computing