The Race to Regulate AI Risks Could Reshape Global Markets

While American companies lead the artificial intelligence (AI) development charge, U.S. policymakers risk falling behind.

The dangers inherent in the abuse of AI technology, including discrimination, algorithmic bias, disinformation and fraud, make it imperative that governments move to regulate the technology appropriately, and fast.

China is this week (May 10) wrapping up the public consultation period for its second round of generative AI regulation. The proposed framework builds on rules established in 2022 to regulate deepfakes.

The bulk of the breakthroughs in generative AI have happened in the U.S., but China leads America in consumer adoption of the technology, and market observers believe leaders in Beijing hope that faster-paced AI regulation efforts will drive even further uptake.

Xiaoice, the China-focused AI chatbot that Microsoft spun off into an independent company in 2020, has a user base almost double the size of the American population.

The European Union (EU) has also advanced rules to oversee both AI’s development and impact, reaching a provisional political deal on an artificial intelligence rulebook scheduled for a deciding vote Thursday (May 11).

The U.K. Competition and Markets Authority (CMA) said last week (May 4) that it would examine the systems and foundation models underlying AI tools, including large language models (LLMs), in what observers see as an early warning to the emergent sector.

Per the government statement, the CMA will review how the innovative development and deployment of AI can be supported against five key principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The CMA will publish its findings in September 2023.

Meanwhile, the U.S. is just getting started.

U.S. Lawmakers Agree More Work Is Needed

As PYMNTS reported, the White House last week (May 4) met with senior leadership from the biggest technology companies at the forefront of developing AI tools and products, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Microsoft Chairman and CEO Satya Nadella and Google and Alphabet CEO Sundar Pichai.

The meeting focused on the need for companies to be transparent about their AI systems, the importance of validating the safety, security and efficacy of those systems, and the need to keep them out of the hands of malicious actors. It represented the current presidential administration’s most visible effort to date to confront calls to regulate the industry.

After a brutal 2022 that saw sweeping headcount reductions and stock routs, many of the biggest tech companies, including Amazon, Microsoft, Alphabet and Meta, are doubling down on investments in generative AI and machine learning (ML) to spur future growth.

But while business interests push AI forward, many academics, technologists and researchers continue to warn about the technology’s dangers absent appropriate guardrails around its use.

Most recently, AI pioneer Dr. Geoffrey Hinton resigned from Google in order to “speak freely” about the potential risks that widespread integration of the technology may pose. Separately, Lina Khan, chair of the U.S. Federal Trade Commission (FTC), penned an op-ed emphasizing the need for immediate AI regulation to ensure the industry develops safely.

The Innovation-Regulation Dilemma

Observers believe that, based on historical behavior, tech industry executives generally see regulation as a hindrance to AI development and a drag on their potential profits and competitive differentiation.

Still, the first step toward effective and proportionate AI regulation is for the tech industry itself to agree upon and implement common principles around transparency, accountability, explainability and fairness.

Chatbots, deepfake image generators and AI-driven search platforms are quickly proliferating. Businesses could start by agreeing never to pass off chatbots as humans, or deepfake media as anything but the creation of an AI tool, among other operational table stakes.

A next step would be for regulators across the areas most affected by AI’s potential for both progress and harm, including employment, privacy and human rights, data protection, entertainment, news and media, and financial services and banking, to identify the risks that AI and model bias might pose.

Separately, government agencies at both the federal and state level need to deepen their own technical expertise around AI.

Whoever gets to a productive, workable framework for regulating AI first will likely be able to project those standards globally, opening the door to a lucrative and innovative market.