Can China’s First-Mover Advantage in Regulating AI Keep Innovation Alive?


As of this week, artificial intelligence (AI) is no longer a globally unregulated technology.

This, as China’s new rules governing the disruptive technology — composed of two dozen guidelines — go into effect Tuesday (Aug. 15).

They represent the first set of regulations from a major market economy being brought to bear on the emergent AI industry, whose hyper-rapid commercialization since the launch of OpenAI’s ChatGPT product last fall both captured the public imagination and catapulted tech sector valuations to new heights.

Foreign firms, including those from the European Union (EU) and the U.S., will need to comply with the set of 24 guidelines from China’s top internet watchdog, the Cyberspace Administration of China (CAC), if they want to do business within the nation’s borders.

And with a population of more than a billion, China is an attractive place to do business for tech platforms built explicitly to monetize scalability.

Leading Western economies, by contrast, are lagging when it comes to AI regulation — preferring, for now, to take a more hands-off approach to policing the risks of the new technology.

That is why, tempting as it may be to write off China’s AI regulations as irrelevant given the government’s near-authoritarian power, or to view them solely through the lens of a geopolitical competition, the list of guardrails should be taken seriously — and even studied for any clues Beijing’s approach can offer other nations.

After all, China is the largest producer of AI research in the world. Its approach to AI oversight will certainly add necessary context around the underlying structures and technical feasibility of different regulatory approaches.

Read more: How AI Regulation Could Shape Three Digital Empires

Beijing Takes a State-Centric Approach to AI Regulation

China’s fresh set of regulations underscores the delicate balancing act the nation’s ruling government faces in weighing state control of AI technology against enough support for domestic companies to compete globally and innovate at home.

Beijing plays by different rules than Washington when it comes to oversight of its tech companies, which are largely powerless to respond to or push back against government reprimands and oversight.

Beijing has established firm rules around ensuring that content generated by generative AI is in line with the nation’s core socialist values.

Maintaining social control through information access is a crucial goal of Beijing’s AI policy, which requires any company introducing an AI model that can be used by the domestic public to train its models on “legitimate data,” and to disclose that data to state regulators during spot checks.

Earlier this year, an AI chatbot from Yuanyu Intelligence was shut down just days after launch when the platform referred to Russia’s attack on Ukraine as a “war of aggression,” a characterization at odds with China’s officially espoused stance on the conflict.

In response, some Chinese firms are now using one large language model (LLM) to filter the output of their external-facing LLM products, scrubbing any content the state may find controversial before it reaches users.
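For readers curious what that gatekeeping pattern looks like in practice, the sketch below is a hypothetical, simplified illustration only: an external-facing model drafts a reply, and a second reviewing step (here a toy rule-based stand-in for a moderation LLM) approves or blocks it before release. None of the names, functions or policy terms come from any actual Chinese deployment.

```python
# Minimal sketch of the "LLM filters LLM" moderation pattern described above.
# The model calls are hypothetical stand-ins; real deployments would wrap
# whichever hosted or in-house models a firm actually uses.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ModeratedChatbot:
    generate: Callable[[str], str]   # external-facing LLM that drafts a reply
    review: Callable[[str], bool]    # second model/classifier that approves or rejects text
    fallback: str = "I can't help with that request."

    def respond(self, prompt: str) -> str:
        draft = self.generate(prompt)
        # Only release the draft if the reviewing step deems it compliant.
        return draft if self.review(draft) else self.fallback


# Toy stand-ins so the sketch runs end to end.
def toy_generate(prompt: str) -> str:
    return f"Here is an answer to: {prompt}"


def toy_review(text: str) -> bool:
    blocked_terms = {"war of aggression"}  # placeholder policy list, not a real one
    return not any(term in text.lower() for term in blocked_terms)


if __name__ == "__main__":
    bot = ModeratedChatbot(generate=toy_generate, review=toy_review)
    print(bot.respond("Summarize today's headlines"))
```

In practice, the reviewing step would itself be a large model prompted with the operator’s content policy, which is what makes this approach costlier than simple keyword filtering but harder for users to circumvent.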

The draft version of the 24 regulations, released this spring, included specific monetary fines for firms that deviated from the guidelines, but the final rules have since been relaxed to give AI providers a three-month grace period to bring their models in line with party doctrine.

AI-produced content “shall not … subvert the state power,” the CAC stated.

As PYMNTS originally reported, only firms planning to offer services to the Chinese domestic public need to submit security assessments, which suggests that firms working on enterprise-facing products or those intended for use by overseas users would be given more leeway, at least for now. Indeed, most Chinese tech companies — including Baidu, Alibaba and JD.com — have so far focused primarily on developing AI applications designed solely for industrial and enterprise use.

See also: From PopeGPT to the Pentagon: All Eyes on Gen AI Oversight

Washington Waits for the Dust to Settle Around AI

Countries worldwide are starting to ask the same questions about how to tackle regulating AI technology, and many of them are moving faster than the U.S.

The U.S. has historically regulated technical innovations sector by sector rather than by overall capability, an approach that generally results in limiting the use of certain technologies (such as facial recognition or other biometrics) within specific industries rather than banning the technology outright.

America has no pending AI legislation under any sort of serious consideration, despite several AI briefings being held this summer as lawmakers sought to get up to speed on regulating the technology.

Senate Majority Leader Chuck Schumer has described China’s efforts as a “wake up call to the nation,” warning that “urgent action is required for the U.S. to stay ahead of China and shape and leverage this powerful technology.”

Seven of America’s leading AI companies have voluntarily committed to developing their AI platforms and products safely, securely and transparently after a meeting at the White House last month (July 21), but the non-binding guardrails they agreed to are about as far as Washington has moved toward any sort of comprehensive policy.

Observers have noted that many of the practices agreed to don’t represent new regulations and were already in place at many AI companies.

“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania Law Professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series presented by AI-ID. “It’s not one static thing.”

Western observers believe that there must be an ongoing process of interaction between governments, the private sector and other relevant organizations for AI regulation to be effectively implemented in the U.S. — something that won’t magically happen with a single piece of legislation.

But China’s regulations will necessarily create new bureaucratic and technical tools around disclosure requirements, model auditing mechanisms and technical performance standards for AI platforms, all of which other nations can then decide whether to adopt themselves.

In previous discussions with PYMNTS, industry insiders have compared the purpose of AI regulation in the West to both a car’s airbags and brakes and the role of a restaurant health inspector.

It remains up to governments and regulators around the world to strike a regulatory balance that protects consumer privacy without hampering private sector innovation and growth — and how the chips fall going forward will no doubt shape the next decade or more of international engagement.