What’s Missing from America’s AI Safety Pledge? Any Mention of the EU

The U.S. has historically regulated technical innovations sector by sector, rather than by overall capability.

This generally results in limited use of certain technologies within specific industries (such as facial recognition or other biometrics) rather than in outright bans on those technologies.

But as it relates to artificial intelligence (AI), America may need to paint with a broader brush.

That’s because the White House, and by extension the entire nation, is only at the beginning of the rulemaking process for an AI-focused regulatory framework, following a meeting last Friday (July 21) at which seven leading AI companies made eight promises about what they’ll do with their technology.

Senior representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all met with the Biden Administration and voluntarily committed to help move toward safe, secure and transparent development of AI technology.

“As we advance this agenda at home, the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI worldwide,” the White House said in a public release.

The Administration noted that it had “already consulted” on the voluntary commitments made by some of the AI sector’s top companies with a long list of other nations: Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the U.K.

Notably absent from the list of peer nations? Any mention of Brussels or China, both of which are moving forward with their own approaches to regulating the emergent AI industry.

See also: How AI Regulation Could Shape Three Digital Empires

A Long and Difficult Path Toward AI Rulemaking

Many of the seven companies issued their own statements, saying they would work with the White House while also emphasizing that the guardrails agreed upon were voluntary and non-binding.

“The companies developing these pioneering technologies have a profound obligation to behave responsibly and ensure their products are safe,” the White House said.

“This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the US and around the world,” wrote OpenAI.  

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” said Microsoft President Brad Smith.

“It takes a village to craft commitments such as these and put them into practice,” Smith added.

The voluntary commitments address the risks posed by advanced AI models and promote specific practices meant to reinforce the safety, security and trustworthiness of AI technology across the ecosystem.

A recent investigation by the Federal Trade Commission (FTC) into practices at OpenAI highlights some of the primary risks that firms at the forefront of the AI ecosystem face when developing their products with little oversight.

Read more: UN Security Council Wants to ‘Exercise Leadership’ in Regulating AI

Making AI Safer, More Secure, and More Beneficial to the Public

Countries worldwide are grappling with the same questions about how to regulate AI, with China becoming the first major market economy to pass an interim set of rules governing the technology’s applications.

Even the United Nations Security Council held its first high-level briefing on AI last week (July 18) to discuss the threat the technology could pose to international peace and stability, with Secretary-General António Guterres calling for a globally coordinated approach to reining in AI’s potential perils while supporting its potential good.

The set of principles designed by the U.S. to make AI technologies safer, and agreed to by the companies, includes third-party security checks and requires content produced by AI to be watermarked to help stem the spread of misinformation.

Observers have noted that many of the agreed-upon practices were already in place at leading AI companies and don’t represent new regulation.

The commitment to self-regulation also drew criticism from consumer groups, including the Electronic Privacy Information Center (EPIC).

“While EPIC appreciates the Biden Administration’s use of its authorities to place safeguards on the use of artificial intelligence, we both agree that voluntary commitments are not enough when it comes to Big Tech. Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent, and protects individuals’ privacy and civil rights,” said Caitriona Fitzgerald, Deputy Director at EPIC.

In previous discussions with PYMNTS, industry insiders have compared the purpose of AI regulation in the West to both a car’s airbags and brakes and the role of a restaurant health inspector.