AI Firms Look to Placate Their Governments as US Eyes Legislation

It has been a packed week in Washington for generative artificial intelligence (AI).

Senate Majority Leader Chuck Schumer of New York kicked off his series of closed-door, bipartisan “AI Insight Forums” Wednesday (Sept. 13).

Twenty-two tech leaders and AI experts, among them some of the richest individuals in human history at the helm of many of the world’s most valuable companies, descended on D.C. to speak to all 100 senators about how to effectively regulate AI’s pitfalls without hamstringing its potential.

“Since taking office, President [Joe] Biden, Vice President [Kamala] Harris, and the entire Biden-Harris administration have acted decisively to manage the risks and harness the benefits of artificial intelligence (AI),” the White House said in a statement Tuesday (Sept. 12). “As the administration moves urgently on regulatory action, it is working with leading AI companies to take steps now to advance responsible AI.”

The Senate Judiciary Subcommittee on Privacy, Technology and the Law held an hours-long hearing Tuesday in which senators grilled top executives from Microsoft and Nvidia, as well as a privacy and technology law expert, on how to legislate AI at the federal level.

The White House also said in its Tuesday statement that it had secured a second round of voluntary commitments from eight companies, including Nvidia, to drive the safe, secure and trustworthy development of AI technology.

Meanwhile, Chinese tech giant Alibaba said it would open its AI model to the public, Reuters reported Wednesday (Sept. 13). The move suggests the company has received Beijing’s regulatory approval to commercialize its own generative AI.

Read also: Tech Companies Point to Self-Regulatory Strategies Before Senate AI Hearings

Establishing AI Standards for Safety, Security and Trust

AI’s fast-moving technology is notoriously hard to corral. But now a growing list of the tech sector’s biggest names have voluntarily agreed to self-regulate their innovations in the absence of any overarching federal policy.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all agreed to guidelines set by the White House in July, and now Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability have joined them.

Per the White House’s Tuesday statement, the firms have agreed to three overarching action items: ensuring their products are safe before introducing them to the public; building AI systems that put security first; and ensuring that their products can earn the public’s trust.

This includes, among other requirements, committing to develop watermarking systems for AI-generated content, making models auditable by third parties, sharing best-practice information, and submitting models to both internal and external testing.

“Uncontrollable general AI is science fiction,” Nvidia Chief Scientist and Senior Vice President of Research William Dally told U.S. lawmakers Tuesday. “At the core, AIs are based on models created by humans. We can responsibly create powerful and innovative AI tools.”

The White House also said in its Tuesday statement that it is developing an upcoming executive order on AI “to help America lead the way in responsible AI development.”

Still, certain nonprofit groups have written letters to lawmakers expressing their concern that tech companies are playing too influential a role within AI regulation discussions.

“Their voices can’t be privileged over civil society,” said the Center for AI and Digital Policy, an independent nonprofit research organization that assesses national AI policies and practices.

The group also objected to the Senate holding a closed-door meeting with tech leaders, saying that “the work of Congress should be conducted in the open.”

See also: From PopeGPT to the Pentagon: All Eyes on Gen AI Oversight

Fears of Regulatory Capture

“If you actually have the skills to regulate something like the AI industry, if you have some deeper knowledge, deeper understanding, then the most profitable jobs will be in the industry and not with the public regulators,” Dr. Johann Laux told PYMNTS last month.

That skills gap, along with lawmakers’ growing reliance on industry insiders for expertise, sits at the heart of ongoing fears among observers and civil society groups that U.S. regulators may craft AI legislation and technical standards that cater to the industry’s own interests, a phenomenon known as regulatory capture.

“For too long, we’ve misplaced trust in Big Tech to self-regulate and mistaken technological advances for societal progress,” wrote a group of tech sector nonprofits in their AI policy proposal called Zero Trust AI Governance.

Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS in June that industry players should approach lawmakers with the attitude of, “We know this is new, we know it’s a little bit spooky, let’s work together on rules, laws and regulations, and not just ask for forgiveness later because that will help us grow as an industry.”

“[AI regulation is] going to be an ongoing continuous process of interaction between government and the private sector to make sure that the public gets all of the benefits that can come from this technological innovation but also is protected from the harms,” Professor Cary Coglianese, founding director of the Penn Program on Regulation, told PYMNTS in August.

Still, lawmakers appear aware of the need to balance seeking technical expertise with protecting fundamental rights.

“Congress outsourced [regulating] social media to the biggest corporations in the world, which has been a disaster,” said Sen. Josh Hawley of Missouri during Tuesday’s Senate hearing.

“We need to learn from our experience with social media,” said Sen. Richard Blumenthal of Connecticut during the same hearing. “If we let this horse get out of the barn, it will be even more difficult to contain.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.