US Senators Admit Tech Oversight Failures, Pledge Stronger Approach With AI 

The U.S. government’s track record of keeping up with innovative technology is relatively lackluster.

No new law meant to regulate the tech sector has been passed at the federal level since Microsoft’s infamous antitrust lawsuit.

And during Tuesday’s (Sept. 12) two-plus-hour hearing on “Oversight of AI: Legislating on Artificial Intelligence,” held by the Senate Judiciary subcommittee on privacy, technology and the law, policymakers admitted as much.

“Congress outsourced social media to the biggest corporations in the world, which has been a disaster,” said Sen. Josh Hawley, R-MO, the subcommittee’s ranking member.

“We need to learn from our experience with social media. If we let this horse get out of the barn, it will be even more difficult to contain than social media… We are dealing with those harms now,” emphasized Sen. Richard Blumenthal, D-CT, the chair of the subcommittee.

That’s why lawmakers are trying to get it right with artificial intelligence (AI), whose complex capabilities raise equally complex issues across a bevy of policy areas that lawmakers around the world are scrambling to deal with.

“Our interest is in [AI] legislation… Hearings are a means to that end,” Blumenthal said in his opening remarks.

Prior to the hearing, Blumenthal and Hawley unveiled a one-page framework for regulating AI, which was referenced repeatedly throughout the meeting.

The proposal calls for an AI licensing regime to be administered by an independent body, as well as for Congress to ensure that AI companies are legally liable for the harms of their AI systems.


Tech Leaders Call For A Human-in-the-Loop 

The witnesses for Tuesday’s hearing included William Dally, NVIDIA’s chief scientist and senior vice president of research; Brad Smith, the vice chair and president of Microsoft; and Woodrow Hartzog, a professor of law at Boston University focusing on privacy and technology law.

“Uncontrollable general AI is science fiction. At the core, AIs are based on models created by humans. We can responsibly create powerful and innovative AI tools,” Dally said.

The NVIDIA executive emphasized that no nation or company is able to control a chokepoint for developing AI, while noting that “the genie is already out of the bottle.”

“AI models are portable; they can go on a USB drive and can be trained at a data center anywhere in the world,” Dally said. “We can regulate deployment and use but cannot regulate creation. If we do not create AIs here, people will create them elsewhere. We want AI models to stay in the U.S., not where the regulatory climate might drive them.”

“To keep the threats of AI as science fiction, we must keep AI under the control of people,” Microsoft’s Smith said. “If a company wants to use AI to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply … we need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that’s needed.”

“There is no such thing as a neutral technology. Lawmakers should embrace existing laws like product liability and consumer protection and apply them to AI,” Hartzog said.


No Such Thing as a Neutral Technology

Hartzog said that the U.S. government should “flat out ban extremely dangerous or risky uses of AI, including biometric tracking, predictive policing, social scores… Facial recognition and biometric recognition tech should be prohibited outright. Also emotion recognition. We need bright line measures against these rather than procedural protections.”

Sen. Mazie Hirono, D-HI, asked the witnesses how the U.S. could confirm whether foreign governments were using AI to create disinformation.

Microsoft’s Smith replied that he prefers labeling AI-generated content, and expressed worry that if Microsoft were to take down content it would be accused of “censoring.”

When asked by Sen. John Kennedy, R-LA, whether consumers had a “right to know” if they were viewing content created by AI, the witnesses explained that it depended on the context.

Hartzog said that AI regulation should “use disclosures where effective; if disclosures are not effective, make it safe; if you can’t make it safe, it shouldn’t exist.”

Smith repeatedly expressed support for creating a licensing agency for “advanced AI in high-risk scenarios.”

“To prevent government from trampling innovation with their AI licensing regime, it should follow the civil aviation model of industry standards, national regulation and international coordination,” Smith said. He also pointed to the SWIFT financial system as a successful model.

“An AI model for medical procedures is high risk, so it should be licensed. A different model for controlling temperature in your apartment is less of a big deal to get wrong, and not life threatening. Regulate models which have high consequences if they go awry,” Dally said.

Remarks on AI replacing workers drew some controversy.

“AI will replace drive-through workers,” Smith said, adding that drive-through work does not require creativity. He expressed hope that AI could automate “routine and boring work” to free people up to be more creative. 

In response, Hawley called the idea “tech elitism,” saying AI shouldn’t take drive-through jobs.

Washington is going all-in on AI this week, with a closed-door meeting with 22 AI experts expected on Wednesday (Sept. 13) and another hearing Thursday (Sept. 14). It remains to be seen whether anything will come of it. To date, the U.S. has taken a sectoral approach to regulating tech innovations, focusing on specific risks within specific industries rather than painting a policy framework with a broad brush.