A growing chorus of lawmakers and technology leaders is calling for new regulations to govern the rapid rise of artificial intelligence (AI) as the transformative but largely unchecked technology permeates more aspects of daily life.
On Wednesday (May 15), a bipartisan group of senators unveiled a long-awaited plan to bolster U.S. AI efforts, calling for a significant funding boost while largely deferring action on the thorny issue of regulating the rapidly advancing technology.
The legislative blueprint, laid out in a 20-page document titled “Driving U.S. Innovation in Artificial Intelligence,” urges ramping up government and private-sector AI research and development to $32 billion annually by 2026.
The plan was introduced by Senate Majority Leader Chuck Schumer, D-N.Y., and three colleagues.
U.S. politicians have been calling for more AI regulations for some time.
Sen. Mitt Romney last month called for increased federal regulation of AI. However, such efforts are likely to encounter obstacles because the technology's rapid development and broad applications make effective oversight a complex challenge, according to experts. While many acknowledged the potential risks associated with AI, they emphasized the difficulty of determining which AI systems require stringent regulation and underscored the need to strike a balance between fostering technological innovation and mitigating those risks.
AI regulation is also facing headwinds at the state level. Colorado and Connecticut introduced legislation this year aiming to become national leaders in regulating artificial intelligence, targeting companies that develop and deploy AI systems and prohibiting discrimination in critical services such as healthcare, employment and housing. However, Connecticut's effort collapsed after the state's Democratic governor threatened a veto, citing concerns about stifling the nascent industry. Meanwhile, Colorado's bill faces intense pressure from the tech lobby, which argues against a state-by-state regulatory approach, leaving lawmakers nationwide closely monitoring the outcome.
The national movement to rein in AI is part of a larger global effort. The U.S. and China are set to engage in their first high-level talks on AI, which experts believe could significantly impact the future of international commerce. The discussions aim to establish a foundation for managing AI technology and regulation, shaping policies, cooperation and safeguards against accidental mismanagement or deliberate weaponization that could devastate markets, industries and security systems.
The private sector is also pushing for change.
OpenAI CEO Sam Altman has expressed support for establishing an international agency to regulate artificial intelligence, citing concerns that advanced AI systems could cause "significant global harm" in the near future.
Speaking on the “All-In” podcast on Friday (May 10), Altman emphasized the need for a balanced approach to regulation, cautioning against both excessive and insufficient oversight. He believes that AI systems could soon have negative impacts that transcend national borders, necessitating an international regulatory body to ensure the safety of the most powerful AI models.
“I think there will come a time in the not-so-distant future — like we’re not talking decades and decades from now — where frontier AI systems are capable of causing significant global harm,” Altman said on the podcast.
He said he believes AI systems will have a “negative impact way beyond the realm of one country” and wants to see them regulated by “an international agency looking at the most powerful systems and ensuring reasonable safety testing.”