Governments around the world are racing to regulate artificial intelligence (AI) and its groundbreaking innovations.
But the technology’s hyper-rapid pace of advancement, which some observers say already outstrips the two-year doubling cadence of Moore’s Law, is complicating governments’ efforts to align globally on the laws that should govern the development and use of AI.
This, as a leaked analysis of the EU’s AI Act by the U.S. State Department warns that the legislation put forward by the 27-nation bloc might inadvertently favor large AI companies at the expense of smaller startups, as well as curb investment in AI technology due to its onerous compliance costs.
The document, which was obtained by Bloomberg News, has reportedly already been shared with EU lawmakers and includes feedback such as granular line edits to certain provisions of the EU Parliament’s AI Act.
Because training large language models (LLMs), as well as their next-generation large multimodal model (LMM) counterparts, is so resource-intensive, the U.S. analysis alleges that the EU’s risk-based framework, as written, is likely to hamper “investment in AI R&D and commercialization in the EU, limiting the competitiveness of European firms.”
In what is commonly known as the “Brussels Effect,” multinational enterprises frequently standardize their global operations to adhere to EU regulations, no matter where the business is taking place, meaning that the EU’s AI Act could have a global ripple effect across all major market economies.
While the EU was the first to draft a piece of legislation geared toward regulating AI, China was the first major economy to enact an AI policy, with a prescriptive set of guardrails going into place this past August.
The advent of AI has put nations around the world in a unique position. The foundationally disruptive aspects of the technology hold the potential to reshape long-standing tenets of not just international relations but also of global interoperability and business growth.
As PYMNTS has written, the speed at which AI’s capabilities are evolving makes the present moment a time of increasing urgency for businesses, governments and both inter- and intra-national institutions to understand and support the benefits of AI while working to mitigate its risks.
Formalizing a coherent approach to AI was at the center of this year’s Group of 20 (G-20) meeting in September, where leaders pledged to ensure “responsible AI development, deployment and use” that would safeguard rights, transparency, privacy and data protection, and agreed to seek a “pro-innovation regulatory/governance approach” that capitalizes on the benefits of AI while not losing sight of potential risks.
And now, the Group of Seven nations are preparing to ask AI companies to agree to 11 draft guidelines meant to mitigate AI risks as part of a plan aimed at uniting divided approaches to AI policy in the EU and U.S., per a Bloomberg report.
Speaking with PYMNTS last month, Cary Coglianese, the Edward B. Shils Professor of Law and professor of political science at the University of Pennsylvania Law School, said that regulating AI will be a multifaceted effort that varies according to the type of algorithm and its uses.
The EU is separately considering a plan to place additional, tiered constraints on the biggest artificial intelligence systems under its upcoming AI Act.
“If you make it difficult for models to be trained in the EU versus the U.S., well, where will the technology gravitate? Where it can grow the best — just like water, it will flow to wherever is most easily accessible,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS in June. “It’s an interesting Rorschach to figure out.”
That’s why there exists an opportunity for America to lead the way by supporting innovation while being smart and clear-eyed about the risks of AI technology.
Still, once any AI bills become law, the nations responsible for them face a long road to implementing them successfully.