
Is the EU’s AI Act Historic or Prehistoric?

The European Union officially reached a provisional agreement on its Artificial Intelligence Act on Friday (Dec. 8).

It took three days and roughly 36 hours of political back-and-forth, but the bloc’s 27 member nations agreed on world-leading rules for policing the technical innovation that is generative AI.

Now comes the hard part: hashing out the specific details and scope of the binding laws, which will go into effect two years after being approved.

The European Commission, the European Council, and the European Parliament kicked off the first of more than 11 scheduled meetings Tuesday (Dec. 12) to prepare the final wording of the bill, which still needs to be approved before officially entering into law.

Tech companies will have two years to implement the yet-to-be-agreed-on rules. However, the AI Act’s ban on certain AI use cases, including facial and emotional recognition, will take effect after six months, and the compliance requirements for firms developing “high-risk” foundation models will go live within one year.

There will be no penalties for companies that fail to comply with the AI Act’s rules in the interim, although the EU is urging AI firms to voluntarily comply.

“The AI Act is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global AI race,” tweeted Thierry Breton, the EU’s commissioner for the internal market.

However, certain EU member nations and their domestic startups disagree. While the AI Act is significant as the first comprehensive, binding law governing AI, industry and government leaders worry that it focuses too much on controlling AI’s risks and not enough on capturing its rewards.

Read also: How AI Regulation Could Shape Three Digital Empires

Europe Could Have Prioritized Innovation

France and Germany, home respectively to the high-flying AI startups Mistral.ai and Aleph Alpha, are in early discussions about analyzing, and potentially disputing, compromises within the AI Act, fearing the law may end up hamstringing their domestic AI industries, Reuters reported.

“We can decide to regulate much faster and much stronger than our major competitors,” said French President Emmanuel Macron, per the Financial Times. “But we will regulate things that we will no longer produce or invent. This is never a good idea.”

For its part, German startup Aleph Alpha issued a statement “moderately” welcoming the political compromise on the AI Act.

One of the biggest challenges facing regulators when it comes to AI is developing a basic understanding of how the technology works so they can oversee it effectively without hindering its growth.

“Regulating foundation models is regulating research and development,” tweeted Yann LeCun, the chief AI scientist at Meta, speaking about the AI Act. “That is bad. There is absolutely no reason for it, except for highly speculative and improbable scenarios. Regulating products is fine. But regulating R&D is ridiculous.”

The EU, which has the world’s second-largest gross domestic product (GDP) and a population more than 100 million larger than that of the United States, trails both the U.S. and China when it comes to innovation, investment and incubation around AI. As of 2022, the EU and the United Kingdom combined had 8.9% as many granted AI patents as the U.S. and 3.7% as many as China.

Complying with the new requirements baked into the AI Act will demand significant resources, in effect diverting money and effort that EU companies, and smaller foreign startups looking to compete in the region, could otherwise have spent on AI research and development.

“[AI] is the most likely general-purpose technology to lead to massive productivity growth,” Avi Goldfarb, Rotman chair in AI and healthcare and a professor of marketing at the Rotman School of Management, University of Toronto, told PYMNTS in an interview posted Monday (Dec. 11). “…The important thing to remember in all discussions around AI is that when we slow it down, we slow down the benefits of it, too.”

But just what are the rules and requirements that make up the EU’s AI Act?

See also: Companies With AI-Driven Strategies Outcompete Peers, Study Finds

The EU Will Become the World’s Premier AI Police

The AI Act contains binding transparency and ethics rules, requiring foundation models and general-purpose AI systems to meet legally binding transparency obligations before being brought to market in the EU. These rules require AI firms to notify end users when they are interacting with AI systems or with content produced by them, require AI systems to be designed so that AI-generated content and media can be detected and flagged, and require compliance with EU copyright law.
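
To make the transparency obligation concrete, below is a minimal Python sketch of what a machine-readable AI disclosure could look like in practice. The `DisclosedContent` record, the `label_output` helper and the model name are hypothetical illustrations, not a format the Act prescribes.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class DisclosedContent:
    """AI-generated text bundled with a machine-readable provenance record."""
    text: str
    generated_by_ai: bool
    model_name: str

def label_output(text: str, model_name: str) -> str:
    """Attach a provenance record so downstream tools can detect and flag AI content."""
    record = DisclosedContent(text=text, generated_by_ai=True, model_name=model_name)
    return json.dumps(asdict(record))

# A chat service would send the labeled payload, and the client UI would
# render a "generated by AI" notice to the end user.
print(label_output("Here is a summary of your invoice...", "example-model-v1"))
```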

The EU has also set up a new oversight body to coordinate the enforcement of these rules, as well as to allow end users and EU citizens to lodge official complaints about AI systems.

The proposed fines for noncompliance with the AI Act range from 1.5% to 7% of an AI firm’s global sales and are pro-rated based on company size and offense severity.
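
For a rough sense of scale, here is a minimal Python sketch of how revenue-proportional penalties of that kind work out. Only the 1.5% and 7% bounds come from the reported range; the tier names and the example revenue figure are invented for illustration.

```python
# Hypothetical illustration of revenue-based AI Act fines.
# Only the 1.5% and 7% bounds come from the reported range; the tier
# names and the example revenue figure below are assumptions.

FINE_RATES = {
    "minor": 0.015,   # lower bound of the reported range (1.5% of global sales)
    "severe": 0.07,   # upper bound of the reported range (7% of global sales)
}

def estimated_fine(global_sales_eur: float, severity: str) -> float:
    """Return the revenue-proportional penalty for a given offense tier."""
    return global_sales_eur * FINE_RATES[severity]

# A firm with EUR 2 billion in global sales:
print(f"Minor offense:  EUR {estimated_fine(2e9, 'minor'):,.0f}")   # EUR 30,000,000
print(f"Severe offense: EUR {estimated_fine(2e9, 'severe'):,.0f}")  # EUR 140,000,000
```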

The AI Act also bans outright several uses of AI, including biometric categorization systems using sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage; emotion recognition in the workplace and educational institutions; social scoring based on social behavior or personal characteristics; and other manipulative use cases of AI technology.

However, the AI Act does not apply to AI systems that have been developed exclusively for national security and defense purposes.

The AI Act also requires foundation models and frontier AI systems, categorized across four risk levels, to comply with increasingly stringent restrictions before they are put on the market, including releasing detailed summaries of the content of their training data.

Still, there are some loopholes. For example, computing power is one of the criteria for categorizing an AI system’s risk, and only AI companies themselves know for certain how much computing power was used to train their models, meaning it may be up to the companies themselves to assess which band of rules they fall under.
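
As an illustration of that self-assessment loophole, here is a short Python sketch that keys a model’s tier to the training compute its developer reports. The 10^25 floating-point-operation cutoff reflects the threshold widely reported for “systemic risk” general-purpose models during the negotiations; the tier labels and obligations are assumptions for illustration.

```python
# A sketch of the self-assessment loophole described above: the risk tier
# hinges on training compute, a number only the developer knows precisely.
# The 1e25-FLOP cutoff reflects the widely reported systemic-risk threshold;
# the tier labels and obligations are illustrative assumptions.

SYSTEMIC_RISK_FLOPS = 1e25

def self_assessed_tier(reported_training_flops: float) -> str:
    """Classify a model by the training compute its developer reports."""
    if reported_training_flops >= SYSTEMIC_RISK_FLOPS:
        return "systemic risk: stringent obligations before market entry"
    return "standard: baseline transparency obligations"

# Because regulators cannot independently measure training compute, a
# developer reporting 9e24 FLOPs lands in the lighter tier.
print(self_assessed_tier(9e24))  # standard tier
print(self_assessed_tier(3e25))  # systemic-risk tier
```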

“The AI Act presents a very horizontal regulation, one that tries to tackle AI as a technology for all kinds of sectors, and then introduces what is often called a risk-based approach where it defines certain risk levels and adjusts the regulatory requirements depending on those levels,” Dr. Johann Laux told PYMNTS in August as part of “The Grey Matter” series presented by AI-ID.

As for what’s next: The bill’s text still needs to go through the final stages of compromise and revision before it can be approved and officially enter into law.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.