
EU’s AI Act: Premature or Prescient?


Is it possible to achieve a perfect balance between regulation and innovation over disruptive technologies?

The European Union’s (EU) 27 member states appear to think so. The bloc unanimously endorsed the final text of its Artificial Intelligence (AI) Act on Friday (Feb. 2), moving past concerns from nations including France, Germany and Italy that the act could hamstring innovation and set EU-based companies even further back in the AI arms race relative to their global peers. 

After all, when it comes to choosing an AI system, businesses and organizations around the world have a variety of options — only, those options are mostly derived from AI systems developed by U.S. businesses.

The EU, for its part, tends to export only regulations. 

“All 27 Member States endorsed the political agreement reached in December — recognising the perfect balance found by the negotiators between innovation & safety,” Thierry Breton, the current Commissioner for Internal Market of the European Union, wrote in a post on X, formerly Twitter. 


The Computer & Communications Industry Association (CCIA), a prominent tech lobbying group whose members include Google parent Alphabet, Amazon, Apple and Meta, struck a more cautious tone. 

“Despite efforts to improve the final text, after ‘victory’ was prematurely declared back in December, many of the new AI rules remain unclear and could slow down the development and roll-out of innovative AI applications in Europe,” CCIA Europe’s Senior Policy Manager Boniface de Champris said in a statement.

“The Act’s proper implementation will therefore be crucial to ensuring that AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market,” added de Champris. 

Read more: How to Think About AI Regulation

Will the AI Act Preclude the Rise of EU-Based AI Champions?

AI systems have been projected to transform every industry, heralding a shift similar to — if not greater than — the impact that mobile and cloud computing had during the first two decades of the 21st century. 

The AI Act seeks to establish a global standard for AI technology, despite the innovation’s newness and relative lack of market history to work from. The act is expected to be formally adopted after policymaker committees’ approval on Feb. 13 and a European Parliament vote either in March or April. 

The act takes a risk-based approach to regulating AI applications. It will apply to every AI company that provides services to the EU, as well as to users of AI systems located within the EU, but not to EU providers whose systems are used exclusively outside the bloc. 

France and Germany, which are home respectively to high-flying AI startups Mistral.ai and Aleph Alpha, had earlier expressed fears about the act’s impact on their domestic AI industries. 

“We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea,” said French President Emmanuel Macron, according to a report by the Financial Times, when the AI Act’s December 2023 text was announced. 

Echoing that sentiment, Avi Goldfarb, Rotman chair in AI and healthcare and a professor of marketing at the Rotman School of Management, University of Toronto, told PYMNTS in an interview also posted in December that, “[AI] is the most likely general-purpose technology to lead to massive productivity growth … The important thing to remember in all discussions around AI is that when we slow it down, we slow down the benefits of it, too.”

Per the act’s final text, providers of free and open-source models are exempted from most of its obligations, but the exemption does not cover obligations for providers of general-purpose AI models designated as having systemic risks.

Read more: Is the EU’s AI Act Historic or Prehistoric?

Unpacking the AI Act’s Finalized Text 

While the pending AI Act is likely to enter into force by this summer and be widely applied in 2026, parts of the act will be enacted earlier, and AI companies hoping to do business in the lucrative EU market will need to ensure that they are in compliance with its guidelines. 

“It’s an interesting Rorschach to figure out, you know, what is important to the EU versus what is important to the United States,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS in June. “If you look at all rules that come out of the EU, generally they tend to be very consumer privacy-oriented and less fixated on how this is going to be used in commerce.”

The EU’s rules, which were initially proposed back in 2021, establish four risk levels for foundation models and frontier AI systems, and apply tiered restrictions around their market deployment. The rules also require companies to document the content of the data used to train their AI systems, and mandate the watermarking of AI-generated media for transparency.

The AI Act also bans outright several uses of AI, including social scoring based on social behavior or personal characteristics; biometric categorization systems using sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage; emotion recognition in the workplace and educational institutions; and other manipulative and deceptive use cases of AI technologies.

However, the AI Act does not apply to AI systems that have been developed exclusively for national security and defense purposes, and AI systems not posing significant risks to the health, safety or fundamental rights of natural persons will not be considered high-risk if they fulfill specific criteria.

One of many potential hiccups for companies is that Section 1(C) of the AI Act requires general-purpose AI model providers to put in place policies that “respect EU copyright law.”

This could make it challenging for EU startups to access the training data necessary to build AI systems able to compete with those already deployed across the U.S., but it has been praised by the Federation of European Publishers.

There is still more work ahead to ensure that the AI Act is implemented well. After all, regulation should be the guardian angel, not the grim reaper, of innovation’s potential — and the evidence will be in its enactment.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.