EU Watchdog Mulls Regulation of AI-Cybersecurity Firms

EU, AI Act, artificial intelligence, regulations

In what’s been called the fourth industrial revolution, artificial intelligence (AI) is radically transforming global economies at a pace that has regulators scrambling to keep up.

In the European Union (EU), the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (“the AI Act”) is the most comprehensive piece of legislation to date. While the AI Act will have far-reaching implications for many sectors, the most affected will be those applications of AI that the EU deems high risk.

See also: EU Parliament Discusses AI Act With Agreement in Sight

As stated in the proposal, the Act “imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. For other, non-high-risk AI systems, only very limited transparency obligations are imposed.”

Legislators' primary concern with high-risk systems is the need for robust security measures, given the growing threat that cyberattacks pose to the EU from both an economic and a military-intelligence standpoint.

As is so often the case with AI in financial services, the technology is both the problem and the solution. As fraudsters and hackers deploy an ever more sophisticated array of AI-powered trojans, ransomware and DDoS attacks, equally sophisticated cybersecurity and fraud prevention tools use the same technology to protect consumers.

Read more: AI Could Help FIs Fight Crime, Avoid Regulators 

Much of the deployment of AI in the financial services sector happens in the background, where it is mobilized by anti-money laundering and anti-fraud departments at banks and other financial institutions to monitor and block suspicious transactions. But at the front end of Europe’s payment systems, AI is also changing the way consumers verify their identity.

Related: 5 EU Startups Making Waves in the AML Technology Space

Thanks to biometrics and behavioral analytics, banks and payment service providers are increasingly able to authenticate users without the need for passwords, SMS messages or card-based verification methods.

In a recent PYMNTS report, Micheal Sheehy, chief compliance officer at Payoneer, discussed the importance of biometrics in combating money laundering and identity-based fraud.

He said that due to data breaches, “We can expect that most individuals’ traditional, personal identification information can be obtained somewhere on the dark web.” As a result of this, “Biometric information […] becomes the best way to ensure that the person you’re dealing with is that person.”

Read the report: Cross-Border Commerce Futures: How AI And Biometrics Are Transforming Global Risk Management

Significant for the EU’s financial institutions is that the application of AI in biometric identification is classified as high risk and will therefore be subject to the AI Act’s enhanced reporting and transparency obligations.

Although much of the Act's treatment of biometrics concerns facial recognition in public places, in a short passage (Article 80) addressing the use of AI by financial institutions, the proposal essentially leaves it to the European Central Bank to determine how best to interpret the relevant laws and regulations where they overlap.

Learn more: How Face ID Can Power End-To-End Verification

Connecting the Dots

The growing importance of cybersecurity for the EU’s defense and stability means that the AI Act emerges as part of a regulatory architecture that includes the Data Act, the second Network and Information Security Directive (NIS2), the Digital Services Package and the Cyber Resilience Act.

See also: EU Cyber Resilience Act May Set New Global Standards

Together, these pieces of legislation, which are at various stages of negotiation and ratification, are intended to streamline and clarify the EU's approach to digital technologies, including AI. But in solving some of the challenges the bloc currently faces, the emerging regulatory framework also poses new ones.

A recent report on the "AI-Cybersecurity Nexus" by Brussels-based think tank Carnegie Europe argues that to strengthen its security on all fronts, the EU needs to further integrate the various laws currently being rolled out and the different agencies responsible for implementing and enforcing cybersecurity standards.

As the report states, “The EU is pursuing the twin goals of establishing a robust cybersecurity architecture across the bloc and harnessing the benefits of AI for broader societal and economic (cyber) security and defense purposes. Yet, if the goal is to ensure the cybersecure rollout of AI systems and services […] connecting the dots between various initiatives, processes, and stakeholders is paramount.”

For all PYMNTS EMEA coverage, subscribe to the daily EMEA Newsletter.