Face-Scanning Technology Becomes Focus of EU’s AI Act Negotiations 

European Union (EU) negotiators are reportedly engaged in discussions to establish the most comprehensive regulation of artificial intelligence (AI) in the Western world. 

After a marathon session on the Artificial Intelligence Act that lasted nearly 24 hours, teams from the European Parliament and 27 member countries reconvened on Friday to address the regulation of AI technology, Bloomberg reported Friday (Dec. 8). 

While they reached an agreement on additional rules for general-purpose AI models, such as OpenAI’s ChatGPT, they remain divided on the use of live face-scanning technology by EU governments, according to the report. 

The debate surrounding the use of live face-scanning technology has been a sensitive and divisive topic, the report said. While the European Parliament previously voted for a complete ban on this technology, many EU countries have advocated for its use in law enforcement and national security efforts. 

Negotiators made substantial progress during the overnight session, but due to fatigue, the discussions were paused until Friday, per the report. Talks resumed then with the parliament presenting a list of demands regarding facial scanning to the council, which then made a counteroffer. 

The focus of the debate revolves around establishing rules for when law enforcement can scan faces in a crowd, such as to detect human trafficking or prevent terrorist attacks, according to the report. 

The proposed use of biometric data, including facial scanning, has faced criticism from external groups, the report said. Some argue against allowing predictive policing through this technology, labeling it “pseudo-scientific” and “disgustingly racist.” 

The EU, like the United States and the United Kingdom, has been grappling with finding a balance between nurturing its own AI startups and safeguarding against potential societal risks, according to the report. 

EU policymakers have agreed to impose transparency requirements on developers of AI models like ChatGPT, the report said. Companies whose models pose systemic risks will be required to sign a voluntary code of conduct and collaborate with the European Commission to mitigate those risks. This approach resembles the EU’s content moderation rules in the Digital Services Act. 

However, critics argue that these codes of conduct amount to self-regulation and may not be sufficient to ensure the safe development of AI technology, per the report. Some also express concern that the rules could burden the EU’s own AI companies, potentially giving non-EU providers a competitive advantage. 

The EU’s progress in developing AI regulation has given it a seat at the AI table, PYMNTS reported Wednesday (Dec. 6). The European Parliament passed the comprehensive Artificial Intelligence Act in June, and the EU Council, Parliament and the Commission are negotiating its final terms. 
