EU Proposes Restrictive New AI Regulations

Medical AI

When Microsoft spends $19.7 billion on a company whose specialties include voice recognition and artificial intelligence (AI) as part of its health sector strategy, you know that AI in the medical field is here to stay. It only makes sense, then, that regulations governing the technology would not be far behind. Thanks to a leaked document first reported by Politico, we now have a first look at what such regulations might look like in the European Union.

The draft regulation largely concerns “high-risk” uses of AI. That’s not surprising, as the European Commission originally published a white paper in February 2020 outlining ideas for regulating such uses of the technology. The draft also lays out rules regarding which applications of AI should be banned outright, such as cases in which a person is being blatantly manipulated.

“First, certain artificial intelligence-empowered practices have significant potential to manipulate natural persons, including through the automated adaptation of misleading user interfaces, and to exploit a person’s vulnerabilities and special circumstances,” states the regulation. “Manipulative artificial intelligence practices should be prohibited when they cause a person to behave, form an opinion or take a decision to their detriment that they would not have taken otherwise.”

The document also bans AI in cases where it is being used to surveil citizens. “The methods of surveillance could include monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources,” says the regulation. Interestingly, the document goes on to allow an exception for such AI-powered surveillance when it is carried out “by public authorities or on their behalf for the purpose of safeguarding public security and subject to appropriate safeguards of the rights and freedoms of third parties.”

A final ban is proposed for algorithmic social scoring. According to the draft, “algorithmic social scoring of natural persons should not be allowed if not carried out for a specific legitimate purpose of evaluation and classification, but in a generalized manner when the general-purpose score is based on persons’ behaviour in multiple contexts and/or personality characteristics and leads to detrimental treatment of persons, which is either not related to the contexts in which the data was originally generated or collected, or disproportionate to the gravity of the behaviour.”

This rule is aimed at preventing discrimination by AI, a problem that is already starting to emerge in medical applications. “Detrimental treatment could occur, for instance, by taking decisions that can adversely affect and restrict the fundamental rights and freedoms of natural persons, including in the digital environment,” says the regulation.

High Risk

While “high-risk” is never quite defined in the text, the document does say that regulations are necessary to keep high-risk AI systems from posing “unacceptable risks to the protection of safety, fundamental rights or broader Union values and public interests.”

It also makes clear that the high-risk element may not be an entire device or application, but simply the AI-powered portion of it, such as a safety system inside machinery or toys. In terms of standalone systems, the regulation details several areas that it says should be considered high-risk. These include: systems used to dispatch emergency first-response services; those used to determine access to educational and vocational training institutions; those used to recruit workers, determine their workloads and evaluate them; those used to determine creditworthiness; and those used by authorities to evaluate applications for asylum or visas.

The 81-page proposal goes on to detail the intricacies of various dangers from AI, and just how those dangers should be mitigated. In some cases, the draft recommends self-reporting of high-risk applications; in others, it proposes a more robust, active response on the part of regulators. It also recommends that enforcement of the regulations, should they be adopted, be handled by EU member states. At the heart of it all, though, is an effort to build EU citizens’ trust in the technology.

“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being,” states the proposal. “Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights.”