Global AI Regulation Could Come in 2022 With Broad Principles

Artificial intelligence (AI) won’t be easy to regulate and there will be a trial-and-error phase in most countries that attempt it. Still, some regulatory proposals with basic principles can be achieved in 2022 in the U.S. and Europe. 

Drexel University Chief Information Security Officer Pablo Molina told PYMNTS in a recent interview that despite being pessimistic about the current U.S. regulatory framework for privacy and AI, there may be advances in 2022 that result in some basic regulation at the federal level. 

Self-Regulation for Now 

According to Molina, as companies continue investing in and developing AI without a legal framework, most big technology companies rely on self-regulation and adherence to best practices. The lack of federal regulation for either privacy or AI isn't necessarily bad for innovation, as companies have more freedom to explore and try new AI features without the constant pressure of privacy rules. 

See also: FTC Mulls New Artificial Intelligence Regulation to Protect Consumers

Additionally, Molina said that “part of the reasons why the companies are doing this is precisely because they realize that if they succeed at self-regulation, which they never do in the long run, but at the very least they can push back any regulation and explore the business and technical limits of artificial intelligence first.”  

Europe Regulates Faster 

Despite not being at the forefront of AI development, Europe has very strong privacy rules, which, Molina said, allowed the bloc to be "a model for many other countries. Much of the new privacy regulation proposed elsewhere, including in states like California, is inspired by the General Data Protection Regulation in Europe."

Read more: EU Proposes Restrictive New AI Regulations

Europe is also proposing new AI rules with the Artificial Intelligence Act, which seeks to harmonize the different legal frameworks across the 27 member states. While the proposed regulation is, for the moment, a set of principles, it already sets out which kinds of artificial intelligence will face more regulatory scrutiny and which data is more likely to face bans and restrictions (e.g., biometric data). 

Top AI Policy Initiatives 

Molina highlighted four policy initiatives for AI that are worth watching. 

1) European Union Artificial Intelligence Act. Molina said, "We've been meeting with regulators and representatives from the European Union. We know that they're doing really great work on this front, work that can be fundamental. That could be a good baseline for many other countries and many other efforts." 

2) UNESCO. “They have recommended practices on artificial intelligence, how those are developed, how those are followed,” he said. 

3) OECD. “They’ve been working at this for a long time. Some people claim that the OECD may have a very unique agenda precisely for economic development, but certainly have done fantastic work in the past with privacy guidelines, and the artificial intelligence principles are going to be well followed by other legislators and regulators.” 

4) Council of Europe Artificial Intelligence Convention. "Very interesting framework to look at," he said. 

Main Area of Concern (Transparency) 

One of the main areas of concern for regulators is how to balance the access to data that algorithms need to produce better results with individuals' right to privacy.  

Molina said that fair and unbiased AI requires transparency across all levels of the AI supply chain. This means transparency in the datasets used to produce results, because if those datasets lack sufficient variety, the algorithm may render biased results. Transparency in the algorithms themselves is equally important, although Molina anticipated significant concerns in this area, since these algorithms and datasets are, in many instances, commercial secrets. 

As these problems of access to data and algorithms won't be easy to solve, Molina suggested that regulators may take a less ambitious approach and start by regulating a baseline, such as abuses involving facial recognition and other biometric data, and then build the scaffolding for more sophisticated regulation. 

For the U.S., this means some generic regulations, basic principles, and specific rules for applications that may involve the public sector.