OECD’s Principles Can Guide Governments to Design AI Regulatory Frameworks 

Artificial intelligence (AI) is a driving force of innovation due to its rapidly evolving technology, strong cross-domain connectivity, and the growing number of industry applications, Luis Aranda, AI policy analyst at the Organisation for Economic Co-operation and Development (OECD), told PYMNTS in an interview.

Aranda is part of the team that produced the OECD AI Principles, the first set of AI principles signed on to by governments. They include concrete recommendations for public policy and strategy, and their general scope ensures they can be applied to AI developments around the world.

“They promote the use of AI that is innovative and trustworthy and respects human rights and democratic values,” Aranda told PYMNTS. “The goal is to become a standard that is practical and flexible enough to stand the test of time.”

Research began at the Paris-based organization in 2016. At the time, OECD Secretary-General Angel Gurría said that while AI is revolutionizing the way we live and work and offers extraordinary benefits for societies and economies, it also raises red flags and has fueled anxieties and ethical concerns.

As a result, Aranda said the onus is on governments to ensure that AI systems are designed in a way that respects a nation’s values and laws, so people can trust that their safety and privacy will be paramount.

Three years after the research team began collecting data and AI policies from around the world, the principles were adopted. They comprise five values-based principles for the responsible deployment of AI and five recommendations for public policy and international cooperation.

Simply put, the concept is to guide governments, companies, and consumers in developing and operating AI systems that put people’s best interests first and ensure that designers and operators are held accountable for their proper functioning.

Aranda acknowledged that the guide contains suggestions that can be encouraged but not enforced.

“We don’t have an OECD police in charge of enforcing their implementation,” he said. “What they really represent is a common aspiration among adhering countries.”

Given that the recommendations are not binding, Aranda said OECD is lending a hand to help countries implement them.

“We would like all the countries to adopt them because we believe that they provide a common denominator for a unifying foundation of principles,” he said.

Research and Policy

Aranda, who holds a master’s degree in applied math and a PhD in economics, likes evidence. OECD has gathered a database of national AI policies and strategies, he said.

The database includes more than 700 AI policies from 60 countries and the European Union. It unifies the AI policy environment into a single entry point and informs OECD’s work on AI matters, he added.

“If anyone’s interested in comparing countries on AI policy, that’s where we think they should go,” he said.

In addition, OECD has convened four working groups and a network of 250 AI experts from around the world.

In terms of AI risk, Aranda said any threats to human rights, democratic values, the environment, privacy, fairness, transparency, and safety and security should be identified and mitigated.

“There are different ways to do this,” he said. “We’re developing … a catalog of tools for trustworthy AI and they aim to help AI actors ensure that their systems are trustworthy.”

While regulation is top-down, OECD’s tools provide a bottom-up approach, he said.

“We need both approaches, we need top-down and bottom-up because it’s a whole society effort,” he said. “These tools provide a means for companies or AI actors to work on their systems and ensure that the principles are implemented.”

A Fine Balance

Still, the idea of transparency is chilling for any business if it means making source code public, providing access to data and submitting to audits.

But when it comes to transparency, Aranda said, OECD is talking about something as simple as disclosing where AI is being used.

“Transparency can also mean enabling people to understand how AI systems are developed, trained, and how they operate,” he said. “It’s a fine balance that we need to reach.”

The next decade is expected to bring AI accountability. Aranda said there will be improvements in how AI systems work, in understanding what the risks are and in how those risks are mitigated.

Aranda also said he is convinced that this year will see an increase in global AI policy meetings as the use of the technology extends beyond determining what item a customer wants on the McDonald’s menu.

“We’re starting to see the intersection of AI trying to solve global challenges,” Aranda said.