When it comes to artificial intelligence (AI), Europe and Japan are reportedly of like minds.
“I see a lot of convergence in how we look at AI and generative AI,” European Commission Vice-President for Values and Transparency Vera Jourová said in an interview with Reuters published late Sunday (Oct. 8).
The report notes that the European Union (EU) and Japan have taken different paths in regulating AI: the EU's AI Act imposes strict rules on the industry, while Japan is weighing a more relaxed approach.
Still, Japan and the EU are increasingly cooperating in areas such as AI, cybersecurity and semiconductors that are seen as essential to economic security, Reuters added.
“I was recently in China and it’s a totally different thing. I could discuss with our Japanese partners because we do not have to explain to each other basic, basic things,” said Jourová.
As PYMNTS has written, Europe’s approach to regulating AI stands in contrast to the more hands-off strategy seen so far in the United States and the United Kingdom.
Companies such as Meta, Google and Microsoft have lobbied actively during the drafting of the AI Act, with many executives arguing the regulations could “stifle innovation.”
“The AI Act presents a very horizontal regulation, one that tries to tackle AI as a technology for all kinds of sectors, and then introduces what is often called a risk-based approach where it defines certain risk levels and adjusts the regulatory requirements depending on those levels,” Dr. Johann Laux told PYMNTS as part of “The Grey Matter” series presented by AI-ID.
Laux argued that the European approach is based on a historical model for regulating products that stems from the industrial era, one that “may or may not go well” in the digital age.
Jourová’s comments to Reuters come days after a separate interview with the Financial Times in which she said that paranoia in AI regulation could hinder innovation.
“There should not be paranoia in assessing the risks of AI,” said Jourová, one of two commissioners overseeing the launch of the EU’s AI Act. “It always has to be a solid analysis of the possible risks.”
She added: “We should not mark as high-risk things which do not seem to be high-risk at the moment. There should be a dynamic process where, when we see technologies being used in a risky way, we are able to add them to the list of high risk later on.”