France, Italy and Germany Reach Accord on AI Regulations

Three European countries have reportedly agreed on how artificial intelligence (AI) should be regulated.


Germany, France and Italy have come to an accord on the regulations, Reuters reported Saturday (Nov. 18), a move that should speed negotiations on the matter.

The report, citing a joint paper by the three nations, said their governments are in favor of voluntary-yet-binding commitments on AI providers in the European Union.

“We have developed a proposal that can ensure a balance between both objectives in a technological and legal terrain that has not yet been defined,” Germany’s State Secretary for Economic Affairs Franziska Brantner told Reuters.

As the report noted, the European Parliament in June unveiled the “AI Act,” designed to contain the potential risks of AI while still benefiting from its uses. During the negotiations, lawmakers proposed a code of conduct that applied only to major AI providers, most of which are based in the U.S.

However, the governments of France, Germany and Italy have said this move, while potentially advantageous for smaller European companies, could end up lowering trust in those firms and costing them customers.


Therefore, the paper called for conduct and transparency rules that are binding for everyone, with no sanctions imposed, at least not initially. Sanctions could be issued in the future if violations are identified after a certain amount of time, the report added.

Speaking with PYMNTS earlier this month, AI-ID Founder and CEO Shaunt Sarkissian called the AI Act “a little myopic,” noting that it was “blurred” by Europe’s General Data Protection Regulation (GDPR).

“It’s taken the lessons of GDPR and said, we’re going to limit informing the models,” Sarkissian said. “We’re going to put everybody in control of how data is used and shared, as opposed to the commercial effects of that data.”

He added that he was encouraged by what he’d seen with the White House’s recent AI executive order, especially its focus on critical standards and safety guidelines, and its commitment to identifying AI-generated content.

Sarkissian told PYMNTS’ Karen Webster that the key to balancing regulation and innovation in the AI sector is recognizing that while the technology is still evolving, use cases often fit within regulatory frameworks that are already in place.

“Any idea that regulation is going to be globally ubiquitous is a fool’s errand,” Sarkissian said.