
EU and UK AI Hiring Laws Put US Employers at Legal Risk, Law Firm Warns 

April 2, 2026

American companies that use artificial intelligence to screen job applicants are facing a fast-changing legal landscape in Europe. The rules on both sides of the English Channel are tightening, and the penalties for getting it wrong could be severe. US employers with operations in Europe can no longer treat AI-powered hiring software as a simple plug-in product. Understanding what the law now requires is not optional.


That is the central warning from employment law firm Fisher Phillips, which has laid out what US companies need to know about new AI hiring rules in the European Union and the United Kingdom. The firm’s analysis comes as regulators in both jurisdictions have moved aggressively to address what they see as a growing risk: that automated hiring tools can discriminate against job seekers without anyone noticing.

In the European Union, the rules are sweeping. Under the EU’s Artificial Intelligence Act, most AI tools used in hiring are classified as “high-risk.” That includes software that screens resumes, ranks candidates, or evaluates performance. The classification triggers a long list of requirements. Companies must document how their systems work, test them for bias, and ensure that a real human being is genuinely involved in final decisions rather than rubber-stamping whatever the algorithm recommends.


Fisher Phillips flags a key 2023 European Court of Justice ruling, known as the SCHUFA decision, as a significant warning for employers. The court found that generating an automated score can itself count as an automated decision if others rely heavily on that score. For hiring, this means an AI-generated candidate ranking could be treated as a binding automated decision under EU law, even when a human manager technically makes the final call.


Across the Channel, the UK has chosen not to copy the EU’s framework. Instead, it is updating existing privacy law. The Data (Use and Access) Act 2025, which is being phased in now, amends the UK’s existing data protection rules rather than replacing them. It targets what the law calls “significant decisions” made solely by automated means, a category that clearly covers AI-driven hiring.

The UK’s data regulator, the Information Commissioner’s Office, has already signaled it is watching closely. It has raised concerns about hiring tools that may put protected groups at a disadvantage and has said it expects companies to test for bias, explain their systems, and involve humans in decisions.

“For US employers, this means AI-based recruiting tools cannot be treated as opaque vendor products and applied in all subsidiaries without adaptation,” states the Fisher Phillips post. “Your internal teams will need to understand how AI models were trained and how fairness is monitored over time as well as the applicable regulation and constraints in each jurisdiction.”

Fisher Phillips outlines four concrete steps that US employers should take now. First, map every AI tool used in hiring and figure out which legal category it falls into. Second, require vendors to provide bias testing documentation and conduct your own testing on top of that. Third, make sure human reviewers can genuinely override AI outputs, not just approve them on autopilot. Fourth, comply with local notification and transparency requirements in each country, including rules in Germany, France, Spain, Italy, Austria, and the Netherlands that require employers to consult workers’ councils before rolling out AI-based HR tools.

The pressure is only going to grow. The UK’s new data law is still being phased in, with more provisions set to take effect in the coming months. The EU AI Act’s obligations for high-risk systems are also rolling out in stages through 2026 and into 2027. Regulators have made clear that enforcement is coming. For US employers, the time to get ahead of these rules is now.