
AI Regulation Pushes New Demands Into Tech Contracting

October 21, 2025

As artificial intelligence evolves at breakneck speed, governments are racing to catch up. In the absence of comprehensive global alignment, technology contracting has become one of the front lines where businesses must translate uncertain and fragmented AI regulation into practical compliance measures. Legal experts warn that companies must reassess their contracting practices to anticipate obligations emerging from the United States and the European Union, according to a client insights advisory by attorneys at Mayer Brown.


    Colorado’s Anti-Discrimination in AI Law is the country’s most comprehensive measure, imposing distinct obligations on developers and deployers of AI systems. It focuses on high-risk applications in sectors such as employment, education, finance, healthcare, and housing.

    Developers must demonstrate responsible system design, transparent documentation, and rigorous testing for accuracy, robustness, and discrimination. Deployers are expected to maintain human oversight, conduct AI impact assessments, and comply with transparency and data governance requirements.

    California, Utah, and New Jersey require companies to disclose when users interact with AI or consume AI-generated content, as well as details about training data. New York and Illinois have enacted AI laws targeting employment discrimination, while Texas has prohibited certain AI practices such as manipulative behavioral design or the generation of explicit content. The laws’ focus on bias prevention and transparency is increasingly being mirrored in contract language.

    In practice, contracts for high-risk AI use cases should allocate clear obligations between developers and deployers, per Mayer Brown. Developers may be required to follow responsible development practices and comply with frameworks such as the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) or similar International Organization for Standardization (ISO) specifications. Deployers, in turn, should commit to responsible use, risk mitigation, and consumer disclosure. Mutual indemnities and cooperation clauses can help manage liability if one party’s actions trigger regulatory exposure for the other.


    The European Union’s Artificial Intelligence Act is the most comprehensive international regulatory framework. The Act imposes tiered obligations based on risk categories ranging from “minimal” to “high.” High-risk systems face the strictest requirements, including conformity assessments, data quality obligations, and ongoing monitoring. General-purpose AI models also fall under special oversight due to their broad potential impact.

    Contracting under the EU AI Act demands careful delineation of roles, Mayer Brown warns. Providers must ensure their AI systems comply before market placement, while deployers must use them consistently with the law. Agreements should specify the intended risk classification and restrict unauthorized modification, which could change a deployer’s status to a “provider” and expand their liability. Where modifications are permitted, providers may require cooperation clauses or indemnities to ensure continued compliance.

    Across jurisdictions, businesses face converging themes: accountability for bias and discrimination, transparency in AI operations and outputs, and diligence in data and content governance. The result is a contracting environment where regulatory foresight is as critical as technological innovation. By embedding compliance obligations, warranties, and risk-sharing mechanisms into their agreements, companies can stay agile in an era where AI law continues to take shape, and where contracts may be the first, and most important, line of defense.