
January 2026 Brings a New Phase of AI Rules Across the United States, Europe, and China

February 3, 2026

As 2026 begins, governments in the United States, the European Union, and China are rolling out or refining policies that will reshape how artificial intelligence is developed and used, creating what many companies now see as a far more demanding global regulatory climate. According to recent policy briefings and media coverage, firms that rely on AI for decisions in areas such as lending, housing, healthcare, and employment are entering a period of heightened legal and operational risk.

In the United States, the most immediate pressure is coming from the states rather than Washington. According to regulators and industry observers, lawmakers are focusing on what they call “high-risk” or “consequential” uses of AI, meaning systems that can significantly affect people’s lives. California is leading this effort through new rules tied to the California Consumer Privacy Act. Those rules require businesses that use automated decision-making technology, or ADMT, to give consumers advance notice, allow them to opt out, and explain how those systems are used. Although enforcement does not begin until January 1, 2027, companies are already being urged to prepare.

Colorado is on a similar track. According to media reports, the Colorado AI Act is scheduled to take effect on June 30, 2026, and would require AI developers and deployers to take reasonable steps to prevent algorithmic discrimination, maintain formal risk-management programs, issue notices, and conduct impact assessments. However, lawmakers have signaled that the statute is expected to be debated during the current legislative session, meaning its final form could still change before it comes into force.

State attorneys general are also becoming more aggressive. According to enforcement officials, scrutiny of AI-related practices increased sharply in 2025 and is expected to remain intense this year. In Pennsylvania, a settlement was announced last May with a property management company accused of using an AI system in ways that contributed to unsafe housing and delayed repairs. In Massachusetts, the attorney general’s office said it reached a $2.5 million settlement in July 2025 with a student loan company over claims that its AI-driven lending practices unfairly disadvantaged historically marginalized borrowers.

Cybersecurity has emerged as another major front. According to U.S. regulators, AI-powered tools are now being used both by companies and by criminals, raising the stakes for data protection and operational resilience. The Securities and Exchange Commission’s Division of Examinations has said cybersecurity and operational resiliency, including AI-driven threats to data integrity and risks from third-party vendors, will be a priority in fiscal year 2026. According to the SEC’s Investor Advisory Committee, companies may also face new expectations around how boards disclose their oversight of AI governance as part of managing material cyber risks.

Across the Atlantic, the European Union is grappling with how to put its landmark AI Act into practice. The European Commission missed a February 2 deadline to release guidance on Article 6 of the law, which determines whether an AI system is considered “high-risk” and therefore subject to tougher compliance and documentation rules. According to a statement reported by Euractiv, the Commission is still integrating months of feedback and had planned to release a new draft of the high-risk guidelines for further consultation by the end of January, with final adoption possibly coming in March or April.

This uncertainty has fueled debate over whether parts of the AI Act should be delayed. Enforcers and companies have been warning that they are not ready to implement the most complex provisions, even though the law entered into force two years ago. That argument underpins the Commission’s proposed Digital Omnibus package on AI, which would narrow what counts as a high-risk use and delay those obligations by up to 16 months.

During a January 26 hearing of the European Parliament’s civil liberties committee, European Commission Deputy Director-General Renate Nikolay explained why more time is needed, saying, “These standards are not ready, and that’s why we allowed ourselves in the AI omnibus to give us a bit more time to work on either guidelines or specification or standards, so that we can provide this legal certainty for the sector, for the innovators, so that we have the full system in place.” According to EU officials, high-risk compliance requirements are still formally due to begin in August, even as the debate over timing continues.

In China, the focus is less on delays and more on balancing speed with control. In late January, President Xi Jinping addressed senior Communist Party officials and portrayed artificial intelligence as a transformative force on the scale of the steam engine, electricity, and the internet. According to state media, he warned that China must not let the technology “spiral out of control” and urged leaders to act early and decisively to prevent problems. According to accounts of the same meeting, the government wants AI to drive economic growth while also preserving social stability and the party’s authority.

That dual mandate is already shaping the private sector. Chinese AI companies are being pushed to innovate quickly while also complying with an expanding web of rules. When Zhipu AI, a fast-growing developer of large language models and the ChatGLM chatbot, filed for a Hong Kong listing in December, it cautioned investors in its filing about the heavy burden of meeting multiple AI-related regulations. The company was valued at more than $6 billion, underscoring how high the stakes have become.

Taken together, the developments in January 2026 show how fragmented and demanding the global AI rulebook is becoming. In the United States, state-level laws and enforcement actions are setting the pace. In Europe, regulators are still negotiating how to apply a sweeping new framework. And in China, the government is trying to harness AI’s economic power without losing political control.

Source: NY Times