Washington Moves to Set Rules for AI That Acts on Its Own  

February 22, 2026

Artificial intelligence tools are no longer just answering questions. They’re booking meetings, writing code, managing files and making decisions, often without a human in the loop. Agentic AI is moving fast, and Washington is now trying to catch up.

    The federal government’s top technology standards body, the National Institute of Standards and Technology (NIST), announced this week that it is launching what it calls the AI Agent Standards Initiative. The goal, as CSO Online reported Thursday, is to create a roadmap for how agentic AI should be built, secured and trusted, and to ensure the United States leads the way globally.

    The initiative sits within a newly reorganized NIST unit called the Center for AI Standards and Innovation, or CAISI. The center replaced the Biden administration’s US AI Safety Institute last June. Its mandate has since shifted in a notably more competitive direction. According to NIST’s own press release, as cited by CSO, the center aims to foster American leadership in global standards bodies, promote open-source AI development, and advance research into how these tools can be made safer and more reliable.

    The stakes are real. Agentic AI tools are already embedded in corporate workflows, and the risks are mounting. CSO pointed to a 2025 security flaw called “EchoLeak,” in which Microsoft 365 Copilot was exploited to quietly extract sensitive data. It also flagged a tool called OpenClaw — formerly known as Moltbot and Clawdbot — described as a helpful assistant that also creates a hidden entry point for attackers to access a user’s applications and data.

    Beyond outright security breaches, a November report from a major global tech trade group identified a subtler danger: what it called “jagged intelligence.” That’s the tendency of AI models to handle complex tasks with ease while stumbling on simple ones — an unpredictable pattern that could spell serious trouble when these tools are running on autopilot inside a company.

    NIST is asking the public and industry for input. It has issued a formal request for information seeking feedback on agentic AI threats, safeguards and how to measure risk, with a deadline of March 9. But not everyone is impressed by the timeline.

    Gary Phipps, head of customer success at agentic AI security startup Helmet Security, told CSO that NIST’s pace is fundamentally mismatched with how fast the technology is actually moving.

    “From the time NIST announced it was working on the AI Risk Management Framework to the day it published the final version was roughly two years,” Phipps said. “In that same window, the entire generative AI landscape was born, scaled, and began reshaping enterprise security. Now we’re doing it again with agentic AI, and NIST’s answer is more RFIs, more listening sessions, more convening.”

    He was equally blunt about NIST’s stated ambition to cement American dominance in the space: “Standards don’t create dominance: they follow it.”

    So, what comes next? According to CSO’s reporting, CAISI plans to hold a series of “listening sessions” in April focused on sector-specific obstacles to AI adoption. These sessions will come before any concrete guidance or rules are issued. NIST has framed interoperability — the ability of AI agents from different companies to work with one another — as a key priority. Without it, the agency warned, the AI agent market could end up fragmented and stall before it ever reaches its potential.

    For now, the regulatory picture remains thin on specifics. The agency has a framework in progress, a comment period open through early March, and a calendar of listening sessions scheduled for spring. Companies deploying agentic AI tools are largely on their own to manage the risks in the meantime.