Enthusiasm leads to execution, which leads to results. That equation typically signals a technology's integration into the enterprise, and agentic AI is no different. The dynamics observed over the past week in The Prompt Economy show that companies are enthused and starting to execute, building the systems that will produce results.
A new report from Harvard Business Review Analytic Services finds that enthusiasm for agentic AI is running well ahead of organizational readiness. Most executives expect agentic AI to transform their businesses, and many believe it will become standard across their industries. Early adopters are already seeing gains in productivity and decision-making. Yet for most organizations, real-world use remains limited. Only a minority are using agentic AI at scale, according to the report, and many struggle to translate high expectations into consistent business results.
The gap is not about belief in the technology but about preparation. The report shows that data foundations are improving, but governance, workforce skills and clear measures of success lag behind. Few organizations have defined what success looks like or how to manage risk when AI systems act with greater autonomy. Leaders who are making progress tend to focus on practical use cases, invest in workforce readiness and tie agentic AI efforts directly to business strategy. The report concludes that agentic AI can deliver meaningful value, but only for organizations willing to rethink processes, invest in people and put strong guardrails in place before scaling.
“The gap between expectation and reality remains wide,” the report reads. “Organizational readiness can help bridge the gap by giving implementation a better chance of succeeding.”
Singapore Standards
Governance can also be mandated. According to Computer Weekly, Singapore has introduced what it describes as the world’s first formal governance framework designed specifically for agentic AI. Announced by the country’s minister for digital development and information at the World Economic Forum in Davos, the framework is intended to help organizations deploy AI agents that can plan, decide and act with limited human input. Developed by the Infocomm Media Development Authority (IMDA), the framework builds on Singapore’s earlier AI governance efforts but shifts the focus from generative AI to systems that can take real-world actions, such as updating databases or processing payments. The goal is to balance productivity gains with safeguards against new operational and security risks.
The framework lays out practical steps for enterprises, including setting clear limits on how much autonomy AI agents have, defining when human approval is required and monitoring systems throughout their lifecycle. It also highlights risks such as unauthorized actions and automation bias, where people place too much trust in systems that have worked well in the past. Industry leaders welcomed the move, saying clear rules are needed as agentic AI begins to influence decisions with real-world consequences. IMDA has positioned the framework as a living document and is inviting feedback from companies as it continues to refine guidance for testing and oversight.
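To see what those controls might look like in code, here is a minimal Python sketch of an approval-gated action policy. The IMDA framework does not prescribe an implementation; the `AgentPolicy` class, the action names and the logging format below are hypothetical illustrations of its three core controls: autonomy limits, human-approval gates and lifecycle monitoring.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    """Hypothetical policy object capturing the framework's core controls:
    explicit autonomy limits, human-approval gates and decision logging."""
    allowed_actions: set[str]       # hard limit on what the agent may do at all
    approval_required: set[str]     # actions that need a human sign-off first
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, action: str, human_approved: bool = False) -> bool:
        """Check an action against the policy and record the decision."""
        if action not in self.allowed_actions:
            decision, allowed = "denied: outside autonomy limits", False
        elif action in self.approval_required and not human_approved:
            decision, allowed = "held: human approval required", False
        else:
            decision, allowed = "allowed", True
        # Lifecycle monitoring: every decision is logged for later review.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "decision": decision,
        })
        return allowed

# Example: an agent may update records autonomously,
# but payments are gated behind a human approver.
policy = AgentPolicy(
    allowed_actions={"update_record", "process_payment"},
    approval_required={"process_payment"},
)
assert policy.authorize("update_record") is True
assert policy.authorize("process_payment") is False   # held for human sign-off
assert policy.authorize("process_payment", human_approved=True) is True
assert policy.authorize("delete_database") is False   # outside autonomy limits
```

The design choice worth noting is that the default is denial: anything outside the allowed set is blocked, and sensitive actions are held until a person signs off, which maps directly onto the framework's guidance on autonomy limits and human approval.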
Identity Factors
Another report warns that enterprises are racing ahead with agentic AI adoption while falling behind on governance and security. Executives from Accenture and Okta say most companies already use AI agents across everyday business tasks, but very few have put effective oversight in place. According to Okta, while more than nine in ten organizations are using AI agents, only a small fraction believe they have strong governance strategies. Accenture’s research points to the same imbalance, showing widespread use of AI agents without clear plans for managing the risks they introduce.
The core challenge, the report argues, is that AI agents are increasingly acting like digital employees without being managed as such. These agents need access to systems, data, and workflows to be useful, which creates new risks if their identities and permissions are not clearly defined. The authors recommend treating AI agents as formal digital identities, with clear rules around authentication, access, monitoring and lifecycle management. Without this structure, organizations risk creating unmanaged “identity sprawl” that could turn agentic AI from a productivity gain into a major security and compliance problem.
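To illustrate that recommendation, the sketch below models an agent as a first-class identity with an accountable owner, least-privilege scopes and an expiry date. The class, field and scope names are hypothetical; in practice, this record would live in an identity provider such as Okta rather than in application code.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical record treating an agent like a managed digital employee:
    authenticated, scoped and subject to lifecycle rules."""
    agent_id: str
    owner: str                  # accountable human or team behind the agent
    scopes: frozenset[str]      # least-privilege access grants
    expires_at: datetime        # lifecycle: credentials are not permanent

    def can_access(self, resource: str, now: datetime | None = None) -> bool:
        """Deny by default: access requires an unexpired identity
        and an explicitly granted scope."""
        now = now or datetime.now(timezone.utc)
        if now >= self.expires_at:
            return False        # expired identities are re-provisioned, not reused
        return resource in self.scopes

# Example: a procurement agent with a 30-day credential and two scopes.
agent = AgentIdentity(
    agent_id="agent-procurement-01",
    owner="finance-ops",
    scopes=frozenset({"read:invoices", "write:purchase_orders"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
assert agent.can_access("read:invoices")
assert not agent.can_access("delete:vendors")   # never granted, so denied
```

Registering agents this way is what keeps "identity sprawl" in check: every agent has a named owner, an enumerable set of permissions and a credential that lapses unless deliberately renewed.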
“Agents need their own identity,” the report says. “Once you accept that, everything else flows — access control, governance, auditing and compliance.”