Corporate finance has long been among the early adopters of automation. From Lotus 1-2-3 to robotic process automation (RPA), the field has a history of embracing tools that reduce manual workload while maintaining strict governance.
Generative artificial intelligence (AI) increasingly fits neatly into that lineage.
Findings from the July 2025 PYMNTS Intelligence Data Book, “The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution,” reveal that CFOs love generative AI. Nearly 9 in 10 report strong ROI from pilot deployments, and an overwhelming 98% say they’re comfortable using it to inform strategic planning.
Yet when the conversation shifts from copilots and dashboards to fully autonomous “agentic AI” systems (software that can act on instructions, make decisions, and execute workflows without human hand-holding), enthusiasm from the finance function plummets. Just 15% of finance leaders are even considering deployment.
This trust gap is more than a cautious pause. It reveals a deeper tension in corporate DNA: between a legacy architecture designed to mitigate risk and a new generation of systems designed to act. Where generative AI has found traction in summarizing reports or accelerating analysis, agentic AI demands something CFOs are far less ready to give: permission to decide.
Why Agentic AI Feels Different
Generative AI won finance leaders over by making their lives easier without upending the rules. It accelerates analysis, drafts explanations, and surfaces hidden risks. It works inside existing processes and leaves final decisions to people.
That made the ROI for generative AI obvious: faster closes, better forecasts, and teams that can do more with less. It’s the kind of technology finance chiefs have embraced for decades.
Agentic AI is different. These systems don’t just suggest; they act. They can reconcile accounts, process transactions or file compliance reports automatically. That autonomy is exactly what the PYMNTS Intelligence report found rattles finance chiefs. Executives who love generative AI when it writes reports or crunches scenarios slam on the brakes when agentic machines start to move money or approve deals.
Governance is the first worry. Who signs off when a machine moves money? Visibility is another. Once an AI agent logs into a system over encrypted channels, security teams may have no idea what it’s really doing. And accountability is the big one: if an autonomous system makes a mistake in a tax filing, no regulator will accept “the software decided” as an excuse.
Read the report: The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution
The black-box nature of AI doesn’t help. Unlike traditional scripts or rules engines, agentic systems use probabilistic reasoning. They don’t always produce a clear audit trail. For executives whose careers depend on being able to explain every number, that’s a deal breaker.
Legacy infrastructure makes things worse. Finance data is scattered across enterprise software, procurement platforms, and banking portals. To work autonomously, AI would need seamless access to all of them, which means threading through a maze of authentication systems and siloed permissions.
Enterprises already struggle to manage those identities for employees. Extending them to machines that act like employees, only faster and harder to monitor, is a recipe for hesitation.
If autonomous systems are going to move beyond experiments, they’ll need to prove their value in hard numbers. Finance chiefs want to see cycle times shrink, errors fall, and working capital improve. They want audits to be faster, not messier.
The irony is that CFOs don’t need AI to be flawless. They need it to be explainable. In other words, transparency is the killer feature.
Unless agentic AI can show that kind of return, it may stay parked in the “idea” column instead of the project pipeline.