A new report from OpenAI, “The state of enterprise AI 2025,” released Monday (Dec. 8), doesn’t just chronicle growth; it signals a structural pivot where generative AI is transitioning from a supplemental productivity tool to embedded, mission-critical infrastructure within the global financial and digital economy.
Organizations are moving beyond pilot programs to scaled, operational deployment, effectively codifying institutional knowledge into intelligent, persistent assistants.
The average enterprise worker now sends 30% more ChatGPT messages weekly than a year ago, while API reasoning token consumption per organization increased 320-fold, demonstrating both broader adoption and deeper integration across business functions. More than 7 million workplace seats now use ChatGPT Enterprise, up ninefold year-over-year.
Implementation with an Eye on ROI
The adoption curve varies sharply by readiness and resources. Large enterprises employing more than 1,000 workers are twice as likely to build custom GPTs as smaller organizations, reflecting greater technical capacity to standardize AI-driven tasks, the report said. Custom GPT and Project usage rose 19-fold this year, with these tools now processing roughly 20% of all Enterprise messages as firms codify institutional knowledge into persistent assistants.
Return-on-investment metrics suggest the technology delivers measurable value. The share of CFOs reporting very positive ROI from generative AI jumped from 27% to 85%, according to a PYMNTS Intelligence survey. Three-quarters of those executives now deploy AI for cybersecurity management.
According to OpenAI’s report, enterprise workers attribute 40 to 60 minutes of daily time savings to AI use, with data science, engineering and communications roles reporting the highest gains at 60 to 80 minutes per active day. Survey data covering nearly 100 enterprises shows 87% of IT workers report faster issue resolution, 85% of marketing teams cite accelerated campaign execution and 73% of engineers describe shortened code delivery cycles.
Adoption Trends in Enterprise AI
OpenAI’s report said 70% of enterprise AI activity now takes place inside Projects, a configurable workspace that supports multi-step tasks with custom instructions and internal knowledge bases. The shift reflects deeper operational use, with some organizations running AI development at scale. BBVA, for example, maintains more than 4,000 active GPTs, a sign that AI-driven workflows are becoming embedded infrastructure rather than supplemental productivity tools.
PYMNTS data shows the same pattern on the demand side. Workflow optimization is the strongest area of agreement among product leaders, with 98% expecting generative AI to improve internal processes, up sharply from 70% one year earlier.
Technology companies lead API consumption, with usage running five times higher year-over-year as they scale customer-facing applications including in-product assistants and search functions. Non-technology firms also increased API use fivefold, suggesting adoption extends beyond product embedding toward operational deployments in customer service and content generation, which now represent approximately 20% of API activity.
International expansion accelerated over the past six months, with Australia, Brazil, the Netherlands and France posting customer growth exceeding 143% year-over-year. The United Kingdom and Germany rank among the largest ChatGPT Enterprise markets outside the United States by customer count, while Japan leads corporate API customers internationally.
That global expansion comes as the foundation-model market undergoes its sharpest shift in years. According to Menlo Ventures, Anthropic now earns 40% of enterprise large language model (LLM) spend, up from 24% last year and 12% in 2023, overtaking OpenAI as the enterprise leader. OpenAI’s share fell to 27%, down from 50% in 2023, while Google increased its enterprise share from 7% in 2023 to 21% in 2025. Together, these three providers now account for 88% of enterprise LLM API usage, with the remaining 12% spread across Meta’s Llama, Cohere, Mistral and a long tail of smaller models.
The shift in model share aligns with how enterprises deploy AI today. According to Menlo data, 76% of AI use cases are purchased rather than built internally. Despite continued strong investment in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.