PYMNTS.com

#pymntsAI

OpenAI Plans to Offer AI Models’ Enhanced Capabilities to Cyberdefense Workers

OpenAI said it is adding more safeguards to its artificial intelligence (AI) models amid rapid advancements in AI capabilities.

While these advancements bring benefits for cyberdefense, they also bring dual-use risks, meaning the models could be used for malicious purposes as well as defensive ones, the company said in a Wednesday (Dec. 10) blog post.

The post illustrated how quickly model capabilities are advancing: in capture-the-flag challenges, the assessed capabilities of OpenAI’s models improved from 27% with GPT-5 in August to 76% with GPT-5.1-Codex-Max in November.

“We expect that upcoming AI models will continue on this trajectory; in preparation, we are planning and evaluating as though each new model could reach ‘High’ levels of cybersecurity capability, as measured by our Preparedness Framework,” OpenAI said in its post. “By this, we mean models that can either develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex, stealthy enterprise or industrial intrusion operations aimed at real-world effects.”

To help defenders while hindering misuse, OpenAI is strengthening its models for defensive cybersecurity tasks and building tools that let defenders audit code, patch vulnerabilities and perform other workflows, according to the post.

The company is also training models to refuse harmful requests, maintaining system-wide monitoring to detect potentially malicious cyber activity, blocking unsafe activity, and working with red-teaming organizations to evaluate and improve its safety measures, the post said.

In addition, OpenAI is preparing to introduce a program in which it will provide users working on cyberdefense with access to enhanced capabilities in its models, testing an agentic security researcher called Aardvark, and establishing an advisory group called the Frontier Risk Council that will bring together security practitioners and OpenAI teams, per the post.

“Taken together, this is ongoing work, and we expect to keep evolving these programs as we learn what most effectively advances real-world security,” OpenAI said in the post.

PYMNTS reported in November that AI has become both a tool and a target when it comes to cybersecurity.

The PYMNTS Intelligence report “From Spark to Strategy: How Product Leaders Are Using GenAI to Gain a Competitive Edge” found that 77% of chief product officers using generative AI for cybersecurity said it still requires human oversight.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

Enterprise AI Shifts Focus to Inference as Production Deployments Scale

Enterprise artificial intelligence is entering a new phase as companies that spent the past two years experimenting with large language models move those systems into live environments. That transition is shifting investment and engineering resources toward inference infrastructure.

Inference refers to the stage where a trained model processes new data and produces results. When a customer service chatbot answers a query or an AI system analyzes a financial document, that is inference at work. While training creates the model by processing vast datasets to learn patterns, inference applies that learned knowledge to perform specific tasks at scale. As enterprises deploy AI systems that manage thousands or millions of requests daily, inference becomes the dominant operational challenge and cost driver.
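
To make the distinction concrete, the minimal Python sketch below (an illustrative example using scikit-learn, not a tool named in this article) trains a tiny text classifier once and then treats every later call to predict as an inference request.

```python
# Minimal illustration of training vs. inference (illustrative only;
# not tied to any vendor or model mentioned in the article).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Training: happens once (or periodically) over a labeled dataset ---
train_texts = ["refund my order", "reset my password", "cancel subscription"]
train_labels = ["billing", "account", "billing"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# --- Inference: happens on every new request the deployed system receives ---
new_query = "please refund my subscription charge"
print(model.predict([new_query]))  # e.g. ['billing']
```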

This fall, PYMNTS looked at inference and why it now matters more than training for most enterprises. Training a large language model happens once or periodically. Inference happens continuously every time a user interacts with an AI system. A single model might manage millions of inference requests per month, each requiring computational resources, adding latency and incurring costs. For companies running artificial intelligence in customer-facing applications, inference performance directly affects user experience, system reliability and operational expenses.
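
A rough, back-of-envelope estimate shows why that volume matters. All figures below are illustrative assumptions, not pricing from this article or any provider.

```python
# Back-of-envelope inference cost estimate. Every number here is an
# illustrative assumption chosen only to show the shape of the math.
requests_per_month = 5_000_000   # assumed production traffic
tokens_per_request = 1_200       # assumed prompt + response tokens
price_per_1k_tokens = 0.002      # assumed blended $ per 1,000 tokens

monthly_tokens = requests_per_month * tokens_per_request
monthly_cost = monthly_tokens / 1_000 * price_per_1k_tokens
print(f"~{monthly_tokens:,} tokens/month -> ${monthly_cost:,.0f}/month")
# ~6,000,000,000 tokens/month -> $12,000/month
```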

Infrastructure Follows Production Demands

This operational reality is reshaping the enterprise AI infrastructure market. Baseten, a platform focused specifically on inference infrastructure, raised $150 million in Series C funding in January to tackle that issue, bringing its total funding to $216 million.

Baseten addresses core infrastructure challenges that emerge when companies move beyond experimentation. The platform handles model deployment, manages compute resources across different hardware types and optimizes performance for production workloads. It supports models from major providers including OpenAI, Anthropic and open-source alternatives, giving enterprises flexibility in model selection while maintaining consistent operational infrastructure.

The company serves enterprises that need reliable, performant inference at scale. Customers include Fortune 500 companies running AI systems that process high volumes of requests with strict performance requirements.

Input Preprocessing Becomes Critical Component

Baseten recently acquired Parsed, a company that builds technology for structuring and preprocessing inputs before they reach AI models. This acquisition addresses a specific technical challenge in production inference systems. Raw inputs such as unstructured documents, images or complex data formats often need processing before a model can reliably interpret them. Parsed’s technology handles this preprocessing step, extracting relevant information and formatting it appropriately for model consumption.

The Parsed acquisition strengthens Baseten’s inference infrastructure by improving reliability and efficiency. When inputs are properly structured before reaching a model, inference becomes more predictable. Models receive data in consistent formats, reducing errors and improving response quality. This preprocessing also affects performance and cost.
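
As a hypothetical sketch of that idea, the function below normalizes a raw document into one consistent JSON shape before it would be handed to a model. It is a generic illustration of input preprocessing, not Parsed’s or Baseten’s actual technology.

```python
# Hypothetical preprocessing step: normalize raw, messy input into a
# consistent structure before it is sent to a model. Generic sketch only;
# this is not Parsed's or Baseten's API.
import json
import re

def preprocess(raw_document: str) -> str:
    """Collapse whitespace and emit one fixed JSON shape for the model."""
    lines = [ln.strip() for ln in raw_document.splitlines() if ln.strip()]
    body = re.sub(r"\s+", " ", " ".join(lines))
    structured = {
        "text": body,
        "length_chars": len(body),
        "format": "plain_text",
    }
    # The model always receives the same JSON shape, regardless of source format.
    return json.dumps(structured)

print(preprocess("  Invoice #123\n\n  Total due:   $450.00  "))
```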

For enterprises running production AI systems, input quality and consistency matter significantly. A customer service system processing thousands of queries per hour needs reliable inference across varied input types. A financial analysis tool processing regulatory documents needs consistent extraction and structuring before model inference.

As PYMNTS has reported, hyperscalers are also expanding aggressively into inference through custom chips and tightly integrated platforms. AWS promotes Inferentia, Google is pushing TPU v5e, and Microsoft is developing its Maia AI chips, pairing each with proprietary serving frameworks and cloud services. These strategies emphasize end-to-end control, bundling compute, storage and AI tooling into unified platforms designed to keep workloads inside a single cloud ecosystem.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

Disney Calls on Google to Stop Using Its Content in AI Tools

Disney has reportedly sent a cease-and-desist letter to Google, alleging copyright infringement by the tech company via its artificial intelligence tools.

In the letter, Disney alleges that Google has used the entertainment company’s content to train its AI models and distributed copies of its work to consumers, Ars Technica reported Thursday (Dec. 11).

Disney calls on Google to stop using that content in its AI tools and to prevent those tools from generating images of Disney-owned characters, according to the report.

Asked about Disney’s letter by Ars Technica, a Google spokesperson said, per the report: “We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them. More generally, we use public data from the open web to build our AI and have built innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

This report came on the same day Disney announced a $1 billion investment in OpenAI and a three-year licensing agreement that will allow the artificial intelligence firm’s Sora video model to generate short fan-created clips using Disney-owned content.

The agreement allows OpenAI to enable Sora users to generate short clips featuring characters from Disney, Pixar, Marvel and “Star Wars” within a structured environment that limits scenes to approved contexts. It prohibits the use of actor likenesses and restricts Sora prompts that introduce violence, politics or adult themes.

PYMNTS reported Thursday that this is the first time a major studio has formally sanctioned a generative AI platform to use its copyrighted universe.

Google has faced other legal challenges to the use of copyrighted content in its AI tools.

Penske Media, the publisher of Rolling Stone, Billboard and Variety, filed a lawsuit against Google in September, accusing the company of using Penske’s journalism without permission to fuel AI-generated summaries that appear in search results.

In July, the Independent Publishers Association filed an antitrust complaint with the European Commission, alleging that Google’s AI Overviews represent an abuse of the company’s dominant position in online search. The association alleged in its complaint that by positioning these AI-generated summaries, which use publishers’ material, at the top of its search results, Google disadvantages the publishers’ original content.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

White House Signals AI Power Shift to Washington

The long-awaited federal government campaign to quash state-by-state AI regulations is on. President Donald Trump signed an executive order on Dec. 11 directing the federal government to establish a new national approach to artificial intelligence and to push back against state-by-state AI rules the administration says are slowing U.S. innovation. For banks, payments firms and FinTechs leaning on AI for fraud detection, credit decisioning and customer-facing chatbots, the message is straightforward: The White House is seeking one federal playbook, not a patchwork of state requirements.

In the order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” the White House frames AI leadership as a national-security and economic priority and points to Executive Order 14179 from Jan. 23, 2025, as the administration’s earlier step to remove barriers to AI adoption. The order argues that state AI laws create compliance burdens, can push developers toward “ideological bias” in models and can spill across borders in ways that affect interstate commerce.

It cites Colorado’s “algorithmic discrimination” law as an example of a state measure that, in the administration’s view, could pressure models to change outputs to avoid disparate-impact outcomes. As the order puts it: “My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.”

To execute that strategy, the order directs the attorney general to create an “AI Litigation Task Force” within 30 days with the stated mission of challenging state AI laws deemed inconsistent with the administration’s policy, including on constitutional and federal-preemption grounds. Within 90 days, the Commerce Department must publish an evaluation identifying “onerous” state AI laws, including those that require models to alter “truthful outputs” or compel disclosures that could violate constitutional protections.

The order also links state policy choices to funding. Commerce is instructed to set conditions around remaining Broadband Equity, Access and Deployment (BEAD) program funding, limiting certain non-deployment funds for states flagged for onerous AI laws, to the maximum extent allowed by federal law. Separately, it calls on the FCC to consider a federal reporting and disclosure standard that would preempt conflicting state laws, and directs the FTC to clarify when state mandates that alter truthful outputs could be treated as “deceptive” conduct under federal law.

PYMNTS’ recent reporting has tracked the administration’s broader push for fewer constraints on AI development and for federal primacy. In May, PYMNTS covered a House effort to impose a 10-year moratorium on state AI regulation. In July, PYMNTS reported ahead of the White House’s “Winning the AI Race” rollout and the coming executive actions. Later that month, PYMNTS summarized Trump’s AI Action Plan, which emphasized deregulation, AI infrastructure buildout and “free speech” expectations for chatbots. PYMNTS also reported on the administration’s use of AI inside government to identify rules to cut, which is part of the same deregulatory throughline the new order reinforces.
