Big Tech Deepens Its Control Across the AI Stack

The week’s developments from Microsoft, Nvidia, Amazon, Google and OpenAI show how major tech firms are expanding across every layer of the artificial intelligence stack, from infrastructure to applications.

Microsoft Builds In-House Image Model

Microsoft’s MAI-Image-1 is the company’s first image-generation model built entirely in-house. Microsoft has long relied on external models such as OpenAI’s DALL·E for Copilot and Designer, but MAI-Image-1 brings that capability under its own roof. The model already ranks among the top 10 performers on LMArena (an open platform for evaluating AI models) and was trained to generate visuals with greater accuracy, color balance and contextual understanding. By owning the model, Microsoft can tune performance for its software ecosystem and maintain tighter control over safety and content standards. The release places Microsoft alongside Google and Stability AI, both of which already develop proprietary visual systems.

Nvidia Announces Networking and Data Center Updates

Nvidia’s Spectrum-X Ethernet switches target a part of AI infrastructure that rarely makes headlines: the networks that connect thousands of processors inside data centers. When large models are trained, each graphics processing unit (GPU) handles a fraction of the workload and must constantly exchange results with others. Standard Ethernet hardware was built for conventional data traffic like file transfers, but AI requires millions of rapid, low-latency updates between chips. Spectrum-X is tuned for that pattern, reducing congestion so GPUs spend more time computing and less time waiting. Meta and Oracle plan to deploy the hardware to improve efficiency across their AI infrastructure, where even small gains in utilization can yield significant cost savings.
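
To make that communication pattern concrete, here is a minimal, self-contained Python sketch. It is purely illustrative, not Nvidia or vendor code: each simulated worker computes a partial gradient on its slice of a batch, then an all-reduce step averages the results before training can continue. Every name and number below is hypothetical.

```python
# Illustrative simulation of the GPU communication pattern described above.
# Each "GPU" computes a partial gradient on its shard of the data, then all
# workers exchange and average their results (an "all-reduce") before the
# next step. This is a conceptual sketch, not real cluster code.

def compute_partial_gradient(worker_id: int, step: int) -> float:
    # Stand-in for the math each GPU does on its slice of the batch.
    return (worker_id + 1) * 0.01 + step * 0.001

def all_reduce(partials: list[float]) -> float:
    # Every worker must wait for every other worker's result; on real
    # clusters this exchange rides on the data center network, which is
    # why congestion directly limits how busy the GPUs can stay.
    return sum(partials) / len(partials)

def training_step(num_workers: int, step: int) -> float:
    partials = [compute_partial_gradient(w, step) for w in range(num_workers)]
    return all_reduce(partials)

if __name__ == "__main__":
    for step in range(3):
        avg_grad = training_step(num_workers=8, step=step)
        print(f"step {step}: averaged gradient {avg_grad:.4f}")
```

In production, that all-reduce step is handled by collective-communication libraries running over the data center fabric, which is why switch-level congestion control of the kind Spectrum-X promises translates directly into GPU utilization.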

Nvidia also introduced its Vera Rubin NVL144 architecture, a new blueprint for building AI data centers. Traditional facilities are assembled rack by rack, with separate systems for power, cooling and networking. Vera Rubin replaces that piecemeal approach with standardized modules that bundle all three functions into liquid-cooled, high-voltage units. Operators can expand capacity by adding pre-built modules rather than redesigning layouts from scratch, enabling faster deployment of what Nvidia calls “gigawatt-scale” AI factories. The architecture aims to make massive AI workloads more efficient and sustainable as demand accelerates.

Agents and Consumer AI Gain Ground

Amazon Web Services’ Bedrock AgentCore added another piece to this ecosystem. Bedrock already gives enterprises access to foundation AI models from multiple providers through a managed interface. AgentCore extends that platform by letting customers create agents: AI systems that can plan tasks, remember previous actions and interact with data or APIs autonomously. It introduces built-in memory, monitoring and governance so businesses can operationalize generative artificial intelligence without engineering new infrastructure for each use case. The release aligns with OpenAI’s AgentKit, which offers similar tools for building and standardizing agent workflows.
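
As a rough sketch of what such an agent involves, the Python example below shows a generic plan-act-remember loop: the agent breaks a goal into steps, calls a stand-in tool for each step, and stores the results in memory for later use. It is a conceptual illustration only and does not use the actual Bedrock AgentCore or AgentKit interfaces; all names are hypothetical.

```python
# Generic sketch of an agent loop: plan, act via a tool, and remember results.
# This illustrates the concept described above; it is not the Bedrock
# AgentCore or AgentKit API, and every name below is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)  # built-in "memory"

    def plan(self) -> list[str]:
        # A real agent would ask a foundation model to break the goal down.
        return [f"look up data for: {self.goal}", "summarize findings"]

    def call_tool(self, step: str) -> str:
        # Stand-in for an API or database call the agent makes autonomously.
        return f"result of '{step}'"

    def run(self) -> str:
        for step in self.plan():
            result = self.call_tool(step)
            self.memory.append(result)      # remembered for later steps
        return " | ".join(self.memory)      # final, auditable trace

if __name__ == "__main__":
    agent = Agent(goal="quarterly payments volume by region")
    print(agent.run())
```

Managed offerings like AgentCore wrap this kind of loop with the memory, monitoring and governance described above, so teams do not have to build those pieces for every use case.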

Google’s Nano Banana update extended this shift further into the consumer layer. Built on the Gemini 2.5 Flash model, Nano Banana now brings image creation and editing directly into Search, NotebookLM and soon Photos. In Search, users can upload a picture and instantly generate alternate versions, such as turning a photo of a living room into a redecorated space or a travel snapshot into a postcard. In NotebookLM, writers and researchers can create quick visual summaries of their notes or draft concept illustrations alongside text. The feature will also expand to Photos, allowing users to make context-aware edits within the app without switching tools. The rollout shows how generative functions are being folded into everyday experiences instead of existing as separate AI demonstrations.
