Nvidia and AMD Reveal Dueling Paths for AI’s Future

At the 2026 Consumer Electronics Show in Las Vegas, two of the world’s most influential chip executives offered sharply different roadmaps for the next phase of artificial intelligence (AI).

Nvidia Founder and CEO Jensen Huang framed AI as an industrial platform that operates in the physical world. AMD Chair and CEO Lisa Su focused on the scale of compute the next decade of AI will demand, arguing that flexibility across data centers, PCs and edge systems will determine who can keep up.

Together, the keynotes offered a snapshot of how leading silicon companies are positioning themselves at the center of what both executives described as a long-running industrial shift driven by AI.

Physical AI and the Rise of AI Factories

Huang used his keynote on Monday (Jan. 5) to argue that AI has moved beyond software models running in data centers and into a new phase where systems must perceive, reason and act in the physical world. He framed that shift as a structural change for the industry rather than an incremental improvement.

“The ChatGPT moment for physical AI is here,” Huang said, describing a point at which machines begin to understand real-world environments and operate within them.

Huang leaned heavily on robotics, autonomous vehicles and simulation to make the case. He highlighted Nvidia’s autonomous driving software and robotics platforms, arguing that AI systems trained in simulation can now generalize to complex, real-world scenarios. He said those systems can learn from human demonstrations, anticipate edge cases and explain their decisions. He positioned those capabilities as prerequisites for large-scale deployment.

Read more: Big Tech Kicks off 2026 With AI Product Updates and Releases

Throughout the speech, Huang described Nvidia not simply as a chip supplier but as a builder of full AI “factories.” He positioned the company’s GPUs, networking, software frameworks and developer tools as tightly integrated systems designed to produce intelligence at industrial scale. In Huang’s framing, enterprises will install complete AI production stacks rather than assembling infrastructure component by component.

He also emphasized digital twins as a core technology. Huang argued that by creating simulated replicas of factories, vehicles and infrastructure, companies can train AI systems faster and deploy them more safely. That approach, he said, allows AI to move closer to the edge, where it can interact directly with physical environments such as roads, warehouses and manufacturing floors.

Push for Compute at Unprecedented Scale

Su took a more infrastructure-centric approach in her speech on Monday (Jan. 5). Her keynote focused on the rapid growth in AI workloads and the scale of computing power required to sustain that trajectory.

“How many of you know what a yottaflop is?” Su asked, before explaining that future AI systems could require levels of compute far beyond today’s supercomputers.

A yottaflop refers to one septillion floating-point operations per second, or 10²⁴ calculations per second, a unit six orders of magnitude beyond the exaflop-scale systems that currently define the high end of supercomputing. Su used the term to describe the aggregate compute capacity AI could demand over time as models grow larger, inference becomes continuous and workloads spread across cloud data centers, PCs and embedded devices.
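To put the unit in perspective, here is a minimal back-of-the-envelope sketch. The exaflop baseline reflects the scale of today’s largest supercomputers; none of the figures come from Su’s keynote:

```python
# Rough scale comparison between a yottaflop and today's
# exaflop-class supercomputers. Illustrative arithmetic only.
YOTTAFLOP = 1e24  # floating-point operations per second (10**24)
EXAFLOP = 1e18    # scale of today's largest supercomputers (10**18)

print(f"1 yottaflop = {YOTTAFLOP / EXAFLOP:,.0f} exaflops")
# -> 1 yottaflop = 1,000,000 exaflops
```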

Su presented AMD’s CPUs, GPUs and adaptive silicon as modular infrastructure that customers can tune across data centers, PCs and embedded systems. She highlighted upcoming accelerators alongside processors designed for local AI workloads, positioning AMD as a supplier of flexible building blocks rather than vertically integrated systems.

She also addressed energy constraints directly, warning that AI’s expansion will stress power grids and data center capacity. Su said advances in performance per watt will determine how quickly the industry can scale, particularly as more AI workloads move closer to users and devices.
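Su cited no specific efficiency figures, but a rough sketch shows why performance per watt dominates the scaling question. The 50-gigaflops-per-watt figure below is an illustrative assumption, roughly the double-precision efficiency of today’s exaflop-class systems, not an AMD specification:

```python
# Hypothetical power budget for yottaflop-scale compute.
# PERF_PER_WATT is an assumed round number, not a vendor spec.
PERF_PER_WATT = 50e9   # 50 gigaflops per watt (assumption)
TARGET_FLOPS = 1e24    # one yottaflop, aggregate

watts = TARGET_FLOPS / PERF_PER_WATT
print(f"Power required: {watts / 1e12:.0f} terawatts")
# -> Power required: 20 terawatts, vastly beyond the roughly 20-30 MW
# draw of today's largest supercomputers, which is why efficiency
# gains, not just more chips, set the pace of scaling.
```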

Different Routes Toward the Edge

Despite their contrasting emphases, both executives converged on a shared argument: AI’s next growth phase depends on pushing intelligence closer to where data is generated.

At CES 2026, Huang and Su made clear that AI’s future will not hinge on a single model or breakthrough. It will depend on how effectively the industry builds and distributes the infrastructure that turns compute into deployed intelligence.