Nvidia is scheduled to announce its fourth-quarter earnings following the close of the U.S. market on Wednesday (Feb. 21), and analysts expect AI chips to be vital to the company’s trajectory.
Investors will closely examine remarks from Nvidia CEO Jensen Huang for insights into the sustainability of the company’s substantial growth. Over the past year, Nvidia’s stock has more than tripled, fueled by heavy demand for its graphics processing units (GPUs) amid the artificial intelligence (AI) boom. AI chatbot ChatGPT runs on thousands of Nvidia GPUs.
“Nvidia’s first and primary advantage arises from software that has been highly optimized to run on its AI chips. AI chips need to perform many multiplications and additions,” Benjamin Lee, a professor who studies computer architecture at the University of Pennsylvania, said in an interview. “Many companies are capable of building chips for this fairly mature hardware capability. However, performing these calculations at high rates requires sophisticated software optimizations, which Nvidia has developed within its CUDA software for performing more general types of computation on its graphics processors.”
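Those multiplications and additions are, at bottom, large matrix operations. As a rough illustration of why the software layer matters (a minimal PyTorch sketch, not Nvidia’s own code; the matrix sizes are arbitrary), the very same multiply-add workload is routed through Nvidia’s CUDA-optimized libraries whenever a compatible GPU is present:

```python
import torch

# A neural-network layer boils down to one big matrix multiply:
# millions of fused multiplications and additions.
x = torch.randn(4096, 4096)
w = torch.randn(4096, 4096)

# On a CPU, the multiply-adds run on a handful of general-purpose cores.
y_cpu = x @ w

# With a CUDA-capable GPU, the identical call is dispatched to
# Nvidia's heavily optimized CUDA libraries (cuBLAS under the hood),
# where thousands of cores execute the multiply-adds in parallel.
if torch.cuda.is_available():
    y_gpu = x.cuda() @ w.cuda()
```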
On Tuesday (Feb. 20), Nvidia’s shares dropped 4.35%. Analysts anticipate the company will report a major year-over-year surge in revenue, driven primarily by an expected $17.06 billion in data center revenue, mainly from sales of AI GPUs such as the H100.
Nvidia’s growth potential is closely tied to the extent of AI adoption, Joshua Pantony, CEO of Boosted.ai, said in an interview.
“The technology — though we certainly believe it is extremely transformative — is still in its infancy, but as people continue to discover the efficiency gains AI can offer, increased demand will fall on the computing power required to handle these tasks,” Pantony said. “There’s a reason everyone from Meta to the government of Canada is buying all the chips they can get their hands on — use cases and everyday use will continue to expand.”
Nvidia’s most significant advantage over its competitors is that its CUDA software libraries are tightly integrated with the most popular machine learning frameworks, such as PyTorch, Lee said. The company’s secondary advantage lies in the high-speed networks and system architectures that link multiple GPUs together, letting them coordinate quickly and efficiently on large models that cannot run on a single GPU. That, in turn, makes it possible to build larger and faster AI supercomputers.
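In practice, that combination lets a model too large for one GPU be split across several with a few lines of framework code. The sketch below is hypothetical and assumes a machine with at least two CUDA GPUs; the two-stage model and layer sizes are invented for illustration:

```python
import torch
import torch.nn as nn

class TwoGpuModel(nn.Module):
    """Toy model split across two GPUs, one stage per device."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(8192, 8192).to("cuda:0")
        self.stage2 = nn.Linear(8192, 8192).to("cuda:1")

    def forward(self, x):
        # Activations hop between GPUs; Nvidia's interconnect and
        # driver stack (NVLink/PCIe, NCCL) carry the traffic.
        h = self.stage1(x.to("cuda:0"))
        return self.stage2(h.to("cuda:1"))

model = TwoGpuModel()
out = model(torch.randn(16, 8192))
```

Production frameworks automate this kind of partitioning across hundreds or thousands of GPUs, which is where the high-speed networking Lee describes becomes decisive.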
Lee said no rival product, including AMD’s graphics processors and Google’s tensor processing units (TPUs), has a software ecosystem as mature and optimized as CUDA. AMD offers an open-source software stack, ROCm, so that machine learning can run effectively on its chips, but the framework is far less prevalent than CUDA. Lee noted that software support is the most critical determinant of machine learning performance.
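One symptom of that dominance is that competing stacks often mimic CUDA rather than replace it: ROCm builds of PyTorch, for example, expose AMD GPUs through the same torch.cuda interface. A small sketch (behavior depends on which PyTorch build is installed):

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs answer to the CUDA-named API --
# a telling sign of how entrenched CUDA's programming model is.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"Backend: {backend}; device: {torch.cuda.get_device_name(0)}")

    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x  # dispatched to cuBLAS on Nvidia, rocBLAS on AMD
```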
“AMD needs to develop tools and libraries that allow practitioners to quickly achieve high performance for a variety of machine learning computations,” he said. “Facilitating setup and providing mature documentation will also encourage adoption. However, it will face a steep climb because CUDA has developed a fairly significant mindshare.”
Most companies are currently focused on acquiring enough GPUs to train large, advanced models, Lee noted. But in the future, he said, as users integrate generative AI into their day-to-day workflows, many companies will turn to acquiring even more GPUs to serve those trained models. In other words, models in production will need to respond to a very large number of queries and prompts, which in turn will drive GPU demand, he added.
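The two phases Lee distinguishes place different demands on the hardware. A toy PyTorch sketch of the contrast (the single-layer model and sizes are invented for illustration):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 1024).to(device)

# Training: forward AND backward passes, repeated over massive
# datasets -- the workload companies buy GPUs for today.
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(32, 1024, device=device)
loss = model(x).pow(2).mean()
loss.backward()
opt.step()

# Serving (inference): a forward pass only, but executed once per
# user query -- at chatbot scale, an enormous number of passes.
with torch.no_grad():
    answer = model(torch.randn(1, 1024, device=device))
```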
Nvidia’s competitors aren’t standing still. In the server space, Intel and AMD are making progress pushing out better GPUs to go after an AI hardware market estimated to reach $400 billion by 2027, Olivier Blanchard, research director for semiconductors, devices and EVs at The Futurum Group, said in an interview.
AMD’s MI300 accelerators, in particular, seem like the strongest competitive alternative to Nvidia’s supply-challenged H100 and could give AMD a strong on-ramp for growth in the AI market, Blanchard said. Its Instinct MI300X accelerators and Instinct MI300A APUs are already powering cloud and enterprise AI infrastructure, with the MI300X reportedly being used by Microsoft, Meta, Oracle, Dell Technologies, Hewlett Packard Enterprise (HPE), Lenovo, Supermicro, Arista, Broadcom and Cisco.
“While Nvidia enjoys an enviably larger share of the market, AMD currently provides the most viable alternative to the H100 with the MI300, in my opinion,” Blanchard said. “This becomes all the more relevant for OEMs looking to de-risk their supply chains or simply addressing chronic supply constraints.”
Intel, for its part, recently announced its new AI-optimized Gaudi3 accelerators alongside its 5th-Gen Xeon Scalable CPUs for data centers.
“It’s a bit too soon to know how well Gaudi3 will perform against its Nvidia and AMD rivals, but the point is that the AI opportunity is large enough for all three chipmakers to find their own lanes so long as they can field competitive products at scale,” Blanchard said. “Nvidia is going to be difficult to displace in the GPU space, but that shouldn’t be the goal for Intel and AMD yet. Establishing a beachhead and pushing towards double-digit market share as fast as possible should be the focus for now.”
However, Pantony said that Nvidia has a first-mover advantage over its competitors.
“They have been operating in the AI space since they launched DGX in 2016, and along with the extensive support their CUDA API allows, which launched even earlier, back in 2006, NVDA is firmly cemented in people’s minds as the AI chipmaker,” he said. “That early edge also means they are the best. No one else comes close to the same speeds at training AI models that NVDA does.”