Nvidia’s Blowout Earnings Cement AI, Accelerated Computing as Foundational Shifts  

The recent artificial intelligence (AI) boom is no game for $1.67 trillion chipmaker Nvidia. 

But just five years ago, the company — which invented GPUs — was focused on graphics cards designed to make video games and other applications run faster. 

Now, Nvidia is one of the top five most valuable public companies in the world by market capitalization, and at the center of a generative AI revolution that quite literally cannot run without its products.

More than $1.2 trillion of that market capitalization has been added to Nvidia’s valuation in the past 12 months.

On Wednesday’s (Feb. 21) fourth quarter and fiscal year 2024 earnings call, the company revealed that its revenue grew 265% compared to Q4 2023, and was up 22% from its most recent quarter. 

The financial results, which managed to beat Wall Street’s already sky-high expectations, are as vivid a sign as any that the transformative power of AI is here to stay. 

“Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries, and nations,” said Jensen Huang, founder and CEO of Nvidia. 

“We have the speed, scale, and reach to help every company in every industry become an AI company. … The year ahead will bring major new product cycles with exceptional innovations to help propel our industry forward,” Huang added, noting that “every single species of AI,” going back to the breakthrough AlexNet neural network that kickstarted the modern AI era, has been supported by — and built atop — Nvidia’s solutions. 

And those solutions, which include AI infrastructure through both the cloud and on-premise architecture, have been delivering for the company, its investors, and the tech ecosystem in a big way. 

Read more: Infrastructure Vendors Were the Market Winners in AI’s First Year

Nvidia Hardware Runs Tech Sector

Per the company’s CFO commentary, Nvidia’s Data Center compute revenue was up 488% from a year ago and up 27% sequentially in the fourth quarter, and up 244% for the full fiscal year. Networking revenue was up 217% from a year ago and up 28% sequentially in the fourth quarter, and up 133% for the fiscal year. 

The field of large language models (LLMs) is thriving, and Nvidia did not play favorites when listing its AI clients for investors. The company cited nearly the entire upper tier of the AI ecosystem, example by example, to highlight the blue-chip businesses relying on Nvidia products to underpin the advances they are driving across their sectors. 

“Our Data Center platform is powered by increasingly diverse drivers — demand for data processing, training and inference from large cloud-service providers and GPU-specialized ones, as well as from enterprise software and consumer internet companies. Vertical industries — led by auto, financial services and healthcare — are now at a multibillion-dollar level,” Huang said.  

He noted that Nvidia’s Hopper architecture has emerged as the “de facto standard” for accelerated computing. 

As detailed in the company’s results, roughly 40% of Data Center revenue came from inference — the workloads that run already-trained AI models to serve users, as distinct from the training that builds them. 

“For the very first time, a data center is not just about computing and storing and serving company employees. Now we have data centers that are AI generation factories that take raw material (data) and transform it into incredibly valuable tokens for AI systems,” Huang explained. 

See also: Peeking Under the Hood of AI’s High-Octane Technical Needs

Computing Transformation, AI Revolution

PYMNTS reported in December that when AI startups raise money from investors, the first thing many of them do is turn around and pay that money to infrastructure vendors for the compute needed to train their AI models. Nvidia sits at the center of this computing and data infrastructure landscape. 

“Fundamentally, the conditions are excellent for continued growth. We’re at the beginning of two industrywide transitions. A transition from general to accelerated computing. General purpose computing is starting to run out of steam … so you have to accelerate everything, which allows you to dramatically improve your speed, costs, and efficiency,” Huang explained. 

“The speed is so incredible that we enabled a second industrywide transition: generative AI. GenAI is a new application and a new way of computing, and you cannot do it on general purpose computing,” Huang added.