This new collaboration is intended to help CoreWeave accelerate the development of more than 5 gigawatts of AI data center capacity by 2030, the companies announced Monday (Jan. 26). As part of the partnership, Nvidia has invested $2 billion in CoreWeave stock.
“From the very beginning, our collaboration has been guided by a simple conviction: AI succeeds when software, infrastructure and operations are designed together,” Michael Intrator, co-founder, chairman and CEO, CoreWeave, said in a news release. “NVIDIA is the leading and most requested computing platform at every phase of AI — from pre-training to post-training — and Blackwell provides the lowest cost architecture for inference. This expanded collaboration underscores the strength of demand we are seeing across our customer base and the broader market signals as AI systems move into large-scale production.”
Gigawatts are units of power commonly used to express the capacity of an AI data center. A CNBC report on the partnership cites Energy Information Administration data indicating that 5 gigawatts of capacity, run continuously, would match the yearly electricity consumption of roughly 4 million American households.
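That comparison can be sanity-checked with back-of-the-envelope arithmetic, assuming continuous operation at full capacity and roughly 10,800 kWh of electricity use per U.S. household per year (an approximation of EIA's reported average, not a figure quoted in the report):

$$
5\,\text{GW} \times 8{,}760\,\text{h/yr} = 43{,}800\,\text{GWh/yr},
\qquad
\frac{43{,}800\,\text{GWh/yr}}{10{,}800\,\text{kWh per household per yr}} \approx 4.1\ \text{million households}
$$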
“The thing to remember is we’ve invested $2 billion into CoreWeave, but recognize that the amount of funding that needs to be raised yet to support that five gigawatts is really quite significant,” Nvidia CEO Jensen Huang told CNBC. “We’re investing a small percentage of the amount that ultimately has to go and be provided.”
PYMNTS wrote recently about a growing body of research challenging industry assumptions about data center needs. This work argues that the infrastructure requirements of AI have been shaped more “by early architectural choices” than “by unavoidable technical constraints.”
Among the studies is a recent one from Switzerland-based tech university EPFL, which contends that while frontier model training is still computationally intensive, many operational AI systems can run without centralized hyperscale facilities.
“Instead, these systems can distribute workloads across existing machines, regional servers or edge environments, reducing dependency on large, centralized clusters,” PYMNTS said.
The research also illustrates a growing mismatch between AI infrastructure and real-world enterprise use cases, PYMNTS added.
These systems often depend on smaller models, repeated inference and localized data rather than ongoing access to massive, centralized models. As PYMNTS has reported, Nvidia has argued that small language models (SLMs) could carry out 70% to 80% of enterprise tasks, leaving the most complex reasoning to large-scale systems.
“That two-tier structure, small for volume, large for complexity, is emerging as the most cost-effective way to operationalize AI,” the report added.
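As a concrete illustration of that two-tier pattern, here is a minimal routing sketch in Python. The model names, the `estimate_complexity` heuristic and the `call_model` stub are hypothetical placeholders, not APIs from Nvidia, CoreWeave or the EPFL study; the point is only the structure: send routine, high-volume requests to a small local model and escalate complex ones to a large centralized one.

```python
# Hypothetical sketch of a two-tier inference router:
# a small model handles high-volume routine requests,
# a large model handles the minority that need complex reasoning.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    requires_tools: bool = False  # e.g., multi-step planning or tool use


def estimate_complexity(req: Request) -> float:
    """Toy heuristic: longer prompts and tool use suggest harder tasks."""
    score = min(len(req.prompt) / 2000, 1.0)
    if req.requires_tools:
        score += 0.5
    return score


def call_model(model_name: str, prompt: str) -> str:
    """Placeholder for an actual inference call (local SLM or hosted LLM)."""
    return f"[{model_name}] response to: {prompt[:40]}..."


def route(req: Request, threshold: float = 0.6) -> str:
    # Small model for volume, large model for complexity.
    if estimate_complexity(req) < threshold:
        return call_model("small-local-slm", req.prompt)  # edge or regional server
    return call_model("large-centralized-llm", req.prompt)  # hyperscale cluster


if __name__ == "__main__":
    print(route(Request("Summarize this invoice line item.")))
    print(route(Request("Plan a multi-step supply-chain optimization.", requires_tools=True)))
```

In this sketch the threshold and heuristic are arbitrary; the design choice it illustrates is simply that most traffic never needs to reach a large, centralized cluster.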