Nvidia Acquires SchedMD to Manage AI Workloads

Nvidia has acquired SchedMD and said it will continue to distribute that company’s open-source Slurm software.

Slurm, a workload management system for high-performance computing (HPC) and artificial intelligence (AI), is used in more than half of the top 10 and top 100 systems in the TOP500 list of supercomputers, Nvidia said in a Monday (Dec. 15) blog post.
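
For context, the snippet below is a minimal sketch of how a job is typically described and submitted on a Slurm-managed cluster; the resource requests, file names and the training command are illustrative assumptions, not details from Nvidia's post.

import subprocess

# A minimal sketch (not from Nvidia's announcement) of what a workload
# manager like Slurm does: the job below asks for one node with one GPU
# for 30 minutes, and Slurm queues it until the cluster can run it. The
# script name and the "python train.py" command are hypothetical.
job_script = """#!/bin/bash
#SBATCH --job-name=train-demo
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:30:00
srun python train.py
"""

with open("train_demo.sbatch", "w") as handle:
    handle.write(job_script)

# sbatch hands the job to the Slurm controller, which schedules it onto
# available nodes; "squeue" would show it pending or running.
subprocess.run(["sbatch", "train_demo.sbatch"], check=True)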

The two companies have been collaborating for over a decade, according to the post.

With the acquisition, Nvidia will continue to invest in Slurm’s development “to ensure it remains the leading open-source scheduler for HPC and AI”; offer open-source software support, training and development for Slurm to SchedMD’s customers; and develop and distribute Slurm as open-source, vendor-neutral software available to the broader HPC and AI community, according to the post.

“Nvidia will accelerate SchedMD’s access to new systems — allowing users of Nvidia’s accelerated computing platform to optimize workloads across their entire compute infrastructure — while also supporting a diverse hardware and software ecosystem, so customers can run heterogeneous clusters with the latest Slurm innovations,” Nvidia said in the post.

SchedMD CEO Danny Auble said in the release that the acquisition demonstrates the importance of Slurm’s role in demanding HPC and AI environments.

“Nvidia’s deep expertise and investment in accelerated computing will enhance the development of Slurm — which will continue to be open source — to meet the demands of the next generation of AI and supercomputing,” Auble said.

In an earlier transaction, Nvidia said in April 2024 that it planned to acquire Run:ai, a provider of Kubernetes-based workload management and orchestration software.

That acquisition was finalized in January after being cleared by regulators.

When announcing that it had entered into a definitive agreement to acquire Run:ai, Nvidia said the deal would help customers make more efficient use of their AI computing resources.

“Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments,” Nvidia said at the time in a blog post.

Nvidia CEO Jensen Huang said in November that Nvidia is operating through “three massive platform shifts at once” as companies move from traditional computing to accelerated computing, from classical machine learning to generative AI, and now toward agentic systems that perform multistep tasks.