BlackRock Says AI Partnership Raises $12.5 Billion Toward $30 Billion Goal

BlackRock has raised $12.5 billion in its artificial intelligence partnership with Microsoft, Bloomberg reported Thursday (Jan. 15), citing commentary from the company’s fourth-quarter 2025 earnings call.

The investment management firm and the tech giant joined forces in 2024 to fund data centers behind the AI boom and are now closer to their $30 billion goal, the report said. The partnership also includes Nvidia, xAI and MGX, a United Arab Emirates-affiliated investment group.

The effort “continues to attract significant capital,” BlackRock CEO Larry Fink told analysts during the call, per the report.

The partnership aims to raise $30 billion of private equity capital and then mobilize up to $100 billion in investment potential, including debt financing for infrastructure projects, PYMNTS reported in September 2024.

“We are committed to ensuring AI helps advance innovation and drives growth across every sector of the economy,” Microsoft Chairman and CEO Satya Nadella said at the time. “The Global AI Infrastructure Investment Partnership will help us deliver on this vision, as we bring together financial and industry leaders to build the infrastructure of the future and power it in a sustainable way.”

The group struck a $40 billion deal in October to acquire Aligned Data Centers from Macquarie Asset Management, which called it the largest data center acquisition in history.

Meanwhile, new research suggests that AI may no longer need massive data centers to scale.

A study from EPFL, the Switzerland-based technical university, found that while frontier model training remains computationally intensive, many operational AI systems can be deployed without centralized hyperscale facilities.

Instead, these systems can distribute workloads across existing machines, regional servers or edge environments, reducing reliance on large, centralized clusters.

“The research highlights a growing mismatch between AI infrastructure and real-world enterprise use cases,” PYMNTS reported Friday (Jan. 9). “These systems often rely on smaller models, repeated inference and localized data rather than continuous access to massive, centralized models.”

Nvidia found that small language models could handle 70% to 80% of enterprise tasks, leaving the most complex reasoning to large-scale systems, a structure that is becoming the most cost-effective way to operationalize AI.
