Microsoft Backs Anthropic Against Pentagon Ban

Microsoft said Tuesday (March 10) that a judge should block the Pentagon’s designation of Anthropic as a supply chain risk, CNBC reported.


The tech giant’s comments came in a motion seeking leave to file an amicus brief in Anthropic’s lawsuit against the Trump administration, which was filed Monday (March 9), according to the report.

Microsoft said in its filing that a restraining order halting the designation would enable a more orderly transition, avoid disrupting the military’s use of AI and spare tech firms from having to immediately alter their product and contract configurations, per the report.

A Microsoft spokesperson told Reuters: “We believe everyone involved shares common goals, and we need time and a process to find common ground.”


It was reported in January that Microsoft has become one of Anthropic’s top customers and is on track to spend around $500 million per year to use Anthropic’s AI in Microsoft products. In addition, Microsoft has increased its focus on selling Anthropic AI models to its cloud customers, which could drive more revenue for both firms.

In November, Microsoft said it formed a collaboration with Anthropic under which Anthropic will scale its Claude AI model on Microsoft Azure. Anthropic agreed to purchase $30 billion of Azure compute capacity, while Microsoft pledged to invest up to $5 billion in Anthropic.


The White House told federal agencies Feb. 27 to stop using Anthropic’s AI products after the company refused a Pentagon demand that it agree that the military can use its models in “all lawful use cases.”

Anthropic wanted contract language that would prohibit the use of its models for autonomous weapons and mass domestic surveillance, while Pentagon officials took the position that the military should retain final authority over use cases, as long as they are lawful.

Defense Secretary Pete Hegseth said of Anthropic in a Feb. 27 post on X: “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.”

Anthropic sued the U.S. government Monday (March 9) to block it from placing a supply chain risk designation on the AI company. The designation would also require any company wishing to do business with the military to end its relationship with Anthropic.