
Trump Administration Drafts Strict AI Contract Rules Amid Pentagon Dispute With Anthropic

March 8, 2026

The Trump administration has drafted new rules governing artificial intelligence contracts with civilian agencies that would require companies to allow the U.S. government broad access to their technology, according to The Financial Times. The proposed guidelines come amid an escalating dispute between the Pentagon and AI developer Anthropic over how government agencies may use advanced AI systems.


Per The Financial Times, the draft guidance would require AI companies seeking federal contracts to grant the government an irrevocable license allowing officials to use their models for any lawful purpose. The policy would apply to civilian contracts managed by the General Services Administration (GSA) and forms part of a wider effort to strengthen how federal agencies procure AI services.

The rules emerge at a moment of heightened tension between the U.S. government and Anthropic. According to The Financial Times, the Pentagon recently designated the company a “supply-chain risk,” effectively barring government contractors from using its technology for work tied to the U.S. military. The move followed a months-long disagreement over Anthropic’s attempts to impose safeguards limiting how its systems could be deployed.

The conflict intensified after the Defense Department said it would terminate a $200 million contract with the company. According to The Financial Times, Anthropic declined to grant the Pentagon unrestricted authority to use its technology, citing concerns that powerful AI models could be deployed for domestic surveillance or integrated into lethal autonomous weapons systems. The company had sought contractual protections before allowing its tools to be used for what officials described as “all lawful use.”

Defense Secretary Pete Hegseth criticized the company’s position, arguing its “true objective” was “to seize veto power over the operational decisions of the United States military.”


Related: US Expands Ban on Anthropic AI Across Key Agencies

The dispute has had immediate consequences for the company’s relationship with the federal government. Josh Gruenbaum, commissioner of the Federal Acquisition Service within the GSA, confirmed that Anthropic had been removed from a government procurement arrangement. “It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” Gruenbaum said in an email to Reuters. “As directed by the President, GSA has terminated Anthropic’s OneGov deal – ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts.”

The White House did not immediately respond to requests for comment from Reuters.

The draft GSA guidance outlines several conditions companies must meet if they want to supply AI tools to civilian agencies. According to The Financial Times, contractors would be required to ensure their systems remain politically neutral. The draft states: “The contractor must not intentionally encode partisan or ideological judgments into the AI systems data outputs.”

Additional language requires companies to disclose whether their models have been altered to comply with foreign regulatory regimes or commercial standards. Per The Financial Times, officials involved in the drafting process said this provision could partly address concerns about compliance with the European Union’s Digital Services Act.

The guidance also calls for contractors to deliver “a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity, inclusion.” According to The Financial Times, the provision reflects a broader policy push following an executive order from President Donald Trump aimed at curbing what his administration has labeled “woke” AI systems.

Although the new rules currently apply only to civilian contracts, The Financial Times reported that the Pentagon is considering similar requirements for military procurement.

Source: The Financial Times