
Pentagon’s AI Push Faces Friction With Anthropic Over Usage Restrictions

February 15, 2026

The Pentagon is considering ending its relationship with artificial intelligence company Anthropic after months of disagreements over how the U.S. military can use the firm’s AI systems, according to a report by Axios that was cited by Reuters. The potential move comes as defense officials push major AI developers to remove certain restrictions on their technology, sources told Axios.


The dispute centers on a demand from the Pentagon that would allow the U.S. military to use AI tools for “all lawful purposes,” including weapons development, intelligence work and battlefield missions, per Reuters. Anthropic has resisted these broader terms, maintaining limits on uses such as fully autonomous weapons and mass domestic surveillance, which Pentagon officials see as restrictive hurdles in defense applications.


While other leading AI firms — including OpenAI, Google, and xAI — have reportedly moved toward agreements that relax some usage limits in defense contexts, Anthropic’s stance has strained its relationship with the Defense Department. The negotiations have reportedly frustrated Pentagon officials after several months of talks, Axios said, according to Reuters.

An Anthropic spokesperson told Reuters that the company has not discussed the deployment of its Claude model for specific military operations with the Pentagon. Instead, conversations with the U.S. government have focused on policy questions related to the company’s usage guidelines, particularly restrictions designed to prevent certain applications of AI technology. Those topics, the spokesperson said, did not involve current operations.


The Wall Street Journal reported separately that Anthropic’s Claude model was used in a U.S. military operation targeting former Venezuelan President Nicolás Maduro earlier this year, with the technology accessed through a collaboration between Anthropic and data firm Palantir. Reuters later confirmed that the Pentagon is encouraging AI firms to make their systems available on classified networks with fewer of the usual restrictions, a shift that has amplified tensions with developers.

The Pentagon did not immediately respond to a Reuters request for comment on potential changes to its relationship with Anthropic. Observers say the standoff highlights the broader challenge facing the U.S. military as it seeks to deepen its reliance on cutting-edge AI while balancing ethical concerns and commercial developers’ safety policies.

Source: Reuters