The artificial intelligence (AI) startup said Monday (March 2) morning that it was looking into what it called “elevated errors” on Claude. Anthropic later posted that the outage lasted 2 hours and 45 minutes and that a “fix has been implemented and we are monitoring the results.”
At one point, the company reported it had seen a repeat of the issue and was investigating. Just before 2 p.m. EST, Anthropic said it had fixed the issue once again and was monitoring the results.
The company didn’t elaborate on the cause of the outage.
During the outage, it said on its status page: “We have identified that the Claude API is working as intended. The issues we are seeing are related to Claude.ai and with the login/logout paths.”
By 9:42 a.m. on the United States’ East Coast, the company said it had discovered the source of the issue and was implementing a fix.
A report on the issue from the website Bleeping Computer characterized the outage as affecting users in multiple regions and on multiple platforms.
The outage came as Anthropic is both embroiled in a dispute with the U.S. military and reportedly assisting in combat operations against Iran.
The White House last week said that Anthropic would be cut off from its government contracts within six months, labeling the company a supply chain risk.
“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it and will not do business with them again!” President Donald Trump said in a post on Truth Social.
The clash stems from a disagreement over how the military employs Claude, with Anthropic saying it could not allow the model to be used for mass surveillance operations within the U.S., or for autonomous weapons.
After Trump’s announcement Friday, Anthropic responded by saying it would challenge the government’s actions in court.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” the company wrote on its blog, adding that Defense Secretary Pete Hegseth had no authority to issue the supply chain risk designation.
“Legally, a supply chain risk designation … can only extend to the use of Claude as part of Department of War contracts — it cannot affect how contractors use Claude to serve other customers,” Anthropic added.
Meanwhile, reports from both Reuters and The Wall Street Journal said the military had used Anthropic’s technology, Claude included, in its attack. Sources told the Journal that U.S. Central Command uses Claude for intelligence assessments, target identification and battle scenario simulations.