Anthropic labelled a supply chain risk by Pentagon

Lily Jamali, North America Technology correspondent, and
Kali Hays, Technology reporter
Image: US Secretary of Defence Pete Hegseth standing in a blue suit and tie. (Reuters)

The US has officially deemed artificial intelligence (AI) firm Anthropic a supply chain risk — the first time the government has given that label to a US firm.

The Pentagon's designation is the latest escalation in a clash over Anthropic's refusal to give the government unfettered access to its AI tools, citing concerns they would be used for mass surveillance and autonomous weapons.

In a statement, a senior Pentagon official said the supply chain risk designation was "effective immediately."

The AI developer had been in talks with the Department of Defense in recent days, even after tensions between them spilled into public view last week, sources familiar with discussions have told the BBC.

Those talks did not prove fruitful, according to a person familiar with Anthropic who asked not to be identified, in part because of how President Donald Trump and other members of his administration had publicly berated the company.

Leadership at Anthropic had thought last week the two sides were near a resolution, after weeks of back and forth. Then Trump posted on his Truth Social platform that he was directing all federal agencies to stop using Anthropic, the person familiar added.

"We don't need it, we don't want it, and will not do business with them again!" Trump wrote in the Friday post.

Hegseth followed up with a post on X, writing that Anthropic would be "immediately" designated a supply chain risk, prohibiting any business working with the military from "any commercial activity with Anthropic".

Anthropic received no communication from the White House or the Pentagon that these statements were coming.

According to a person familiar with discussions, the feeling inside Anthropic is that it is disliked by some in the Trump administration as its chief executive has not been among the tech leaders to donate large sums to Trump or publicly praise him.

A Pentagon official said on Thursday: "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes."

"The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

'Shortsighted and self-destructive'

Anthropic's tools had been used by the US government and military since 2024, and it was the first advanced AI company to have its tools deployed in government agencies doing classified work.

However, as its relationship with the US military has soured, its rival OpenAI has stepped in.

Sam Altman, chief executive and co-founder of OpenAI, has said his new contract with the defence department has "more guardrails than any previous agreement for classified AI deployments, including Anthropic's".

Senator Kirsten Gillibrand said on Thursday that designating Anthropic as a supply chain risk was "shortsighted, self-destructive, and a gift to our adversaries."

"The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States," Gillibrand added.

Anthropic's AI app, Claude, remains popular despite the firm's public fallout with the US government.

Claude is the most downloaded AI app in several countries. Anthropic's chief product officer said on Thursday that "more than a million people" are signing up for Claude every day.
