Thursday, April 9, 2026

AI firm Anthropic sues Trump admin over ‘supply chain risk’ label

A leading artificial intelligence company has filed a high-stakes lawsuit against the Trump administration, alleging an unlawful retaliation campaign after it refused to grant the U.S. military unrestricted access to its AI model, Claude.

Anthropic, the developer of Claude, sued multiple government agencies and officials in a California federal court on Monday. The company is seeking to reverse the Department of Defense’s (DoD) decision to designate Anthropic as a “supply chain risk” and to overturn a directive from President Donald Trump ordering federal agencies to cease using its technology.

“These actions are unprecedented and unlawful,” Anthropic argued in its filing. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” A parallel suit was also filed in a Washington, D.C., appeals court challenging the DoD’s risk designation.

The Genesis of the Conflict: Refusing Unrestricted Military Use

The dispute centers on Anthropic’s long-standing ethical usage policies, which prohibit its technology from being used for lethal autonomous warfare and mass surveillance of Americans. According to the lawsuit, Defense Secretary Pete Hegseth demanded that Anthropic “discard its usage restrictions altogether” to facilitate broader military deployment.

When Anthropic maintained its guardrails—clauses that were always part of its government contracts—Hegseth moved to label the company a “supply chain risk.” This designation was finalized on March 3. It is the first time an American AI firm has received such a label, a status historically reserved for companies linked to foreign adversaries like China or Russia. The label effectively bans any entity doing business with the U.S. military from also working with Anthropic.

Anthropic stated it has never tested or validated Claude for the specific, high-stakes applications now demanded by the Pentagon. “Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare,” the company noted in its legal complaint, underscoring its position that the technology is not ready for such uses.

An excerpt from Anthropic’s suit claiming US President Donald Trump ordered federal agencies to stop using its tech after the government had agreed to its terms. Source: CourtListener

A Precedent with National Security and Innovation Implications

The lawsuit names Secretary Hegseth, the Treasury Department and Secretary Scott Bessent, the State Department and Secretary Marco Rubio, along with 17 other agencies and officials. It frames the government’s actions as a punitive response to the company’s refusal to alter its core safety principles—a form of retaliation for protected speech and contractual stance.

The case has drawn significant support from the broader AI research community. A legal brief filed in support of Anthropic on Monday was signed by over 30 AI engineers and scientists from competitors like OpenAI and Google, including Google’s chief scientist, Jeff Dean.

The group warned of far-reaching consequences: “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” Their filing emphasizes the chilling effect such government action could have on responsible innovation and the ability of companies to set ethical boundaries.

The backdrop to this legal clash is the Pentagon’s increasing, albeit cautious, integration of commercial AI. The U.S. government has used Anthropic’s technology since 2024, and Claude was the first AI model deployed for classified work. Reports, including one from The Wall Street Journal, indicate the military even used Anthropic’s technology in an Iran strike operation after the Trump administration ordered the ban, highlighting the complex and urgent demand for advanced AI in defense contexts.

Anthropic’s lawsuit thus sits at the intersection of national security policy, corporate ethics, and constitutional law. It challenges the executive branch’s authority to penalize a private company for adhering to its published safety policies and questions the proper use of the “supply chain risk” framework against a domestic innovator. The outcome will likely influence how the U.S. government engages with the AI industry and sets boundaries for acceptable military applications of generative AI.

Cointelegraph is committed to independent, transparent journalism. This news article is produced in accordance with Cointelegraph’s Editorial Policy and aims to provide accurate and timely information. Readers are encouraged to verify information independently.

