The U.S. government has officially labeled leading AI firm Anthropic an “unacceptable risk” to national security, raising concerns that the company could disable or alter its technology against U.S. interests during wartime. As detailed by The New York Times, the designation effectively blocks the San Francisco-based company, known for its Claude chatbot, from working with federal agencies.
In a 40-page filing submitted Tuesday to the U.S. District Court for the Northern District of California, government lawyers questioned whether Anthropic could be considered a “trusted partner.” They argued that advanced AI systems are “acutely vulnerable to manipulation” and warned that giving the company access to Department of Defense systems would introduce unacceptable risk into military supply chains.
The filing marks the government’s first formal response to lawsuits Anthropic filed earlier this month, escalating a dispute that could reshape how private AI firms interact with national defense programs.
The dispute centers on control over military AI use
Anthropic filed two lawsuits on March 9, one in California and another in the U.S. Court of Appeals for the District of Columbia Circuit, challenging Defense Secretary Pete Hegseth’s decision to apply the “supply chain risk” label. The designation has historically been reserved for foreign companies considered national security threats, making its use against a U.S.-based firm highly unusual.
The conflict stems from a reported $200 million contract negotiation over deploying Anthropic’s AI in classified systems. During talks, the company pushed back against allowing its technology to be used for mass surveillance or autonomous lethal weapons.
Pentagon officials argued that private companies cannot dictate how the government uses technology tied to national defense. When negotiations stalled, Hegseth moved to formally classify Anthropic as a risk, triggering the current legal battle and raising broader concerns about government oversight of emerging technologies.
Anthropic has accused the Pentagon of acting on ideological grounds, warning that the designation could drive away more than 100 enterprise customers and result in billions in lost revenue. A hearing on Anthropic’s request for a preliminary injunction is scheduled for next Tuesday, where a federal judge will weigh whether to block the designation while the case proceeds.
Despite the dispute, the Pentagon has continued using Anthropic’s technology in a pilot program launched last year. Two defense officials confirmed the AI tools are still being used for intelligence analysis.
The case has drawn support from across the tech and legal communities. The American Civil Liberties Union and the Center for Democracy and Technology filed a joint brief arguing that Anthropic’s refusal to support certain military uses is protected speech under the First Amendment.
Microsoft has also filed a friend-of-the-court brief backing Anthropic, urging the court to pause the designation. A group of 37 engineers and researchers from companies including OpenAI and Google, among them Google chief scientist Jeff Dean, submitted a separate filing in support of the AI firm.