Image by RyanDonegan, CC BY 2.0

The Pentagon just labeled a top AI firm an “unacceptable risk,” but what sparked the fallout could reshape wartime tech

The U.S. government has officially labeled leading AI firm Anthropic an “unacceptable risk” to national security, raising concerns that the company could disable or alter its technology against U.S. interests during wartime. As detailed by The New York Times, the designation effectively blocks the San Francisco-based company, known for its Claude chatbot, from working with federal agencies.

In a 40-page filing submitted Tuesday to the U.S. District Court for the Northern District of California, government lawyers questioned whether Anthropic could be considered a “trusted partner.” They argued that advanced AI systems are “acutely vulnerable to manipulation” and warned that giving the company access to Department of Defense systems would introduce unacceptable risk into military supply chains.

The filing marks the government’s first formal response to lawsuits Anthropic filed earlier this month, escalating a dispute that could reshape how private AI firms interact with national defense programs.

The dispute centers on control over military AI use

Anthropic filed two lawsuits on March 9, one in California and another in the U.S. Court of Appeals for the District of Columbia Circuit, challenging Defense Secretary Pete Hegseth’s decision to apply the “supply chain risk” label. The designation has historically been reserved for foreign companies considered national security threats, making its use against a U.S.-based firm highly unusual.

The conflict stems from a reported $200 million contract negotiation over deploying Anthropic’s AI in classified systems. During talks, the company pushed back against allowing its technology to be used for mass surveillance or autonomous lethal weapons.

Pentagon officials argued that private companies cannot dictate how the government uses technology tied to national defense. When negotiations stalled, Hegseth moved to formally classify Anthropic as a risk, triggering the current legal battle and raising broader concerns about government oversight of emerging technologies.

Anthropic has accused the Pentagon of acting on ideological grounds, warning that the designation could drive away more than 100 enterprise customers and result in billions in lost revenue. A hearing on Anthropic’s request for a preliminary injunction is scheduled for next Tuesday, where a federal judge will weigh whether to block the designation while the case proceeds.

Despite the dispute, the Pentagon has continued using Anthropic’s technology in a pilot program launched last year. Two defense officials confirmed the AI tools are still being used for intelligence analysis.

The case has drawn support from across the tech and legal communities. The American Civil Liberties Union and the Center for Democracy and Technology filed a joint brief arguing that Anthropic’s refusal to support certain military uses is protected speech under the First Amendment.

Microsoft has also filed a friend-of-the-court brief backing Anthropic, urging the court to pause the designation. A group of 37 engineers and researchers from companies including OpenAI and Google, among them Google chief scientist Jeff Dean, submitted a separate filing in support of the AI firm.

Author
Saqib Soomro
Politics & Culture Writer
Saqib Soomro is a writer covering politics, entertainment, and internet culture. He spends most of his time following trending stories, online discourse, and the moments that take over social media. He is an LLB student at the University of London. When he’s not writing, he’s usually gaming, watching anime, or digging through law cases.