Image by RyanDonegan, CC BY 2.0

The Pentagon called Anthropic a supply chain risk and banned it, but every other agency still wants its AI. Here's why

The White House and Anthropic are in active discussions about deploying the AI firm’s new model, Mythos Preview, across the federal government. As detailed by Axios, this comes despite the Pentagon having labeled Anthropic a “supply chain risk” and ordered its removal from military workflows. Anthropic is currently in litigation with the Pentagon over the designation, and while the company is barred from new Pentagon contracts, the rest of the federal government remains free to do business with it.


Mythos Preview is being rolled out to only a select group of companies and organizations rather than the general public, primarily so they can assess its advanced cyber capabilities and develop defenses against the kinds of threats it can expose. Agencies including the Departments of Energy and Treasury are among those pushing for access, citing the need to identify vulnerabilities in critical infrastructure such as the national electric grid and the financial system. The Office of Management and Budget has sent out an email, first reported by Bloomberg, confirming it is actively reviewing agencies’ ability to use Mythos.

The divide within the administration is stark. One official summarized it directly: “There’s progress with the White House. There’s not progress with [the Department of] War.” The same official noted that all intelligence agencies already use Anthropic, and that every agency except the Department of War wants to keep doing so, with the friction rooted in Anthropic’s restrictions on how its technology can be deployed.

The Pentagon wants no limits, and Anthropic won’t budge

Anthropic’s position, laid out in a February 26, 2026 statement by CEO Dario Amodei, is that the company will not allow its models to be used for mass domestic surveillance or fully autonomous weapons. Amodei noted that Anthropic has proactively deployed its models to the Department of War and the intelligence community, and was the first frontier AI company to deploy models in the US government’s classified networks and at the National Laboratories. Its Claude model is already used across national security agencies for intelligence analysis, operational planning, and cyber operations.

On the question of mass domestic surveillance, Anthropic’s position is that current laws have not kept pace with AI’s ability to automatically assemble scattered data into comprehensive profiles of individuals at scale, posing risks to democratic freedoms. On fully autonomous weapons, Amodei stated that frontier AI systems are not yet reliable enough to power them responsibly, and that Anthropic has offered to collaborate on R&D to improve that reliability, an offer that has not been accepted. Amid the broader debate over government use of AI, Anthropic has also passed up several hundred million dollars in revenue by cutting off Claude access for firms linked to the Chinese Communist Party and shutting down CCP-sponsored cyberattacks that attempted to abuse the system.

The Pentagon has pushed back, arguing that Anthropic’s definitions are “nebulous” and that it needs assurances it can use AI systems for all lawful purposes. The Department of War has threatened to remove Anthropic from its systems if the company maintains its safeguards, and to invoke the Defense Production Act to force their removal. Amodei called out the contradiction directly, noting that the department labels Anthropic a security risk even as it treats Claude as essential to national security.

A second administration official took a more skeptical view of Anthropic’s warnings about Mythos, accusing the company of using “fear tactics” to gain access to friendly ears across government agencies. That skepticism has not slowed the momentum. Despite the Pentagon ban, some Trump administration officials who initially viewed Anthropic’s leadership unfavorably have acknowledged the company’s tools as best-in-class for national security, with one Defense official stating the only reason talks were continuing was because “these guys are that good.” The split reflects a wider pattern of internal friction between executive agencies on consequential policy decisions. Sources indicate agencies could gain access to Mythos within weeks.


Author
Saqib Soomro
Politics & Culture Writer
Saqib Soomro is a writer covering politics, entertainment, and internet culture. He spends most of his time following trending stories, online discourse, and the moments that take over social media. He is an LLB student at the University of London. When he’s not writing, he’s usually gaming, watching anime, or digging through law cases.