
Sam Altman’s bizarre claims about Pentagon control leave OpenAI employees stunned as the company faces a brutal backlash for its latest deal

Was the sellout worth it?

Sam Altman, the CEO of OpenAI, recently told his employees that the company doesn’t get to control how the Pentagon uses its artificial intelligence products in military operations, as reported by The Guardian. The news comes amid intense scrutiny of how the military is deploying AI in warfare, and serious ethical concerns from AI workers about how their tech will be used.


Altman’s claims about OpenAI’s lack of input are a big deal, especially after the company announced a new deal with the Pentagon. “You do not get to make operational decisions,” Altman reportedly told his team. He even gave an example, saying, “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that.”

The AI industry has been buzzing with heated discussions lately, particularly as the Pentagon has been pushing AI companies to remove safety guardrails from their models. This would allow for a much broader range of military applications, which, as you can imagine, raises a lot of eyebrows. We’ve even heard reports that AI-enabled systems have already been used by the US military in its operation to seize Venezuelan leader Nicolás Maduro and in making targeting decisions in its war against Iran.

Anthropic, a major OpenAI rival and the maker of the Claude chatbot, turned down a deal with the Pentagon last week

Anthropic was worried its model could be used for domestic mass surveillance or even fully autonomous weapons. Defense Secretary Pete Hegseth wasn’t too happy about Anthropic’s decision, though. He declared the company a “supply-chain risk,” a designation that has never before been applied to a US company.

Then, on the very same day that Hegseth made his threat against Anthropic, the Pentagon announced its new deal with OpenAI. It felt like a direct replacement for Anthropic’s Claude in military applications. The timing of this deal, combined with worries that OpenAI might have crossed some ethical lines that Anthropic refused to, sparked a major backlash. OpenAI faced criticism from both the public and its own employees.

Altman and OpenAI have since been trying to do some damage control, insisting that their technology will only be used legally. Altman even admitted that the deal felt a bit rushed, saying it made the company look “opportunistic and sloppy.” I can see why he’d say that; it definitely raised some questions.

Dario Amodei, Anthropic’s CEO, really went after Altman in a memo to his own employees. He reportedly called Altman “mendacious” and accused him of giving “dictator-style praise to President Trump.” Amodei wrote, “We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees.”

He went on to suggest that the Pentagon and the Trump administration dislike Anthropic because it hasn’t donated to President Trump, unlike OpenAI’s president, Greg Brockman, who, along with his wife, gave $25 million to a PAC supporting Trump.
