The Pentagon Just Declared War on an AI Company for Refusing to Build Surveillance Tools

In a move that has shocked the tech and defense worlds, the Trump administration has officially labeled AI company Anthropic a “supply chain risk,” effectively cutting off one of the fastest-growing AI companies from government contracts. The Pentagon announced the designation Thursday, following weeks of escalating tensions between the military and Anthropic CEO Dario Amodei over the company’s refusal to remove safeguards against mass surveillance and autonomous weapons.
Here’s what went down: Last Friday, Trump and Defense Secretary Pete Hegseth threatened Anthropic with serious consequences after Amodei refused to back down on ethical concerns about how the military might use the company’s Claude AI chatbot. Amodei said the company worried Claude could be weaponized for mass surveillance of Americans or for autonomous weapons systems. The Pentagon wanted complete freedom to use the technology however it saw fit, no questions asked.
Amodei isn’t backing down either. He said Thursday that Anthropic “does not believe this action is legally sound” and plans to challenge it in court. The “supply chain risk” designation is a tool originally designed to protect the U.S. from infiltration by foreign adversaries like China and Russia; deploying it against a domestic American company is raising serious red flags.
Former CIA director Michael Hayden and retired military leaders from across all branches signed a letter expressing “serious concern” about the move, calling it “a profound departure from its intended purpose”. Even Neil Chilson, a Republican former chief technologist for the Federal Trade Commission, called it “massive overreach”. Senator Kirsten Gillibrand, a New York Democrat on the Senate Armed Services Committee, called the decision “reckless” and “a gift to our adversaries”.
So what does this actually mean? Well, military contractors like Lockheed Martin have already said they’re ditching Claude and looking for alternatives. Trump gave the military six months to phase out Claude, which is already embedded throughout military and national security platforms. How strictly the designation will be enforced remains unclear; Microsoft, for one, said its lawyers determined the company can continue working with Anthropic on non-defense projects.
The irony? While Anthropic loses major government contracts, the company has seen a massive surge in consumer support. Over a million people signed up for Claude each day this week, pushing it past OpenAI’s ChatGPT and Google’s Gemini to become the top AI app in more than 20 countries on Apple’s App Store. People are voting with their downloads for an AI company that refused to compromise on ethical principles.
This clash also highlights the growing rivalry between Anthropic and OpenAI, which quickly announced a deal to replace Claude in classified military environments. But OpenAI CEO Sam Altman later admitted the deal “looked opportunistic and sloppy,” and the company had to amend its agreements anyway, suggesting that rushing to please the Pentagon at the expense of ethical concerns carries its own costs.
AUTHOR: mb
SOURCE: AP News