The Trump administration on Friday ordered all US agencies to cease using Anthropic’s artificial intelligence and imposed additional penalties, escalating a public clash over AI safety and government access to AI tools.
President Donald Trump, Defense Secretary Pete Hegseth and other officials criticized Anthropic after CEO Dario Amodei refused to give the military unrestricted use of the company’s AI by a Friday deadline. Officials accused Anthropic of endangering national security; Amodei said the company could not abandon safeguards that prevent misuse. “We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote on social media.
Hegseth labeled Anthropic a “supply chain risk,” a designation usually applied to foreign adversaries and one that could harm the company’s business ties. Anthropic said such a designation would be unprecedented and legally questionable, and warned it would set a dangerous precedent for US firms negotiating with the government.
Anthropic sought narrow assurances that its chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it would use the technology only lawfully and not for such purposes but insisted on unrestricted access. The dispute underscores broader tensions about AI’s role in national security, particularly where lethal force, sensitive data or surveillance are involved.
Trump ordered most agencies to immediately stop using Anthropic’s AI while giving the Pentagon six months to phase it out where it is already embedded. He also warned of potential civil and criminal consequences if the company did not cooperate during the transition. Administration officials, meanwhile, said the Pentagon must have full, unrestricted access to Anthropic’s models for lawful defense purposes.
The public denunciations and the supply-chain designation prompted concerns from lawmakers and national security officials about whether the decisions were driven by careful analysis or political motives. Virginia Sen. Mark Warner warned that the rhetoric and actions raised serious questions about the basis for national security decisions.
The dispute rattled Silicon Valley. Many AI researchers, venture capitalists and employees from firms including OpenAI and Google voiced support for Amodei, arguing his conditions were reasonable. The move could benefit competing chatbots such as Elon Musk’s Grok, which the Pentagon plans to grant access to classified networks. Musk publicly sided with the administration, while OpenAI CEO Sam Altman — a former colleague of Amodei’s — defended Anthropic’s stance and said OpenAI shared similar red lines.
Retired Air Force Gen. Jack Shanahan, a former Pentagon AI lead, warned that targeting Anthropic could be harmful and noted Claude is already widely used across government, including in classified settings. He described Anthropic’s red lines as reasonable and cautioned that large language models are not yet ready for many national security applications, particularly for autonomous weapons.
