Washington DC [US], February 27 (ANI): The US Department of War (DOW) sharply criticised Anthropic CEO Dario Amodei after the company declined to remove certain safeguards governing military use of its AI systems.
US Under Secretary of War Emil Michael accused Amodei of trying to control military decision-making and called him a “liar” who has a “God complex.” “It’s a shame that Dario Amodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company,” Michael wrote on X.
Chief Pentagon spokesman Sean Parnell rejected suggestions that the Department sought to use AI for unlawful purposes. “The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media,” Parnell wrote on X.
Parnell said the Pentagon’s request was limited to allowing it to use Anthropic’s AI model for “all lawful purposes,” calling that a “simple, common-sense request” to avoid disruptions to military operations. “We will not let ANY company dictate the terms regarding how we make operational decisions. They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW,” he added.
Anthropic said on Thursday it would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems. The company said the DOW had pressured it to agree to “any lawful use” of its technology and to remove specific safeguards, but Anthropic would not change its stance.
“The Department of War has stated they will only contract with AI companies who accede to ‘any lawful use’ and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’ – a label reserved for US adversaries, never before applied to an American company – and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” Anthropic’s statement said.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” the company added, saying it remains ready to support US national security efforts while keeping what it described as necessary guardrails on AI deployment. (ANI)
(This content is sourced from a syndicated feed and is published as received. The Tribune assumes no responsibility or liability for its accuracy, completeness, or content.)
