Anthropic CEO Says AI Company ‘Cannot In Good Conscience Accede’ To Pentagon

The Artificial Intelligence Company’s Stance
Anthropic CEO Dario Amodei announced that the company “cannot in good conscience accede” to the Pentagon’s demands for wider use of its technology. The maker of the AI chatbot Claude said it is not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
The Pentagon’s Response
The Pentagon’s top spokesman, Sean Parnell, reiterated that the military wants to use Anthropic’s artificial intelligence technology in lawful ways and will not let the company dictate limits ahead of a Friday deadline to agree to its demands. Parnell stated on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic’s Policies
Anthropic’s policies prohibit its models, such as its chatbot Claude, from being used for those purposes. It’s the last of its peers — the Pentagon also has contracts with
