Pentagon and Anthropic at an impasse over AI safeguards
The Pentagon has reportedly clashed with AI company Anthropic over how the government may use its technology. Sources say Anthropic wants stronger safeguards to prevent its AI from being used for autonomous weapons targeting or for domestic surveillance inside the United States.
These talks have turned into an early test of how much influence major tech companies can have over the military’s use of advanced AI, especially as Silicon Valley and Washington rebuild ties after years of tension.
$200 million contract talks stall
After lengthy negotiations over a contract worth up to $200 million, the Defense Department and Anthropic have reached a standstill, according to people familiar with the matter.
The disagreement reportedly stems from Anthropic's push for tighter limits on how its tools are deployed. Pentagon officials, meanwhile, argue they should be able to use commercial AI in any way U.S. law allows, even when a company's own policies are more restrictive.
Anthropic responded by saying its AI already supports national security missions and that it remains in talks with the Pentagon about continuing that work.
Main concerns: weapons targeting and surveillance risks
During the talks, Anthropic representatives reportedly warned that the government could use its AI in ways that raise serious ethical and legal concerns, including:
- using AI to help target weapons without strong human oversight
- using AI tools for domestic surveillance of Americans
Because of these risks, Anthropic wants protections that keep humans fully responsible for high-risk decisions.
Why the Pentagon still needs Anthropic
Even though Pentagon officials want more flexibility, they may still need Anthropic’s support. Sources say Anthropic staff would likely need to modify or retrain parts of the AI models for military use.
In addition, Anthropic designs its models to avoid harmful behavior, which means the Pentagon cannot easily override those built-in safeguards without the company's cooperation.
Anthropic’s broader position on defense AI
The dispute also comes at a sensitive time for Anthropic. The company is preparing for an eventual public offering while investing heavily in relationships across the national security space.
Anthropic also wants a role in shaping AI policy, but its cautious approach has previously created friction with the Trump administration.
Anthropic CEO Dario Amodei recently wrote that AI should support national defense, but not in ways that would make the U.S. behave like its authoritarian rivals. His comments reflect growing unease in parts of Silicon Valley over government uses of AI that could enable violence or other harms.
