
"The Department of Defense has sought to change the terms of a deal struck with Anthropic and other AI companies last July to eliminate restrictions on how AI can be deployed and instead permit "all lawful use" of the technology. Anthropic objected to the change, claiming that it could allow AI to be used to fully control lethal autonomous weapons or to conduct mass surveillance on US citizens."
"Anthropic was the first major AI lab to work with the US military, through a $200 million deal signed with the Pentagon last year. It created several custom models known as Claude Gov that have fewer restrictions than its regular ones. Google, OpenAI, and xAI signed similar deals around the same time but Anthropic is the only AI company currently working with classified systems."
"The Pentagon does not currently use AI in these ways, and has said it has no plans to do so. However, top Trump administration officials have voiced opposition to the idea of a civilian tech company dictating military use of such an important technology."
President Trump announced a directive for all federal agencies to cease using Anthropic's AI tools, escalating tensions between the company and the Department of Defense. The conflict stems from disagreements over military AI deployment terms. The Pentagon sought to modify a July agreement with Anthropic and other AI companies to permit "all lawful use" of AI technology, removing previous restrictions. Anthropic objected, expressing concerns that unrestricted use could enable fully autonomous lethal weapons or mass surveillance on US citizens. The Pentagon denies current or planned use of AI in these ways. Anthropic maintains a $200 million military contract and operates Claude Gov models with fewer restrictions than standard versions, currently used for report writing, document summarization, intelligence analysis, and military planning.
Read at WIRED