
"For years, Anthropic has distinguished itself from peers by embracing a safety-first stance. Its flagship model, Claude, was designed with guardrails that explicitly prohibit use in fully autonomous lethal weapons or domestic surveillance. Those restrictions have been central to the company's identity and its appeal to customers wary of unfettered AI."
"Defense Secretary Pete Hegseth has given Anthropic until Friday, 27th of February, to drop those limits for military users, arguing that the Department must have "unrestricted access to AI for all lawful purposes." Officials stress they are not seeking unlawful use, but in military operations, "lawful" is a broad canvas."
"The Pentagon has threatened to designate Anthropic a "supply chain risk," a step normally reserved for foreign adversaries whose technologies are seen as security threats. Such a label would effectively ban Anthropic tools from use across a broad swath of defense contractors and could isolate the company economically and strategically."
A conflict between the U.S. Department of Defense and Anthropic has escalated over AI governance and military use restrictions. Anthropic's Claude model includes safety guardrails prohibiting use in autonomous lethal weapons and domestic surveillance, distinguishing the company through its safety-first approach. Defense Secretary Pete Hegseth has issued an ultimatum requiring Anthropic to remove these restrictions for military users, claiming the Pentagon needs unrestricted AI access for lawful purposes. Anthropic CEO Dario Amodei refuses to comply, stating the company cannot remove safety protections in good conscience. The Pentagon threatens to designate Anthropic a supply chain risk, potentially banning its tools from defense contractors and costing the company up to $200 million in contracts.
#ai-governance #military-ai-policy #ai-safety-guardrails #government-tech-relations #defense-contracting
Read at TNW | Opinion