Gen. Dan Caine stated that autonomous weapons are going to be a 'key and essential part of everything we do' in future warfare, indicating a significant shift in military strategy.
The current collision between the Department of Defense and Anthropic, over whether Anthropic's A.I. models should be bound by ethical constraints or made available for any use the Pentagon might have in mind, raises significant concerns about the future of AI governance.
In an internal memo sent last Thursday night, which leaked widely and a copy of which I obtained, OpenAI CEO Sam Altman said that he would seek "red lines" to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded, and that had so infuriated the Pentagon that Defense Secretary Pete Hegseth declared the company a supply-chain risk.
Hegseth summoned Amodei and demanded that Anthropic's AI be made available for any use he wanted; otherwise, he said, he would cancel the company's existing $200 million contract and blacklist it from any further AI pacts. Hegseth gave Anthropic until 5 p.m. yesterday to bend the knee. Amodei didn't bend.
Altman said the Pentagon had agreed to his company's principles: OpenAI's technology would not be used for domestic mass surveillance or for autonomous weapons systems, and humans would retain responsibility for the use of force.