OpenAI says it shares Anthropic's 'red lines' over military AI use
Briefly

"I don't personally think the Pentagon should be threatening DPA against these companies," Altman said. He added that it's important for companies to work with the military as long as the military complies with legal protections "and the few red lines that we share with Anthropic and that other companies also independently agree with."
"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters."
"The Department of Defense has given Anthropic a deadline of 5:01 p.m. ET today to drop restrictions on its AI model, Claude, from being used for domestic mass surveillance or entirely autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used for all lawful purposes."
The Department of Defense has issued Anthropic a deadline to remove restrictions on its Claude AI model regarding domestic mass surveillance and autonomous weapons use, threatening to cancel a $200 million contract and invoke the Defense Production Act if the company does not comply. The Pentagon claims it has no intention of using AI for these purposes but requires unrestricted access for all lawful applications. OpenAI CEO Sam Altman publicly sided with Anthropic, saying the Pentagon should not threaten the Defense Production Act and supporting companies that maintain safety-focused red lines. Altman emphasized the importance of collaboration between the military and AI companies within legal protections, while praising Anthropic's commitment to safety and its support for warfighters.
Read at www.npr.org