
"Altman said the government is willing to let OpenAI build their own "safety stack"-that is, the layered system of technical, policy, and human controls that sit between a powerful AI model and real-world use-and that if the model refuses to do a task, then the government would not force OpenAI to make it do that task."
"OpenAI would retain control over how technical safeguards are implemented, which models are deployed and where, and would limit deployment to cloud environments rather than "edge systems." In what would be a major concession, Altman told employees that the government said it is willing to include OpenAI's named "red lines" in the contract, including not using AI to power autonomous weapons, no domestic mass surveillance and no critical decision-making."
"The meeting came at the end of a week where a conflict between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent end of Anthropic's contracts with the Pentagon and with the federal government in general."
Sam Altman announced to OpenAI employees that a potential contract with the U.S. Department of War is taking shape for AI model deployment. The agreement would allow OpenAI to maintain its own safety controls and refuse tasks when necessary, without government override. OpenAI would retain authority over technical safeguards, model selection, and deployment locations, limiting use to cloud environments rather than military edge systems such as aircraft and drones. The contract would include OpenAI's red lines prohibiting autonomous weapons, domestic mass surveillance, and critical decision-making automation. This development follows a public conflict between the Department of War and Anthropic that resulted in the apparent termination of Anthropic's Pentagon contracts.
#ai-governance #military-ai-deployment #safety-controls #government-contracts #autonomous-weapons-restrictions
Read at Fortune