Who Actually Owns AI Governance? | AdExchanger
AI governance is increasingly falling on privacy professionals, complicating existing governance structures and responsibilities.
The AI X Leadership Summit is designed for executives and technical leaders navigating the complex realities of AI adoption, governance, workforce transformation, and innovation.
The Anthropic Institute exists to understand and shape the consequences of powerful AI systems. We focus on the urgent questions that will determine whether these systems deliver the radical upsides we believe are possible in science, security, economic development, and human agency, or whether they instead pose a range of unprecedented new risks to humanity.
Every enterprise is becoming an agent operator, whether it planned to or not. Agents have access to the most critical systems, yet there is no guarantee they will not make serious mistakes or be compromised.
For years, Anthropic has distinguished itself from peers by embracing a safety-first stance. Its flagship model, Claude, was designed with guardrails that explicitly prohibit use in fully autonomous lethal weapons or domestic surveillance. Those restrictions have been central to the company's identity and its appeal to customers wary of unfettered AI.
The dispute began when Anthropic CEO Dario Amodei voiced concern over a clause that allowed the military to use Anthropic's AI for any lawful purpose. Amodei asserted that the company would not permit its technology to be used for domestic mass surveillance or autonomous weaponry, and he wanted the contract to prohibit those uses more explicitly.
Pentagon officials said Claude has been used to help process large volumes of information, turning dense reports and documents into summaries that analysts can review more quickly. The system has also supported planning and coordination tasks by helping teams organize logistics data and identify supply chain gaps across military operations.