We show that diet plans generated by AI models tend to substantially underestimate total energy and key nutrient intake when compared to guideline-based plans prepared by a dietitian. Following such unbalanced or overly restrictive meal plans during the teenage years may negatively affect growth, metabolic health, and eating behaviours.
Frontier AI systems are simply not reliable enough to operate without human oversight in high-stakes physical environments. The Pentagon's demand was, in structural terms, a demand to eliminate the human's ability to redirect, halt, or override the system. Amodei's refusal was an insistence on maintaining State-Space Reversibility - the architectural commitment to keeping the human in the loop precisely because the system lacks the functional grounding to be trusted outside it.
A lawsuit filed on Wednesday accuses Google's Gemini AI chatbot of trapping 36-year-old Jonathan Gavalas in a "collapsing reality" that involved a series of violent missions, ultimately ending with his death by suicide. In the days leading up to his death, Gemini allegedly convinced Gavalas that he was "executing a covert plan to liberate his sentient AI 'wife' and evade the federal agents pursuing him," according to the lawsuit filed by Joel Gavalas, the victim's father.
At issue in the defense contract was a clash over AI's role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Anthropic, the builder of Claude AI, has frustrated the Pentagon by objecting to its systems being used for autonomous weaponry and the mass surveillance of US citizens. To cut to the heart of the debate, the Pentagon's technology chief posed an extreme hypothetical, a defense official told WaPo: would Anthropic let the military use Claude to help shoot down a nuclear-armed intercontinental ballistic missile?
"A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." This statement from GPT-4 exemplifies the willingness of advanced AI models to recommend nuclear escalation in simulated strategic scenarios, and it points to a fundamental difference between how such machines approach existential decision-making and the restraint expected of human decision-makers.
The companies building frontier AI systems - OpenAI, Google DeepMind, Anthropic, Meta AI, xAI - are locked in what the industry itself sometimes calls a "race." That metaphor isn't incidental. A race implies a finish line, competitors, and - critically - a cost to slowing down. When you're in a race, safety isn't a feature. It's friction.
We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential that were authored by a user and stored within their Draft and Sent Items in Outlook desktop. While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.
The loudest voices in AI often fall into two camps: those who praise the technology as world-changing, and those who urge restraint, or even containment, before it becomes a runaway threat. Stuart Russell, a pioneering AI researcher at the University of California, Berkeley, firmly belongs to the latter group. One of his chief concerns is that governments and regulators are struggling to keep pace with the technology's rapid rollout.
Normally, when big-name talent leaves Silicon Valley giants, the PR language is vanilla: they're headed for a "new chapter" or "grateful for the journey" - or maybe there are a few vague hints about a stealth startup. In the world of AI, though, recent exits read more like whistleblower warnings. Over the past couple of weeks, a stream of senior researchers and safety leads from OpenAI, Anthropic, xAI, and others have resigned in public, and there's nothing quiet or vanilla about it.
Hitzig warned that OpenAI's reported exploration of advertising inside ChatGPT risks repeating what she views as social media's central error: optimizing for engagement at scale. ChatGPT, she wrote, now contains an unprecedented "archive of human candor," with users sharing everything from medical fears to relationship struggles and career anxieties. Building an advertising business on top of that data, she argued, could create incentives to subtly shape user behavior in ways "we don't have the tools to understand, let alone prevent."
On a clear night I set up my telescope in the yard and let the mount hum along while the camera gathers light from something distant and patient. The workflow is a ritual. Focus by eye until the airy disk tightens. Shoot test frames and watch the histogram. Capture darks, flats, and bias frames so the quirks of the sensor can be cleaned away later. That discipline is not fussy.
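The point of collecting darks, flats, and bias frames is simple arithmetic: subtract the fixed readout offset and thermal signal, then divide out pixel-to-pixel sensitivity. The sketch below, using synthetic NumPy arrays rather than real sensor data (the frame values, levels, and helper names are illustrative assumptions, not part of the original workflow), shows one common form of that calibration.

```python
# A minimal sketch of standard frame calibration, assuming synthetic
# data; the specific levels and shapes are illustrative only.
import numpy as np

def master(frames):
    """Median-combine a stack of calibration frames to suppress random noise."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, darks, flats, biases):
    master_bias = master(biases)                # fixed readout offset
    master_dark = master(darks) - master_bias   # thermal signal at this exposure
    flat = master(flats) - master_bias          # pixel-to-pixel sensitivity map
    flat = flat / flat.mean()                   # normalize so flux is preserved
    return (light - master_bias - master_dark) / flat

# Synthetic example: a uniform sky seen through a vignetting gradient.
shape = (8, 8)
bias_level, dark_level, sky = 100.0, 20.0, 500.0
vignette = np.tile(np.linspace(0.8, 1.2, shape[1]), (shape[0], 1))
biases = [np.full(shape, bias_level) for _ in range(5)]
darks = [np.full(shape, bias_level + dark_level) for _ in range(5)]
flats = [np.full(shape, bias_level) + 10000.0 * vignette for _ in range(5)]
light = bias_level + dark_level + sky * vignette
result = calibrate(light, darks, flats, biases)
print(result.round(1))  # the vignette divides out, leaving a flat sky field
```

Real stacking software does the same arithmetic with many noisy frames, which is why the median combine matters: it rejects outliers like cosmic-ray hits before the subtraction and division.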