Information security
From Ars Technica, 4 days ago
New attack on ChatGPT research agent pilfers secrets from Gmail inboxes
Prompt injections remain largely unpreventable, so LLM providers fall back on reactive mitigations: blocking known exfiltration channels and requiring explicit user consent before an agent can send data outward.
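
The mitigation pattern the summary describes can be illustrated with a minimal sketch (all names and hosts below are hypothetical, not OpenAI's actual implementation): rather than detecting the injection itself, the agent runtime gates every outbound channel that could carry exfiltrated data behind an allowlist plus an explicit user-consent prompt.

```python
# Hypothetical sketch of consent-gated outbound channels in an agent runtime.
# Prompt injection is assumed to be unpreventable; the defense is reactive:
# block unapproved exfiltration channels at the network boundary.

from urllib.parse import urlparse

# Channels the operator has pre-approved (hypothetical example host).
ALLOWED_HOSTS = {"api.example-provider.com"}


def user_consents(url: str) -> bool:
    """Ask the human operator to explicitly approve an outbound request."""
    answer = input(f"Agent wants to fetch {url!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


def guarded_fetch(url: str) -> str:
    """Refuse any outbound request the user has not approved."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS and not user_consents(url):
        raise PermissionError(f"Outbound request to {host!r} denied by user")
    # ... perform the actual HTTP request here (omitted in this sketch) ...
    return f"fetched {url}"
```

The design choice this sketch captures is the one the summary criticizes: because the gate fires only at the moment of exfiltration, it is reactive, and an injected prompt that persuades the user to click "allow" still succeeds.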