Google stopped a zero-day hack that it says was developed with AI

"Google’s researchers found hints in the Python script used for the exploit that indicated help from AI, like a "hallucinated CVSS score" and "structured, textbook" formatting consistent with LLM training data. The exploit takes advantage of "a high-level semantic logic flaw where the developer hardcoded a trust assumption" in the platform's 2FA system."
"According to a report from Google Threat Intelligence Group (GTIG), "prominent cyber crime threat actors" were planning to use the vulnerability for a "mass exploitation event" that would have allowed them to bypass two-factor authentication on an unnamed "open-source, web-based system administration tool.""
"Google says it was able to "disrupt" this particular exploit, but also says hackers are increasingly using AI to find and take advantage of security vulnerabilities. The report also mentions AI as a target for attackers, saying "GTIG has observed adversaries increasingly target the integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors.""
"Google's report also details how hackers are using "persona-driven jailbreaking" to get AI to find security vulnerabilities for them, like an example prompt that instructs the AI to pretend it's a security expert. Hackers are also feeding AI models whole repositories of vulnerability data, and using OpenClaw in ways that suggest "an interest in refining AI-g""
Google Threat Intelligence Group reported stopping a zero-day exploit developed with AI assistance. The planned attack involved a mass exploitation event to bypass two-factor authentication on an unnamed open-source, web-based system administration tool. Researchers found indicators in a Python exploit script, including a hallucinated CVSS score and structured formatting consistent with language-model training. The exploit relied on a high-level semantic logic flaw where the developer hardcoded a trust assumption in the platform’s 2FA system. Google said it disrupted the exploit and noted that attackers increasingly use AI to discover and exploit vulnerabilities. The report also described adversaries targeting AI-integrated components and using persona-driven jailbreaking prompts to induce AI to search for security weaknesses.
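The report describes the flaw only at that high level, so as a rough illustration of the vulnerability class it names, here is a minimal, hypothetical Python sketch of a 2FA check undone by a hardcoded trust assumption. The header name, function names, and logic are invented for this example and are not taken from the affected tool or from Google's report.

    # Hypothetical sketch of a hardcoded trust assumption in a 2FA flow.
    # Nothing here is from the affected tool; names and logic are invented
    # to illustrate the vulnerability class described in the report.

    TRUSTED_PROXY_HEADER = "X-Internal-Request"  # invented header name

    def verify_totp(user: str, code: str) -> bool:
        """Stand-in for a real TOTP check; always fails in this sketch."""
        return False

    def login(user: str, password_ok: bool, totp_code: str, headers: dict) -> bool:
        if not password_ok:
            return False
        # The flaw: the developer hardcodes trust in a header that any client
        # can set, so the second factor is skipped for "internal" requests.
        if headers.get(TRUSTED_PROXY_HEADER) == "1":
            return True  # 2FA bypassed
        return verify_totp(user, totp_code)

    # An attacker with valid credentials bypasses 2FA by adding the header:
    print(login("admin", True, "000000", {"X-Internal-Request": "1"}))  # prints True

The point of the sketch is that no cryptographic weakness is involved; the bypass hinges on a logic decision the code trusts unconditionally, which is what the report's phrase "high-level semantic logic flaw" refers to.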
Read at The Verge