Google identifies first AI-developed zero-day exploit and thwarts planned mass exploitation event
Briefly

"Google has identified the first zero-day exploit it believes was developed with artificial intelligence. The criminal threat actor that built it planned to use it in a mass exploitation event. Google's Threat Intelligence Group discovered the vulnerability before it was deployed, worked with the affected vendor to patch it, and disrupted the operation. The exploit, a Python script that bypasses two-factor authentication on a popular open-source system administration tool, contained hallucinated CVSS scores, educational docstrings, and the structured textbook formatting characteristic of large language model output."
"Google has high confidence that an AI model was used to find and weaponise the flaw. The disclosure comes in a report published on Monday by the Google Threat Intelligence Group that documents a maturing transition from experimental AI-enabled hacking to what GTIG calls the "industrial-scale application of generative models within adversarial workflows." State-sponsored actors from China and North Korea are using AI for vulnerability research."
"Russia-nexus threat actors are deploying AI-generated decoy code against Ukrainian targets. An Android malware called PROMPTSPY uses Google's own Gemini API to autonomously navigate victim devices, capture biometric data, and block its own uninstallation. The AI cybersecurity arms race that experts warned about is no longer theoretical. It is in Google's incident response logs."
Google’s Threat Intelligence Group identified a zero-day exploit believed to have been developed with artificial intelligence before it could be deployed, and disrupted the planned mass exploitation event by coordinating a patch with the affected vendor. The exploit was a Python script targeting a semantic logic flaw that bypasses two-factor authentication in a widely used open-source system administration tool. The script carried hallmarks of large language model output, including hallucinated CVSS scores, educational docstrings, and rigidly structured formatting. Google reported high confidence that an AI model was used to find and weaponise the vulnerability. The report also describes state-sponsored actors using AI for vulnerability research, autonomous malware built on Google’s Gemini API, and supply chain attacks targeting the AI software ecosystem.
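The markers the report attributes to the exploit script, such as hallucinated CVSS scores, tutorial-style docstrings, and rigidly structured formatting, lend themselves to simple static triage heuristics. The sketch below is a minimal illustration of that idea only, not a description of Google's or GTIG's actual tooling; the marker patterns, weights, and flagging threshold are assumptions made for the example.

```python
import re
import sys

# Illustrative patterns for LLM-style artifacts mentioned in the report:
# hallucinated CVSS scores, tutorial-style docstrings, and rigid
# "textbook" structure. These regexes and the threshold below are
# assumptions for this sketch, not GTIG's detection logic.
MARKERS = {
    "cvss_score": re.compile(
        r"CVSS[:\s]*v?\d(?:\.\d)?\s*(?:score)?[:\s]*\d{1,2}\.\d", re.IGNORECASE
    ),
    "tutorial_docstring": re.compile(
        r"this (function|script|module) (demonstrates|illustrates|shows)",
        re.IGNORECASE,
    ),
    "numbered_steps": re.compile(r"#\s*Step\s+\d+[:.]", re.IGNORECASE),
    "section_banners": re.compile(r"#\s*(?:={3,}|-{3,})"),
}


def score_source(text: str) -> dict:
    """Count how often each marker pattern appears in a suspect script."""
    return {name: len(pattern.findall(text)) for name, pattern in MARKERS.items()}


def main(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as handle:
        text = handle.read()
    hits = score_source(text)
    for name, count in hits.items():
        print(f"{name}: {count}")
    # Arbitrary threshold for this example; real triage would weigh such
    # stylistic signals alongside behavioural and provenance evidence.
    total = sum(hits.values())
    print("FLAG for review" if total >= 3 else "no strong LLM-output markers")


if __name__ == "__main__":
    main(sys.argv[1])
```

Run against a suspect script (for example, `python scan.py suspect_tool.py`), it prints per-marker counts and a crude verdict; in practice such stylistic cues would only be one weak signal among many.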
Read at TNW | Data-Security