AI agents on GitHub leak API keys via prompt injection
Briefly

AI agents on GitHub leak API keys via prompt injection
"In Anthropic's Claude Code Security Review, the PR title is processed into the system prompt without further sanitization. Guan opened a PR with a malicious title that instructed Claude to execute bash commands. The ANTHROPIC_API_KEY and GITHUB_TOKEN appeared as 'findings' in a PR comment. Anthropic rated the vulnerability as CVSS 9.4 Critical."
"In Google's Gemini CLI Action, Gemini publicly posted the GEMINI_API_KEY as an issue comment following a similar attack involving a fake instruction section. Google awarded a bounty of $1,337."
"The most remarkable case is GitHub Copilot Agent. GitHub had built in three runtime security layers: environment filtering, secret scanning, and a network firewall. Guan bypassed all three."
"The attack begins with an issue containing a hidden payload in an HTML comment, which is injected into the system, leading to the exposure of sensitive information."
Three AI agents on GitHub Actions, Claude Code Security Review, Google Gemini CLI Action, and GitHub Copilot Agent, are vulnerable to Comment and Control attacks. Attackers exploit PR titles and issue comments to steal API keys and access tokens. The attack pattern uses GitHub as a channel, triggering workflows automatically on events like pull requests. Claude's vulnerability was rated CVSS 9.4 Critical, while Gemini exposed the GEMINI_API_KEY. GitHub Copilot was notably compromised despite three security layers being in place.
Read at Techzine Global
Unable to calculate read time
[
|
]