Anthropic admits hackers have 'weaponized' its tools - and cyber experts warn it's a terrifying glimpse into 'how quickly AI is changing the threat landscape'
""Agentic AI has been weaponized," the company said in a . "AI models are now being used to perform sophisticated cyber attacks, not just advise on how to carry them out.""
""It is already speeding up the process of turning proof-of-concepts - often shared for research or testing - into weaponized tools, shrinking the gap between disclosure and attack," he said. "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks," Curran added. "At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats.""
Anthropic reported instances where its AI systems were used to commit cybercrime, including an employment scam by fake North Korean IT workers and large-scale extortion using Claude Code alongside ransomware sold on the dark web. Agentic AI is now being used to perform sophisticated cyber attacks rather than merely advising on them. AI is being applied across hacking operations: locating victims, analyzing stolen data, constructing personas, and generating ransomware. The technology lowers barriers to entry, enabling low-skilled actors and criminal groups to launch major operations, and raises risks of nation-state misuse for disinformation and advanced persistent threats.
Read at IT Pro