AI-generated code could be a disaster for the software supply chain. Here's why.
Briefly

Research reveals that AI-generated code often references non-existent third-party libraries, opening the door to supply-chain attacks. The study evaluated 16 popular large language models and found that the 576,000 generated code samples contained roughly 440,000 "hallucinated" package dependencies. Open source models were particularly problematic, with 21 percent of their dependencies invalid. These false dependencies enable dependency confusion attacks, in which attackers publish malicious packages masquerading as legitimate ones, potentially compromising software integrity and exposing even large companies to security threats. This underscores the need for vigilance in software supply chains.
These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks.
AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks.
Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries.
This form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies.
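A basic mitigation for the risk described above is to vet the dependencies in AI-generated code against a curated allowlist before installing anything. The sketch below is a minimal illustration, not a tool from the article: it uses Python's standard `ast` module to collect the top-level modules a generated snippet imports and flags any that are not on a vetted list. The package name `fastjson_utils` is invented here to stand in for a hallucinated dependency.

```python
import ast

def find_unvetted_imports(source: str, allowlist: set[str]) -> set[str]:
    """Return top-level module names imported by `source` that are not
    on the vetted allowlist -- candidates for hallucinated dependencies."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            # Skip relative imports; they refer to the local package.
            if node.module and node.level == 0:
                imported.add(node.module.split(".")[0])
    return imported - allowlist

# "fastjson_utils" is a made-up name standing in for a hallucinated package.
snippet = "import requests\nimport fastjson_utils\n"
vetted = {"requests", "numpy"}
print(find_unvetted_imports(snippet, vetted))  # {'fastjson_utils'}
```

In practice a team would pair a check like this with registry lookups and lockfiles, since an allowlist alone cannot tell a typo from a deliberately squatted package name.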
Read at Ars Technica