#data-poisoning

[ follow ]
#backdoor-attacks
fromFuturism
2 weeks ago
Artificial intelligence

Researchers Find It's Shockingly Easy to Cause AI to Lose Its Mind by Posting Poisoned Documents Online

fromFuturism
2 weeks ago
Artificial intelligence

Researchers Find It's Shockingly Easy to Cause AI to Lose Its Mind by Posting Poisoned Documents Online

Artificial intelligence
fromFortune
2 weeks ago

A handful of bad data can 'poison' even the largest AI models, researchers warn | Fortune

Just 250 malicious documents can create backdoor vulnerabilities in large language models regardless of model size.
fromTheregister
2 weeks ago

Data quantity doesn't matter when poisoning an LLM

Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data.
Artificial intelligence
fromFast Company
2 months ago

Why AI is vulnerable to data poisoning-and how to stop it

Attackers can intentionally feed misleading data into a system, causing AI to learn incorrect patterns. This can lead to dangerous consequences for operations and public safety.
Privacy professionals
[ Load more ]