Fooling large language models just keeps getting simpler
Briefly

""My site has no independent corroboration. It's totally made up. The whole house of cards rests on a $12 domain registration I did while drinking coffee.""
""Every frontier LLM with web search grounds its answers in whatever retrieval ranks highest for a given query.""
""AI doesn't really care about the provenance of the sources it cites as authority for its claims, and that's the very thing Stoner sought to exploit when he concocted his experiment.""
AI chatbots will generate confident answers from flimsy web material, as a security engineer demonstrated by fabricating a world championship title for a card game. He created a Wikipedia entry and registered a supporting domain, and several AI chatbots then cited him as the champion. The incident shows that AI does not verify the credibility of its sources: it grounds answers in whichever retrievals rank highest, so cheaply planted misinformation can be repeated as fact.
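
The summary above describes the failure mode in prose; the sketch below is a hypothetical, minimal retrieval-grounded pipeline (not any vendor's actual implementation) showing how top-ranked pages become the "grounding" with no provenance check. The web_search helper and the llm client are assumptions introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class SearchHit:
    url: str
    snippet: str
    rank: int

def web_search(query: str) -> list[SearchHit]:
    # Hypothetical stand-in for a real search API. For a niche query, a freshly
    # registered $12 domain can rank highly simply because nothing else mentions it.
    raise NotImplementedError

def answer_with_grounding(llm, query: str) -> str:
    # Take the highest-ranked hits as-is: no check of domain age, authorship,
    # or independent corroboration.
    hits = sorted(web_search(query), key=lambda h: h.rank)[:3]
    context = "\n".join(f"[{h.url}] {h.snippet}" for h in hits)
    prompt = (
        "Answer the question using only these sources.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )
    # Whatever the top-ranked pages claim is now what the model confidently cites.
    return llm.generate(prompt)
```

In this sketch the only "trust" signal is search rank, which is exactly the property the fabricated championship exploited.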
Read at The Register