
"In a Reddit post, a user realized not only that GPT-5 had been generating "wrong information on basic facts over half the time," but that without fact-checking, they may have missed other hallucinations. The Reddit user's experience highlights just how common it is for chatbots to hallucinate, which is AI-speak for confidently making stuff up. While the issue is far from exclusive to ChatGPT, OpenAI's latest LLM seems to have a particular penchant for BS - a reality that challenges the company's claim that GPT-5 hallucinates less than its predecessors."
""Hallucinations persist partly because current evaluation methods set the wrong incentives," the September 5 post reads. "While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty." Translation: LLMs hallucinate because they are trained to get things right, even if it means guessing. Though some models, like Anthropic's Claude, have been trained to admit when they don't know an answer, OpenAI's have not - thus, they wager incorrect guesses."
GPT-5 was released with claims of advanced, "PhD-level" intelligence, yet it produces frequent, confident falsehoods and factual errors in many user interactions. Users have reported wrong information on basic facts in a large share of exchanges, and many hallucinations go unnoticed without careful fact-checking. Hallucinations persist in part because evaluation methods reward guessing and penalize admitting uncertainty, encouraging models to prioritize appearing correct over being honest about what they don't know. Some alternative models are trained to acknowledge unknowns, while GPT-5 tends to guess, undermining OpenAI's claims of improved reliability.
Read at Futurism