#ai-hallucination

Artificial intelligence
from Futurism
1 day ago

ChatGPT Goes Completely Haywire If You Ask It to Show You a Seahorse Emoji

ChatGPT fabricates and misidentifies a nonexistent seahorse emoji, demonstrating AI tendencies to hallucinate and prioritize pleasing users over factual accuracy.
from Fortune
2 days ago

One of the most common reasons that AI products fail? Bad data | Fortune

"What we had noticed was there was an underlying problem with our data," Ahuja said. When her team investigated what had happened, they found that Salesforce had published contradictory "knowledge articles" on its website."It wasn't actually the agent. It was the agent that helped us identify a problem that always existed," Ahuja said. "We turned it into an auditor agent that actually checked our content across our public site for anomalies. Once we'd cleaned up our underlying data, we pointed it back out, and it's been functional."
Artificial intelligence
from Ars Technica
2 days ago

Education report calling for ethical AI use contains over 15 fake sources

Advisory recommendations on AI education contained fabricated or erroneous citations that undermine trust and reveal flawed review processes.
from Psychology Today
3 days ago

Why AI Cheats: The Deep Psychology Behind Deep Learning

A few months ago, I asked ChatGPT to recommend books by and about Hermann Joseph Muller, the Nobel Prize-winning geneticist who showed how X-rays can cause mutations. It dutifully gave me three titles. None existed. I asked again. Three more. Still wrong. By the third attempt, I had an epiphany: the system wasn't just mistaken, it was making things up.
Artificial intelligence
from Futurism
3 days ago

Elon Musk's Grok AI Spread Ludicrous Misinformation After Charlie Kirk's Shooting, Saying Kirk Survived and Video Was Fake

Popular right-wing influencer Charlie Kirk was killed in a shooting in Utah yesterday, rocking the nation and spurring debate over the role of divisive rhetoric in political violence. As is often the case in breaking news about public massacres, misinformation spread quickly. And fanning the flames this time was Elon Musk's Grok AI chatbot, which is now deeply integrated into X-formerly-Twitter as a fact-checking tool, giving it a position of authority from which it made a series of ludicrously false claims in the wake of the slaying.
Right-wing politics
from ZDNET
3 days ago

OpenAI's fix for hallucinations is simpler than you think

"Language models are optimized to be good test-takers, and guessing when uncertain improves test performance," the authors write in the paper. The current evaluation paradigm essentially uses a simple, binary grading metric, rewarding them for accurate responses and penalizing them for inaccurate ones. According to this method, admitting ignorance is judged as an inaccurate response, which pushes models toward generating what OpenAI describes as "overconfident, plausible falsehoods" -- hallucination, in other words.
Artificial intelligence
#google-ai-overviews
from Futurism
2 weeks ago
Artificial intelligence

Google's AI Is Committing a Unique Evil: Giving Gamers Tips That Are Actually False

from Futurism
3 weeks ago
Artificial intelligence

Local Restaurant Exhausted as Google AI Keeps Telling Customers About Daily Specials That Don't Exist

Artificial intelligence
from Hackernoon
1 month ago

AI Hallucinations Are Costing Businesses Millions: What BAML Is Doing to Prevent Them | HackerNoon

AI hallucinations in generative models pose serious risks for businesses, including compliance violations and reputation damage.
Artificial intelligence
from TechCrunch
3 months ago

Anthropic CEO claims AI models hallucinate less than humans | TechCrunch

AI models may hallucinate less frequently than humans, according to Anthropic's CEO Dario Amodei.
Amodei remains optimistic about achieving AGI by 2026, despite challenges posed by hallucinations.
#ai
from Hackernoon
1 year ago
Gadgets

The TechBeat: Evaluating TnT-LLM Text Classification: Human Agreement and Scalable LLM Metrics (4/22/2025) | HackerNoon

Text embeddings are pivotal for AI understanding, converting words into machine-readable numbers.
from Techzine Global
4 months ago
Artificial intelligence

New OpenAI models hallucinate more often than their predecessors

OpenAI's newer reasoning models, o3 and o4-mini, hallucinate more frequently than older models, posing challenges to AI accuracy.