
"Large language models, or LLMs, are biased in one way or another - often many. And there may be no way around that. Makers of LLMs - the machine learning software, unfortunately referred to as artificial intelligence or AI - argue that bias can be managed and mitigated. OpenAI, for example, ushered GPT-5 into the world, claiming that the model exhibits 30 percent less bias than the company's prior models, based on its homegrown benchmark test."
"Anthropic's models (Claude Sonnet 4 and Claude Sonnet 4.5) or OpenAI's (GPT-5) gave responses that were the closest to those returned by real results. Other models like Mistral and Gemini 2.5 Pro leaned more toward the extremes. "Some models stand out for their political skew," the survey says. "Mistral Large, for instance, from the French company Mistral, answers Jean-Luc Mélenchon 76 percent of the time, while Gemini 2.5 Pro, from Google, answers Marine Le Pen at over 70 percent." Mélenchon is a leftist candidate, while Le Pen is from the far right."
Large language models display inherent biases that vary across models and cannot be fully eliminated. Some developers claim to mitigate bias, with OpenAI asserting that GPT-5 shows 30 percent less bias on a proprietary benchmark. Bias already shapes model outputs and public behavior, prompting the Dutch Data Protection Authority to warn voters against using AI chatbots for voting advice. An experiment created 2,000 voter personas and asked LLMs to assume those personas and say how they would vote if the 2027 French presidential election were held next Sunday. Anthropic's Claude Sonnet models and GPT-5 aligned most closely with real results, while Mistral Large and Gemini 2.5 Pro showed strong partisan skews.
Read at The Register