OpenAI is sending everyone's favourite "yes man" version of ChatGPT back into retirement. In a blog post on Thursday, the company said it would sunset GPT-4o alongside GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini on February 13. GPT-4o got a special mention in the announcement: many users grew attached to its "conversational style and warmth" last year, and that attachment prompted the company to reinstate the model following user backlash in August.
In November, a team of researchers at the US PIRG Education Fund published a report after testing three AI-powered toys: Miko 3, Curio's Grok, and FoloToy's Kumma. All three gave responses that should worry any parent, such as discussing the glory of dying in battle, broaching sensitive topics like religion, and explaining where to find matches and plastic bags.
A study by Penn State University researchers found that rude prompts yielded more accurate answers than polite ones. In a paper titled "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy," as spotted by Fortune, researchers Om Dobariya and Akhil Kumar set out to determine how the tone of a prompt affects the response. For the experiment, they submitted 50 multiple-choice questions to ChatGPT, using GPT-4o with the AI's Deep Research mode.
The Australian Financial Review reports that Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research. Deloitte's "Targeted Compliance Framework Assurance Review" was finalized in July and published by Australia's Department of Employment and Workplace Relations (DEWR) in August (Internet Archive version of the original).