
"On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency - enough to make the UI appear frozen - some users were seeing in high mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they'd prefer to default to higher intelligence and opt into lower effort for simple tasks."
"On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive."
Anthropic's report details numerous changes made to its AI offerings without notifying customers in advance, including some that degraded answer quality. This lack of transparency raises concerns for enterprise IT executives who already struggle to maintain control over critical applications. The report highlights specific instances where changes were reverted only after user complaints, underscoring the need for consistent, predictable performance in AI systems. The episode illustrates the growing challenge facing organizations that rely on generative AI and agentic systems, whose functionality can change without user consent.
Read at Computerworld