
""Claude has regressed to the point it cannot be trusted to perform complex engineering," an AMD senior director wrote in a widely shared post on GitHub."
""You can change it anytime in the /model selector if you prefer low effort (faster) or high effort (more intelligence). The setting is sticky and will persist for your next session," Anthropic's Boris Cherny said last month on X."
""Anthropic made real configuration changes that objectively reduced default thinking depth across all surfaces including claude.ai, but the most extreme 'secret nerfing' narrative overstates what happened," Claude said as part of its lengthy response."
Recent user feedback on platforms including X, GitHub, and Reddit points to a perceived decline in Claude's performance, particularly on complex engineering tasks. Some users speculate that Claude has been intentionally 'nerfed' to manage costs or to prioritize other projects. Anthropic disputes these claims, acknowledging adjustments to the default reasoning level while denying they were driven by compute constraints. Analysts suggest that although real configuration changes were made, the perceived decline may partly reflect users recalibrating their expectations of the AI's capabilities over time, pointing to a broader trend of fragmentation in AI access.
Read at Axios