AI can't finish what it starts. Humans must always check its work
Briefly

The rise of agentic AI has generated expectations of remarkable productivity gains, but in practice progress on the harder problems of intelligence has been limited. Despite headline-grabbing demos like AutoGPT, practical applications remain too unreliable for production. The risk of failure persists because an agent's success depends on every task in a chain completing: even where individual tasks improve, per-task failure rates compound across the chain, leaving human professionals essential for oversight and correction in automated processes.
Barely two-and-a-half years into the modern era of AI, we're stuck in a hype cycle that promises all our productivity Christmases will soon come at once.
Surprisingly little progress has been made on the harder problems in artificial intelligence: the problems that involve actual intelligence, such as the reflective capacity to understand the intent behind one's actions and thereby stay on task.
AutoGPT remains a tantalizing demo, but far from useful tech: you'd never deploy it in production.
The risk of failure across any chain of tasks becomes a game of probabilities: this task has a 90 percent completion rate, the next one following it 75 percent, which leaves the chain as a whole completing only about two-thirds of the time (0.9 × 0.75 ≈ 0.68).
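As a quick illustration of that compounding, here is a minimal Python sketch. The 90 and 75 percent figures come from the text above; the ten-step chain and the assumption that task outcomes are independent are illustrative, not claims from the article.

```python
from math import prod

def chain_success_rate(task_rates):
    """Probability that every task in a chain completes, assuming each
    task's outcome is independent of the others (an illustrative
    simplification, not a claim from the article)."""
    return prod(task_rates)

# The article's two-step example: 0.90 * 0.75 = 0.675,
# i.e. roughly a one-in-three chance the chain fails somewhere.
print(f"{chain_success_rate([0.90, 0.75]):.3f}")   # 0.675

# A hypothetical ten-step workflow, every step 90 percent reliable:
# the chain as a whole succeeds only about a third of the time.
print(f"{chain_success_rate([0.90] * 10):.3f}")    # 0.349
```

The arithmetic cuts one way: the longer the chain, the faster reliability collapses, which is why human checkpoints remain necessary.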
Read at The Register