Despite the impressive achievements of current generative AI systems, the dream of Artificial General Intelligence remains far away, notwithstanding the hype offered by various tech CEOs.[1] The reasons are easy to state, if hard to quantify. Human intelligence rests on three primary capacities, none of which has been fully cracked: logic, associative learning, and value sensitivity. I'll explain each in turn.
AI has not progressed as quickly as many predicted; in particular, the idea that AI will 'self-improve' its way to 'godlike superintelligence' has been blown well out of proportion.
OpenAI's new GPT-5 model represents a significant step toward artificial general intelligence, yet it still lacks crucial capabilities such as autonomous continuous learning, which limits how far it can go.
Games provide a clear, unambiguous signal of success. Their structured nature and measurable outcomes make them an ideal testbed for evaluating models and agents: they force a model to demonstrate strategic reasoning, long-term planning, and dynamic adaptation against an intelligent opponent, offering a robust measure of general problem-solving intelligence.