Reassessing AGI in 2025
Skill at a single, static task (like chess or Go) is a dead end as a measure of true intelligence.
A better measure is skill-acquisition efficiency: how quickly can an agent learn new skills from limited experience?
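As a rough illustration only (the function name and metric below are assumptions, not a formal definition from this piece), skill-acquisition efficiency can be sketched as skill gained per unit of experience consumed:

```python
# Toy sketch (hypothetical): treat skill-acquisition efficiency as
# skill gained per trial on a task the agent has never seen before.

def skill_acquisition_efficiency(scores, baseline=0.0):
    """Average per-trial improvement over a prior baseline.

    `scores` is the agent's performance after each trial on a novel task;
    reaching the same skill in fewer trials yields a higher value.
    """
    if not scores:
        return 0.0
    gains = [max(s - baseline, 0.0) for s in scores]
    # Area under the learning curve, normalized by experience consumed.
    return sum(gains) / len(gains)

# A fast learner reaches high skill in few trials and scores higher.
print(skill_acquisition_efficiency([0.2, 0.6, 0.9]))   # ~0.57
print(skill_acquisition_efficiency([0.1, 0.2, 0.3]))   # ~0.20
```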
In humans, this efficiency comes from adaptive world models: internal simulations of the world that are actively built and refined from experience.
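A minimal sketch of that idea, assuming a toy tabular setting (the class and its methods are illustrative, not an architecture proposed in the text): an agent keeps an internal transition model, refines it with each observed outcome, and can then simulate what an action would do before acting.

```python
from collections import defaultdict

class WorldModel:
    """Toy adaptive world model: a frequency table of observed transitions."""

    def __init__(self):
        # counts[(state, action)][next_state] -> how often that outcome occurred
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, state, action, next_state):
        """Refine the model with one observed transition."""
        self.counts[(state, action)][next_state] += 1

    def predict(self, state, action):
        """Most likely next state under the current model, or None if unseen."""
        outcomes = self.counts[(state, action)]
        if not outcomes:
            return None
        return max(outcomes, key=outcomes.get)

model = WorldModel()
model.update("room", "open door", "hallway")
print(model.predict("room", "open door"))  # "hallway"
print(model.predict("room", "jump"))       # None: no experience to draw on yet
```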
A formal proof now shows this is more than an intuition: any general agent must contain a world model.
Therefore, the future of AI research should focus on building and testing systems that can actively induce such models in novel, unknown environments.