AIXI is to its environment as a player is to a video game. Because AIXI is not computable, it cannot exist as part of a computable universe. This has been raised as a criticism of the theory in this paper and a longer blog post on "embedded agency." Essentially, the authors are asking for a theory of optimal behavior for a computationally bounded agent that is computed by its larger environment. Fair enough, but this is such a high standard that it seems only a complete, rigorous theory of practical ASI design would meet the bar!
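For reference, here is one standard way to write the AIXI action rule, a sketch following Hutter's notation, where $m$ is the horizon, $U$ is a universal monotone Turing machine, and $\ell(q)$ is the length of program $q$:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \; \cdots \; \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum is the Solomonoff-style mixture over every program consistent with the interaction history; summing over all programs for $U$ is exactly what makes AIXI incomputable. It also bakes in the dualism above: the programs $q$ model the environment, which consumes the agent's actions and produces its percepts, but never contain the agent itself.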
Some critics have gone further and asserted that an AIXI approximation would inevitably damage its own hardware, for instance by dropping an anvil on its own head to see what would happen.
Inspired by conversations with Marcus Hutter, I discuss this “anvil problem” and argue that it may be easily avoidable in practice: https://www.lesswrong.com/posts/WECqiLtQiisqWvhim/free-will-and-dodging-anvils-aixi-off-policy
I now refer to our proposed solution as “hardened AIXI,” inspired by radiation hardening.
Lately, I have been excited about reflective variants of AIXI as a model for other aspects of (idealized) embedded agency!