Alexander Meulemans and Rajai Nasser will discuss their work with Google’s Paradigms of Intelligence Team on eMbedded Universal Predictive Intelligence (MUPI): https://www.arxiv.org/abs/2511.22226
Abstract:
The standard theory of model-free reinforcement learning assumes that the environment dynamics are stationary and that agents are decoupled from their environment, such that policies are treated as being separate from the world they inhabit. This leads to theoretical challenges in the multi-agent setting where the non-stationarity induced by the learning of other agents demands prospective learning based on prediction models. To accurately model other agents, an agent must account for the fact that those other agents are, in turn, forming beliefs about it to predict its future behavior, motivating agents to model themselves as part of the environment. Here, building upon foundational work on universal artificial intelligence (AIXI), we introduce a mathematical framework for prospective learning and embedded agency centered on self-prediction, where Bayesian RL agents predict both future perceptual inputs and their own actions, and must therefore resolve epistemic uncertainty about themselves as part of the universe they inhabit. We show that in multi-agent settings, self-prediction enables agents to reason about others running similar algorithms, leading to new game-theoretic solution concepts and novel forms of cooperation unattainable by classical decoupled agents. Moreover, we extend the theory of AIXI, and study universally intelligent embedded agents which start from a Solomonoff prior. We show that these idealized agents can form consistent mutual predictions and achieve infinite-order theory of mind, potentially setting a gold standard for embedded multi-agent learning.
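For orientation before the talk, here is a minimal LaTeX sketch of the standard (decoupled) AIXI setup that the paper builds on and departs from. The Solomonoff mixture and expectimax equations below are the classical ones from Hutter's framework; the joint predictive distribution over the agent's own actions at the end is my own illustrative notation for the self-prediction idea, not the paper's actual formulation.

% Classical AIXI ingredients (Hutter, 2005), for orientation only.
% Percepts e_k = (o_k, r_k) bundle observation and reward. The Solomonoff
% mixture weights every chronological environment program q, run on a
% universal machine U, by its length \ell(q):
\[
  \xi(e_{1:t} \mid a_{1:t}) \;=\; \sum_{q \,:\, U(q,\, a_{1:t}) = e_{1:t}} 2^{-\ell(q)} .
\]
% AIXI then chooses the action maximizing expected reward up to horizon m:
\[
  a_t \;=\; \arg\max_{a_t} \sum_{e_t} \cdots \max_{a_m} \sum_{e_m}
    \bigl( r_t + \cdots + r_m \bigr)\, \xi(e_{1:m} \mid a_{1:m}) .
\]
% The decoupling is visible in the conditioning: \xi predicts percepts only,
% treating the agent's actions a_{1:m} as exogenous inputs. A self-predictive
% embedded agent, by contrast, would also hold a predictive distribution over
% its own future actions, e.g. a joint mixture \tilde{\xi}(a_{1:t}, e_{1:t})
% over action-percept histories (illustrative notation, not the paper's), so
% that uncertainty about "what I will do" becomes ordinary epistemic
% uncertainty about the universe the agent inhabits.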
This represents major progress on self-reflection and embeddedness in the AIXI framework; embeddedness is one of the framework's most important open conceptual problems. In particular, confusion about goal formation and representation for embedded agents is a serious obstacle to AI alignment, which I hope to see addressed in future work!
This talk will start an hour earlier than normal, tomorrow (Monday the 15th) at 2 pm ET, to accommodate its 90–120 minute length (!), needed to introduce this massive paper. We will use the usual Zoom link: https://uwaterloo.zoom.us/j/7921763961?pwd=TDatET6CBu47o4TxyNn9ccL2Ia8HN4.1
Hope to see you all there!