Zhengmian Hu is a researcher at Adobe Research. He received his Ph.D. from the University of Maryland, College Park, and has published across topics including watermarking, deep learning theory, and optimization theory, with an emphasis on theoretical foundations for machine learning. At our next research meeting (3 pm ET on Monday, January 26), he will present a new viewpoint that treats a “concept” as an algorithmic-information object.

Title: Dialectics for Artificial Intelligence

Abstract:
In this talk, we present an algorithmic-information-theoretic notion of reversible decomposition for finite binary strings. We start from a mutual determination constraint: a tuple of strings is feasible when each component is (up to logarithmic slack) algorithmically recoverable from the remaining components together with an optional context. Among feasible decompositions, we introduce an objective, excess information, intended to quantify the algorithmic redundancy created by a particular decomposition. We then show that optimizing excess information leads to a local update mechanism in which newly encountered “information patches” are contested by different parts and flow toward the part that yields the shortest conditional description, revising the structure over time. One interpretation of this optimization is a notion of concept formation relevant to AI and multi-agent alignment. After the talk we may discuss: (i) the theorems in the paper; (ii) when the optimization admits meaningful minimizers; (iii) potential extensions.
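One plausible way to formalize the abstract's two central notions, stated here in our own notation (the paper's exact definitions may differ): write $K(\cdot \mid \cdot)$ for conditional Kolmogorov complexity, let $x$ be the full string, $x_1,\dots,x_k$ its parts, and $c$ an optional context.

```latex
% Hedged sketch, not the paper's definitions: a natural reading of
% "mutual determination" and "excess information" via prefix
% Kolmogorov complexity K.
\[
  \text{feasibility:}\quad
  K\bigl(x_i \mid x_1,\dots,x_{i-1},\,x_{i+1},\dots,x_k,\; c\bigr)
  \;\le\; O(\log |x|)
  \quad\text{for each } i,
\]
\[
  \text{excess information:}\quad
  E(x_1,\dots,x_k \mid c)
  \;=\; \sum_{i=1}^{k} K(x_i \mid c) \;-\; K(x \mid c).
\]
```

Under this reading, feasibility says every part is (almost) determined by the rest plus the context, and minimizing $E$ favors decompositions whose parts duplicate as little algorithmic information as possible.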
