“LLMs as Maximalistic Models of Computation and the Inversion of Scaling Laws of AI Agents”
Scaling laws predict that AI agents will steadily improve and eventually exceed human performance across a wide range of tasks. Yet at the limit lies a form of inference that involves no intelligence whatsoever: with enough compute and memory, a model can brute-force its way through any verifiable task without learning anything. This raises a basic question: if scaling alone does not foster intelligence, what does?
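As a toy illustration of that limiting case (a hypothetical sketch, not from the talk): a solver that simply enumerates candidate answers and checks each against a task verifier will eventually pass any verifiable task, yet it learns nothing and carries nothing over to the next task.

```python
from itertools import count, product

def brute_force_solve(verify, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Enumerate all strings in order of length and return the first one
    the verifier accepts. There is no model of the task and no learning:
    the only task-specific knowledge lives inside `verify`. Given enough
    time and memory this succeeds on any verifiable task (and runs
    forever on an unsatisfiable one)."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if verify(candidate):
                return candidate

# Example: "solve" a task whose verifier checks a known answer.
print(brute_force_solve(lambda s: s == "cat"))  # -> "cat", after a few thousand candidates
```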
I will argue that the answer is time. Building on results by Solomonoff and Levin, adapted to the present context, I will show that the value of learning is measured not by a reduction in uncertainty — as in classical induction — but by a reduction in the time needed to solve new tasks. Data can make a universal solver exponentially faster, with the speed-up governed by the algorithmic mutual information between past experience and unforeseen tasks.
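In their standard form, Levin's search bounds give the flavor of this claim (a sketch only, in notation introduced here; the talk adapts these results to the present context):

```latex
% T(y): time for universal (Levin) search to solve task y;
% K(y), K(y|x): plain and conditional Kolmogorov complexity;
% t(y): runtime of the shortest program solving y; c: a constant.
\[
  T(y) \;\le\; c\, 2^{K(y)}\, t(y),
  \qquad
  T(y \mid x) \;\le\; c\, 2^{K(y \mid x)}\, t(y).
\]
% The speed-up from past data x is therefore, up to logarithmic terms,
\[
  \frac{T(y)}{T(y \mid x)} \;\approx\; 2^{K(y) - K(y \mid x)} \;=\; 2^{I(x \,:\, y)},
\]
% exponential in the algorithmic mutual information I(x:y) = K(y) - K(y|x).
```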
Connecting these ideas to modern AI requires rethinking what computation means for the large language models that serve as the computational engines of AI agents. I will show that LLMs are maximalistic models of computation: universal, like Turing machines, but operating through entirely different mechanisms. Once time is properly accounted for, scaling laws reveal an inversion: beyond a critical point, increasing resources improves benchmark accuracy while diminishing conceptual depth, a savant regime in which models improve while understanding less.
Stefano Soatto is a Vice President at AWS Agentic AI and a Professor of Computer Science at UCLA. He received his PhD in Control and Dynamical Systems from the California Institute of Technology (MS ’93, PhD ’96), his D.Ing. from the University of Padova, Italy, and was a postdoctoral scholar at Harvard University. He is a Fellow of the ACM and of the IEEE.
Date/Time:
Apr 09, 2026, 4:00 pm – 5:45 pm
Location:
3400 Boelter Hall
420 Westwood Plaza, Los Angeles, CA 90095