CS Colloquium Speaker: Stefano Soatto, Tues. Feb 24 at 3:20pm
CS Colloquium Speaker: Stefano Soatto (UCLA & Amazon)
Date & Time: Tuesday, February 24, 2026 - 3:20pm
Location: CS 105
Host: Jia Deng
Webinar registration: https://princeton.zoom.us/webinar/register/WN_geQ0rhfcQR-fwkdQGAo7Ug

Title: AI Agents as Universal Task Solvers: Time, Information, and Intelligence in LLMs Viewed as Maximalistic Models of Computation

Abstract: Scaling laws predict that AI agents will steadily improve and eventually exceed human performance across a wide range of tasks. Yet at the limit of these scaling laws lies a form of inference that involves no intelligence at all: with increasing compute and memory, a model can brute-force any verifiable task without learning anything from past experience. Universally optimal inference, pioneered by Solomonoff and Levin, requires no insight — only exhaustive search. This raises a basic question: if scaling alone does not foster intelligence, what does? And if performance on downstream tasks is insufficient to measure intelligence, what is?

In this talk, I will point to the critical role of time in both analyzing and fostering the emergent reasoning behavior of AI agents. Building on insights that Solomonoff sketched in 1985 but that remained theoretical curiosities for decades, I will show that the value of learning is measured not by a reduction in uncertainty — the core of inductive learning and generalization — but by a reduction in the time needed to solve new tasks. A key result is that data can make a universal solver exponentially faster, with the speed-up tightly characterized by a single quantity: the algorithmic mutual information between past experience and the solution to unforeseen tasks. Connecting these ideas to modern AI requires rethinking what computation means for systems powered by large language models.
Unlike minimalistic models of computation such as Turing Machines, LLMs are stochastic dynamical systems whose computational elements — context, weights, activations, chain-of-thought — do not resemble a "program" in the ordinary sense. I will show that LLMs are instead maximalistic models of computation: universal, like Turing Machines, but operating through entirely different and in many ways antithetical mechanisms. Programming such systems can be achieved through two-level control strategies — open-loop planning and closed-loop feedback — in abstract space, a framework we have recently released as the Strands Agents open-source library (www.strandsagents.com). Once time is properly accounted for, scaling laws reveal an inversion: beyond a critical point, increasing resources improves benchmark accuracy while diminishing conceptual depth — a savant regime in which models improve while learning less. I will discuss what this means for how we build, evaluate, and scale AI agents.

Bio: Stefano Soatto is a Vice President at AWS Agentic AI and a Professor of Computer Science at UCLA. He received his PhD in Control and Dynamical Systems from the California Institute of Technology and his D.Ing. from the University of Padova, Italy, and was a postdoctoral scholar at Harvard University. He is a Fellow of the ACM and of the IEEE.
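The exponential speed-up claim in the abstract can be sketched in classical Levin-search terms (a hedged reconstruction from the Solomonoff/Levin framework the abstract cites, not necessarily the exact statement given in the talk): searching candidate programs in order of description length finds a solution program $p$ for a task $x$ in time on the order of $2^{K(p)}\,\tau(p)$, where $K(p)$ is the program's Kolmogorov complexity and $\tau(p)$ its own running time. Conditioning the search on past data $D$ replaces $K(p)$ with $K(p \mid D)$, so the exponent of the speed-up is

\[
\log_2 \frac{t(x)}{t(x \mid D)} \;\approx\; K(p) - K(p \mid D) \;=\; I(D : p),
\]

the algorithmic mutual information between past experience $D$ and the solution $p$ — the "single quantity" the abstract refers to.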
CS Distinguished Colloquium Speaker: Stefano Soatto (UCLA & Amazon)
Date & Time: Thursday, February 26 at 3:00pm
Location: CS 105
Host: Jia Deng
Webinar registration: https://princeton.zoom.us/webinar/register/WN_geQ0rhfcQR-fwkdQGAo7Ug
(Title, abstract, and bio as above.)
CS Distinguished Colloquium Speaker: Maria Klawe, Math for America
Date & Time: Tuesday, March 31 - 12:10pm
Location: Friend Center Convocation Room
Event page: https://www.cs.princeton.edu/events/increasing-diversity-and-inclusion-techn...
Webinar registration: https://princeton.zoom.us/webinar/register/WN_3rctUpSyTH6BEf43d3LZZg

Title: Increasing Diversity and Inclusion in Technology

Abstract: Over the past few years, the impact of technology on every aspect of society has increased dramatically as applications of AI and data science grow in almost all areas. Understanding strategies to increase the participation of people who have been and remain underrepresented in technology careers — including women, people of color, and people from low-income backgrounds — is therefore very important. This presentation describes strategies that have been successful in a variety of environments.

Bio: Maria Klawe joined Math for America as president in late 2023 after a 17-year term as Harvey Mudd College’s fifth president. Prior to joining HMC, she served as dean of engineering and professor of computer science at Princeton University. Klawe joined Princeton from the University of British Columbia, where she served in various roles from 1988 to 2002. Before her time at UBC, Klawe spent eight years with IBM Research in California and two years at the University of Toronto. She received her Ph.D. (1977) and B.Sc. (1973) in mathematics from the University of Alberta. Klawe is a member of the boards of Phenome Health and of the nonprofits Museum of Mathematics and the Institute for Computational and Experimental Research in Mathematics.
She has also served as a founding advisory board member of Parity.org and as a fellow of the American Academy of Arts & Sciences, and is the chair-elect of the Conference Board of the Mathematical Sciences and a trustee emerita of the Simons Laufer Mathematical Sciences Institute. Klawe was ranked 17th on Fortune’s 2014 list of the World’s 50 Greatest Leaders.
Posted by:
Emily C. Lawrence