What do large language models, cellular automata, and the human brain have in common? In this polymath salon, I sit down with Dugan Hammock of the Wolfram Institute to explore the deep links between these seemingly disparate fields.
Highlights include:
Computational Irreducibility: Why we can’t take shortcuts in complex systems—whether it’s a simple cellular automaton or a sophisticated LLM generating text.
The Power of Autoregression: How the simple, step-by-step process of predicting the next element can give rise to incredible complexity and human-like language.
The Nature of Thinking: Whether our own thought processes are fundamentally autoregressive and sequential, or if there’s a different, parallel mode of cognition at play.
Memory and Consciousness: The critical role of a system’s “memory” or history in shaping its future, and how this relates to our own awareness and sense of self.
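To make the irreducibility point concrete: here is a minimal sketch of Rule 30, one of the simple cellular automata discussed in the episode. Each new cell depends only on its three neighbors in the previous row, yet no known shortcut predicts row *n* without computing every row before it — you have to run the system. (The code below is purely illustrative; the width, step count, and helper names are my own choices.)

```python
# Rule 30 cellular automaton: each new cell is determined by the three
# cells above it. To know what row n looks like, you must compute rows
# 1 through n-1 in turn -- the essence of computational irreducibility.

RULE = 30  # Wolfram's rule number; its 8 bits form the update table


def step(cells):
    """Advance one generation. `cells` is a list of 0/1 values,
    treated as a ring (wrap-around boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        # The three neighbor bits form a 0-7 index into the rule table.
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)
    return out


def run(width=31, steps=15):
    """Print the automaton's evolution from a single live cell."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row)


if __name__ == "__main__":
    run()
```

Even this few-line rule produces a famously chaotic triangle of cells; there is no faster way to learn its state at step 1,000 than to take 1,000 steps.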
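The autoregressive loop itself is equally simple to sketch. Below is a toy character-level bigram model: it counts which character follows which, then generates text one character at a time, each prediction conditioned on what came before. A real LLM replaces the count table with a huge neural network, but the generate-append-repeat loop is the same. (All function names here are illustrative, not from any library discussed in the episode.)

```python
# Toy autoregression: predict the next character from the previous one,
# append it, and repeat. This is the same loop an LLM runs, with a
# count table standing in for the neural network.
from collections import Counter, defaultdict
import random


def train_bigrams(text):
    """Count, for each character, which characters follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts


def generate(counts, seed, length, rng=None):
    """Autoregressively sample `length` characters after `seed`."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break  # no continuation was ever observed
        chars, weights = zip(*nexts.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)


counts = train_bigrams("the theory of the thing")
print(generate(counts, "t", 20))
```

Each output character is sampled from a distribution conditioned on the sequence so far, which is why even this trivial model produces vaguely English-looking strings while remaining, step by step, nothing more than next-element prediction.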