Stephen Wolfram shares surprising new ideas and results from a scientific approach to metaphysics. He discusses time, spacetime, computational irreducibility, the significance of the observer, quantum mechanics and multiway systems, the ruliad, laws of nature, objective reality, existence, and mathematical reality.
The trajectory of a storm, the evolution of stock prices, the spread of disease — mathematicians can describe any phenomenon that changes in time or space using what are known as partial differential equations. But there’s a problem: These “PDEs” are often so complicated that it’s impossible to solve them directly.
Mathematicians instead rely on a clever workaround. They might not know how to compute the exact solution to a given equation, but they can try to show that this solution must be “regular,” or well-behaved in a certain sense — that its values won’t suddenly jump in a physically impossible way, for instance. If a solution is regular, mathematicians can use a variety of tools to approximate it, gaining a better understanding of the phenomenon they want to study.
But many of the PDEs that describe realistic situations have remained out of reach. Mathematicians haven’t been able to show that their solutions are regular. In particular, some of these out-of-reach equations belong to a special class of PDEs that researchers spent a century developing a theory of — a theory that no one could get to work for this one subclass. They’d hit a wall.
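To make the idea of "approximating" a solution concrete, here is a minimal finite-difference sketch for the one-dimensional heat equation, one of the simplest and best-behaved PDEs (deliberately far easier than the equations the researchers struggled with). The grid size, time step, and initial profile are illustrative choices, not anything from the work described above.

```python
import numpy as np

# Explicit finite-difference scheme for the 1D heat equation u_t = u_xx on [0, 1]
# with u = 0 at both ends. This is the kind of numerical approximation that
# regularity results justify: the scheme is only trustworthy if the true
# solution stays well-behaved.
nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2                      # explicit scheme is stable only if dt <= 0.5 * dx^2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                 # smooth initial profile

for _ in range(nt):
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# For this equation the exact solution is known: sin(pi x) * exp(-pi^2 t).
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * nt * dt)
print("max error at t = %.2f:" % (nt * dt), np.abs(u - exact).max())
```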
Right now, molecules in the air are moving around you in chaotic and unpredictable ways. To make sense of such systems, physicists use a law known as the Boltzmann distribution, which, rather than describe exactly where each particle is, describes the chance of finding the system in any of its possible states. This allows them to make predictions about the whole system even though the individual particle motions are random. It’s like rolling a single die: Any one roll is unpredictable, but if you keep rolling it again and again, a pattern of probabilities will emerge.
Developed in the latter half of the 19th century by Ludwig Boltzmann, an Austrian physicist and mathematician, this Boltzmann distribution is used widely today to model systems in many fields, ranging from AI to economics, where it is called “multinomial logit.”
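Written out, the law assigns each state i with energy E_i a probability proportional to exp(-E_i / k_B T). The sketch below, with made-up energy levels, shows the formula and why, with scores standing in for negative energies, it is exactly the softmax or "multinomial logit" function used in machine learning.

```python
import numpy as np

# Boltzmann distribution over discrete states: p_i is proportional to exp(-E_i / (k_B * T)).
# With scores in place of negative energies, this is the softmax ("multinomial logit") function.
def boltzmann(energies, kT):
    """Probability of each state at temperature T (Boltzmann's constant folded into kT)."""
    weights = np.exp(-(energies - energies.min()) / kT)  # shift by the minimum for numerical stability
    return weights / weights.sum()

energies = np.array([0.0, 0.5, 1.0, 2.0])   # illustrative energy levels
for kT in (0.1, 1.0, 10.0):
    print(kT, boltzmann(energies, kT))
# Low kT concentrates probability on the lowest-energy state;
# high kT spreads it nearly uniformly across all states.
```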
Now, economists have taken a deeper look at this universal law and come up with a surprising result: The Boltzmann distribution, their mathematical proof shows, is the only law that accurately describes unrelated, or uncoupled, systems.
Bach reframes AI as the endpoint of a long philosophical project to “naturalize the mind,” arguing that modern machine learning operationalizes a lineage from Aristotle to Turing in which minds, worlds, and representations are computational state-transition systems. He claims computer science effectively re-discovers animism—software as self-organizing, energy-harvesting “spirits”—and that consciousness is a simple coherence-maximizing operator required for self-organizing agents rather than a metaphysical mystery. Current LLMs only simulate phenomenology using deepfaked human texts, but the universality of learning systems suggests that, when trained on the right structures, artificial models could converge toward the same internal causal patterns that give rise to consciousness. Bach proposes a biological-to-machine consciousness framework and a research program (CIMC) to formalize, test, and potentially reproduce such mechanisms, arguing that understanding consciousness is essential for culture, ethics, and future coexistence with artificial minds.
Key takeaways.
▸ Speaker & lens: Cognitive scientist and AI theorist aiming to unify philosophy of mind, computer science, and modern ML into a single computationalist worldview. ▸ AI as philosophical project: Modern AI fulfills the ancient ambition to map mind into mathematics; computation provides the only consistent language for modeling reality and experience. ▸ Computationalist functionalism: Objects = state-transition functions; representations = executable models; syntax = semantics in constructive systems. ▸ Cyber-animism: Software as “spirits”—self-organizing, adaptive control processes; living systems differ from dead ones by the software they run. ▸ Consciousness as function: A coherence-maximizing operator that integrates mental states; second-order perception that stabilizes working memory; emerges early in development as a prerequisite for learning. ▸ LLMs & phenomenology: Current models aren’t conscious; they simulate discourse about consciousness using data full of “deepfaked” phenomenology. A Turing test cannot detect consciousness because performance ≠ mechanism. ▸ Universality hypothesis: Different architectures optimized for the same task tend to converge on similar internal causal structures; suggests that consciousness-like organization could arise if it’s the simplest solution to coherence and control. ▸ Philosophical zombies: Behaviorally identical but non-conscious agents may be more complex than conscious ones; evolution chooses simplicity → consciousness may be the minimal solution for self-organized intelligence. ▸ Language vs embodiment: Language may contain enough statistical structure to reconstruct much of reality; embodiment may not be strictly necessary for convergent world models. ▸ Testing for machine consciousness: Requires specifying phenomenology, function, search space, and success criteria—not performance metrics. ▸ CIMC agenda: Build frameworks and experiments to recreate consciousness-like operators in machines; explore implications for ethics, interfaces, and coexistence with future minds.
Math anxiety is a significant challenge for students worldwide. While personalized support is widely recognized as the most effective way to address it, many teachers struggle to deliver this level of support at scale in busy classrooms. New research from Adelaide University shows how artificial intelligence (AI) could help address challenges such as math anxiety by analyzing a student’s inputs to identify signs of anxiety or disengagement during learning.
Published in npj Science of Learning, the study suggests that when AI systems are designed to use the right data and goals, they can adapt their responses to help counteract negative emotional experiences associated with math before these feelings escalate.
Lead researcher Dr. Florence Gabriel says AI has the potential to transform how students with math anxiety are supported, by offering timely, tailored interventions that guide them through learning and build their well-being.
A team of theoretical researchers has found that duality can unveil non-invertible symmetry-protected topological phases, a result that could help researchers better understand the properties of these phases and uncover new quantum phases. Their study is published in Physical Review Letters.
Symmetry is one of the most fundamental concepts for understanding phases of matter in modern physics—in particular, symmetry-protected topological (SPT) phases, whose quantum mechanical properties are protected by symmetries, with possible applications in quantum computing and other fields.
Over the past few years, non-invertible symmetries, which extend the framework of conventional symmetries, have attracted significant attention in high energy physics and condensed matter physics. However, their complex mathematical structures have made it difficult to understand the corresponding phases of matter, such as their SPT phases.
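For orientation only, here is a sketch of the most familiar kind of SPT setup, with an ordinary invertible symmetry rather than the non-invertible symmetries studied in the paper: the one-dimensional cluster-state Hamiltonian, whose phase is protected by a Z2 x Z2 symmetry of spin flips on the even and odd sites. The chain length and the check performed are illustrative choices.

```python
import numpy as np
from functools import reduce

# Pauli matrices and identity for a chain of spin-1/2 sites.
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(paulis, sites, n):
    """Tensor product placing the given Pauli matrices on the given sites of an n-site chain."""
    factors = [I] * n
    for p, s in zip(paulis, sites):
        factors[s] = p
    return reduce(np.kron, factors)

n = 6  # periodic chain of 6 spins (64-dimensional Hilbert space)

# Cluster-state Hamiltonian: H = -sum_i Z_{i-1} X_i Z_{i+1}.
H = -sum(op([Z, X, Z], [(i - 1) % n, i, (i + 1) % n], n) for i in range(n))

# Z2 x Z2 symmetry generators: spin flips on even sites and on odd sites.
P_even = reduce(np.kron, [X if i % 2 == 0 else I for i in range(n)])
P_odd  = reduce(np.kron, [X if i % 2 == 1 else I for i in range(n)])

# The Hamiltonian commutes with both generators; its ground state (the 1D cluster
# state) is the textbook example of a phase protected by this Z2 x Z2 symmetry.
print(np.allclose(H @ P_even, P_even @ H), np.allclose(H @ P_odd, P_odd @ H))
```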
Berkeley researchers have developed a proven mathematical framework for the compression of large reversible Markov chains—probabilistic models used to describe how systems change over time, such as proteins folding for drug discovery, molecular reactions for materials science, or AI algorithms making decisions—while preserving their output probabilities (likelihoods of events) and spectral properties (key dynamical patterns that govern the system’s long-term behavior).
While describing the dynamics of ubiquitous physical systems, Markov chains also allow for rich theoretical and computational investigation. By exploiting the special mathematical structure behind these dynamics, the researchers’ new theory delivers models that are quicker to compute, equally accurate, and easier to interpret, enabling scientists to efficiently explore and understand complex systems. This advance sets a new benchmark for efficient simulation, opening the door to scientific explorations once thought computationally out of reach.
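As a point of reference (not the Berkeley framework itself), the sketch below builds a small reversible Markov chain and computes the two things the compression is required to preserve: its stationary probabilities and its real spectrum. The graph weights are arbitrary illustrative numbers.

```python
import numpy as np

# A small reversible Markov chain: a random walk on a weighted, undirected graph.
# Reversibility (detailed balance) is the structural property that makes this
# kind of chain especially tractable to analyze and compress.
W = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 2.],
              [0., 1., 2., 0.]])           # symmetric edge weights (illustrative)
P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

# Stationary probabilities: proportional to weighted degree for this construction.
pi = W.sum(axis=1) / W.sum()

# Detailed balance check: pi_i * P_ij == pi_j * P_ji for a reversible chain.
flux = pi[:, None] * P
assert np.allclose(flux, flux.T)

# Spectral properties: a reversible chain has a real spectrum, which becomes
# explicit after symmetrizing with the stationary distribution.
D, D_inv = np.diag(np.sqrt(pi)), np.diag(1.0 / np.sqrt(pi))
eigvals = np.linalg.eigvalsh(D @ P @ D_inv)  # real eigenvalues set the relaxation timescales
print("stationary distribution:", pi)
print("spectrum:", np.sort(eigvals)[::-1])
```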
Gravity is the most familiar force in human experience, yet it remains the least understood at a fundamental level. Despite centuries of study—from Newton’s law of universal gravitation to Einstein’s general theory of relativity—gravity stubbornly resists unification with quantum mechanics. In recent decades, this tension has led some physicists to propose a radical rethinking of gravity’s nature. According to these ideas, gravity may not be a fundamental force at all, but instead an emergent effect arising from quantum entanglement and the flow of information in spacetime.
This perspective represents a profound conceptual shift. Rather than treating gravity as something particles “exert” on one another, these theories suggest it emerges statistically, much like temperature arises from the collective motion of atoms. This article examines the scientific foundations of this idea, the key theoretical frameworks supporting it, and the evidence—both suggestive and incomplete—that motivates such claims. By analyzing gravity through quantum, thermodynamic, and informational lenses, we gain insight into one of the most ambitious research directions in modern theoretical physics.
The Standard Model of particle physics successfully describes three of the four fundamental interactions: electromagnetism, the weak force, and the strong force. Gravity, however, remains outside this framework. Attempts to quantize gravity using the same methods applied to other forces lead to mathematical infinities that cannot be renormalized.
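One standard way to see where the trouble comes from, sketched in natural units: Newton's constant carries negative mass dimension, so the effective strength of quantum gravitational corrections grows with energy.

\[
S_{\mathrm{EH}} = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,R,
\qquad
G \sim \frac{1}{M_{\mathrm{Pl}}^2},
\qquad
\text{so quantum corrections scale like } \frac{E^2}{M_{\mathrm{Pl}}^2}.
\]

Each order in perturbation theory then requires new divergent counterterms, which is what "cannot be renormalized" means in practice.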
Dr. Leonardos Gkouvelis, researcher at LMU’s University Observatory Munich and member of the ORIGINS Excellence Cluster, has solved a fundamental mathematical problem that had obstructed the interpretation of exoplanet atmospheres for decades. In a paper published in The Astrophysical Journal, Gkouvelis presents the first closed-form analytical theory of transmission spectroscopy that accounts for how atmospheric opacity varies with pressure—an effect that is crucial in the scientific exploration of real atmospheres but had until now been considered mathematically intractable.
For more than 30 years, analytical models were based on a “simplified” atmosphere, as the full mathematical treatment requires solving a complex geometric integral in the presence of altitude-dependent opacity—a problem that could only be tackled using expensive numerical simulations. However, this limitation concealed how the true vertical structure of an atmosphere alters the signals observed by telescopes.
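For context, here is a rough numerical version of that geometric ("chord") integral, with an assumed isothermal atmosphere and an assumed power-law pressure dependence of the opacity. Every number and profile below is an illustrative placeholder, not a parameter from Gkouvelis's model, and this brute-force route is precisely what a closed-form theory avoids.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule, kept explicit for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative (made-up) planet and atmosphere parameters.
R_p, H = 6.99e7, 5.0e5          # reference radius and scale height [m]
P0, rho0 = 1.0e5, 1.0e-1        # pressure [Pa] and density [kg/m^3] at the reference level
kappa0, gamma = 1.0e-3, 0.5     # opacity [m^2/kg] at P0 and its pressure exponent
R_top = R_p + 10.0 * H          # truncate the atmosphere ten scale heights up

def extinction(r):
    """Absorption coefficient [1/m] at radius r, with pressure-dependent opacity kappa ~ P**gamma."""
    P = P0 * np.exp(-(r - R_p) / H)          # isothermal, hydrostatic profile
    rho = rho0 * np.exp(-(r - R_p) / H)
    return kappa0 * (P / P0) ** gamma * rho

def chord_tau(b, n=4000):
    """Optical depth along a stellar ray passing the planet at impact parameter b."""
    x = np.linspace(0.0, np.sqrt(R_top**2 - b**2), n)   # half-chord through the atmosphere
    r = np.sqrt(b**2 + x**2)
    return 2.0 * trapezoid(extinction(r), x)

# Effective transit radius: fully opaque disk below R_p plus the partly opaque annulus above it.
b = np.linspace(R_p, R_top, 400)
tau = np.array([chord_tau(bi) for bi in b])
R_eff = np.sqrt(R_p**2 + 2.0 * trapezoid(b * (1.0 - np.exp(-tau)), b))
print("effective radius / reference radius:", R_eff / R_p)
```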
The new model provides key insights into why many exoplanet atmospheres display “muted” spectral features, directly links laboratory molecular-physics data with astronomical observations, and significantly improves agreement with real data—both for Earth’s atmosphere and for high-precision observations of exoplanets.
MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation. In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device.
The flow and distribution of heat through a specially designed material forms the basis of the calculation. The output is then read as the power collected at the other end, which is held at a fixed temperature and acts as a thermostat.
The researchers used these structures to perform matrix-vector multiplication with more than 99% accuracy. Matrix multiplication is the fundamental mathematical operation that machine-learning models such as LLMs use to process information and make predictions.
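A cartoon of the idea (not the MIT team's device physics): treat each input as a temperature, each path through the material as a thermal conductance, and read out the heat power arriving at output terminals held at a fixed reference temperature. Under Fourier's law the collected powers are then a linear, matrix-vector map of the input temperatures. All values below are invented for illustration.

```python
import numpy as np

# Heat-based matrix-vector multiplication, in caricature: conductances play the
# role of matrix entries, input temperatures play the role of the vector, and
# the heat power collected at fixed-temperature outputs is the result.
G = np.array([[1.0, 0.2, 0.5],
              [0.3, 0.8, 0.1]])        # thermal conductances [W/K], acting as the matrix
T_in = np.array([3.0, 1.0, 2.0])       # input data encoded as temperatures above the reference [K]
T_ref = 0.0                            # output terminals pinned to the reference temperature

# Heat collected at each output terminal: P_j = sum_i G_ji * (T_i - T_ref).
P_out = G @ (T_in - T_ref)
print(P_out)                           # identical to the ordinary matrix-vector product G @ T_in
```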