Jan 3, 2025
Superintelligence Is Not Omniscience
Posted by Logan Thrasher Collins in categories: futurism, robotics/AI
Heninger and Johnson argue that chaos theory limits the capacity of even superintelligent systems to simulate and predict events within our universe, implying that Yudkowskian warnings may not accurately describe the nature of ASI risk. #aisafety #ai #tech
Completely eliminating uncertainty about a system's initial conditions would require making measurements with perfect precision, which does not appear to be possible in our universe. We can prove that fundamental sources of uncertainty make it impossible to know important things about the future, even with arbitrarily high intelligence. Atomic-scale uncertainty, which is guaranteed to exist by Heisenberg's Uncertainty Principle, can make macroscopic motion unpredictable in a surprisingly short amount of time. Superintelligence is not omniscience.
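To get a feel for how quickly this happens, consider a back-of-the-envelope sketch (with illustrative numbers, not figures taken from Heninger and Johnson's analysis): in a chaotic system, an initial uncertainty grows roughly exponentially at a rate set by the Lyapunov exponent, so the prediction horizon grows only logarithmically as measurement precision improves.

```latex
% \delta_0: initial (e.g. atomic-scale) uncertainty, \Delta: macroscopic scale,
% \lambda: Lyapunov exponent of the chaotic system.
\delta(t) \approx \delta_0\, e^{\lambda t}
\quad\Longrightarrow\quad
t_{\mathrm{predict}} \approx \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_0}
% Assumed illustrative values: \delta_0 \sim 10^{-10}\,\mathrm{m},
% \Delta \sim 10^{-2}\,\mathrm{m}, \lambda \sim 1\,\mathrm{s}^{-1}
% give t_{\mathrm{predict}} \approx \ln(10^{8}) \approx 18\ \mathrm{s}.
```

Because the dependence on initial precision is logarithmic, even enormous improvements in measurement only buy a modest extension of the prediction horizon.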
Chaos theory thus allows us to rigorously show that there are ceilings on some particular abilities. If we can prove that a system is chaotic, then we can conclude that it offers diminishing returns to intelligence. Beyond a short horizon, most predictions about the future of a chaotic system cannot be made reliably. Without the ability to make better predictions and plan on the basis of them, intelligence becomes much less useful.
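A minimal sketch of these diminishing returns (this toy example is mine, not from the article): the logistic map at r = 4 is a textbook chaotic system, and two trajectories that start just 10⁻¹⁵ apart become completely uncorrelated after a few dozen steps, so each extra digit of initial precision buys only a few more steps of reliable prediction.

```python
# Toy illustration: sensitive dependence on initial conditions in the logistic map.
# Two trajectories starting 1e-15 apart diverge to order-1 separation within ~50 steps.

def logistic_map(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-15   # two nearly identical initial conditions
for step in range(1, 61):
    x, y = logistic_map(x), logistic_map(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
```

Running this shows the gap between the two trajectories growing by roughly a constant factor per step until it saturates, which is exactly the regime in which extra computational power or measurement precision stops translating into better forecasts.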