{"id":232345,"date":"2026-03-01T14:11:13","date_gmt":"2026-03-01T20:11:13","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/03\/roman-yampolskiy-ai-unexplainable-unpredictable-uncontrollable"},"modified":"2026-03-01T14:11:13","modified_gmt":"2026-03-01T20:11:13","slug":"roman-yampolskiy-ai-unexplainable-unpredictable-uncontrollable","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/03\/roman-yampolskiy-ai-unexplainable-unpredictable-uncontrollable","title":{"rendered":"Roman Yampolskiy \u2014 AI: Unexplainable, Unpredictable, Uncontrollable"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/232345?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>In this presentation, Dr. Roman V. Yampolskiy provides a rigorous examination of the fundamental limitations of Artificial Intelligence, arguing that as systems approach and surpass human-level intelligence, they become inherently unexplainable, unpredictable, and uncontrollable. He illustrates how the black-box nature of deep learning prevents full audits of decision-making, while concepts like computational irreducibility suggest we cannot forecast the actions of a smarter agent without running it \u2013 often until it is too late for safety. He asserts that there is currently no evidence or mathematical proof to guarantee that a superintelligent system can be safely contained or aligned with human values.<br \/> Dr. Yampolskiy further bridges theoretical computer science with safety engineering by applying impossibility results, such as the Halting Problem and Rice\u2019s Theorem, to demonstrate that certain safety guarantees for Artificial General Intelligence (AGI) are mathematically unreachable. These technical impediments lead to a sobering discussion on existential risk, where the inability to verify or monitor advanced systems results in an alarmingly high probability of catastrophic outcomes. By analysing why advanced AI defies traditional engineering safety standards, he makes the case that current trajectories may lead to irreversible consequences for humanity.<br \/> To conclude, the talk shifts toward potential pathways for mitigation, emphasising the urgent need to prioritise specialised, narrow AI over the pursuit of general superintelligence. Dr. Yampolskiy argues that while narrow AI can solve global challenges within controllable parameters, the pursuit of AGI represents an existential gamble. He calls for a shift in the research community from a \u201cmove fast and break things\u201d mentality to a mathematically grounded approach, urging that we must prove a problem is solvable before investing billions into its deployment.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this presentation, Dr. Roman V. Yampolskiy provides a rigorous examination of the fundamental limitations of Artificial Intelligence, arguing that as systems approach and surpass human-level intelligence, they become inherently unexplainable, unpredictable, and uncontrollable. He illustrates how the black-box nature of deep learning prevents full audits of decision-making, while concepts like computational irreducibility suggest [\u2026]<\/p>\n","protected":false},"author":581,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12,2229,6],"tags":[],"class_list":["post-232345","post","type-post","status-publish","format-standard","hentry","category-existential-risks","category-mathematics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/232345","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/581"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=232345"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/232345\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=232345"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=232345"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=232345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}