{"id":233083,"date":"2026-03-11T19:03:35","date_gmt":"2026-03-12T00:03:35","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/03\/joscha-bach-anders-sandberg"},"modified":"2026-03-11T19:03:35","modified_gmt":"2026-03-12T00:03:35","slug":"joscha-bach-anders-sandberg","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/03\/joscha-bach-anders-sandberg","title":{"rendered":"Joscha Bach &amp; Anders Sandberg"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/IzbtOzXMLOo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It\u2019s a deep and winding discussion with so many interesting topics covered!<\/p>\n<p>0:00 Intro.<br \/> 0:37 What is consciousness? Phenomenology \u2014 functionalism &amp; panpsychism.<br \/> 1:54 Causal boundaries \u2014 the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.<br \/> 3:20 Minds are not states \u2014 they are processes. 
We don\u2019t see causal filtering in tables.<br \/> 5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.<br \/> 9:49 Methodological humility about armchair philosophy of mind.<br \/> 12:41 Putnam-style brain-in-a-vat \u2014 and why standard objections to AI minds fall flat.<br \/> 16:37 Is sentience required (or desired) not just for moral competence in AI, but for moral motivation as well?<br \/> 22:35 Why stepping outside yourself is powerful \u2014 seeing.<br \/> 25:12 Are AIs born enlightened?<br \/> 26:25 Are LLMs AGI yet? What\u2019s still missing.<br \/> 28:16 AI, hybrid minds, and the limits of human augmentation.<br \/> 32:32 Can minds be extended \u2014 in humans, dogs, and cats?<br \/> 36:19 Why human language may not be open-ended enough.<br \/> 39:41 Why AI is so data-hungry \u2014 and why better algorithms must exist.<br \/> 43:39 Why better representations matter more than raw compute (grokking was surprising).<br \/> 48:46 How babies build a world model from touch and perception.<br \/> 51:05 What comes after copilots: agent teams, multimodality and new AI workflows.<br \/> 55:32 Can AI help us discover new forms of taste and aesthetics?<br \/> 59:49 Using AI to learn art history and invent a transhumanist aesthetic.<br \/> 1:01:47 When AI helps everyone look professional, what still counts as real skill?<br \/> 1:03:56 What happens when the self starts to merge with AI.<br \/> 1:05:43 How AI changes the way we think and create.<br \/> 1:08:10 What happens when AI starts shaping human relationships.<br \/> 1:11:18 Why feeling in control can matter more than being right.<br \/> 1:12:58 Why intelligence without wisdom is very dangerous.<br \/> 1:17:45 AI via scaling statistical pattern matching vs symbolic (&amp; causal) reasoning. 
Can LLMs learn causality or just correlation?<br \/> 1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?<br \/> 1:24:02 10 years to the singularity?<br \/> 1:25:27 AI, coordination and the corruption problem.<br \/> 1:29:47 Can AI become more moral than us (humans)? And if so, should it?<br \/> 1:34:31 Why pluralism still leaves moral collisions unresolved.<br \/> 1:34:31 Traversing the landscape of norms (values).<br \/> 1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)<br \/> 1:43:08 Moral realism, evolution &amp; game-theoretic symmetries.<br \/> 1:48:01 Is there a global optimum of moral coordination? Is that God?<br \/> 1:55:12 Metaphors of the body politic, the body of Christ, Omega Point theory, Leviathan.<br \/> 1:59:36 Will superintelligences converge into a cosmic singleton?<\/p>\n<p>Many thanks for tuning in!<br \/> Please support SciFuture by subscribing and sharing!<br \/> Buy me a coffee? <a href=\"https:\/\/buymeacoffee.com\/tech101z\">https:\/\/buymeacoffee.com\/tech101z<\/a>.<\/p>\n<p>Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?<br \/> Please fill out this form: <a href=\"https:\/\/docs.google.com\/forms\/d\/1mr9P\">https:\/\/docs.google.com\/forms\/d\/1mr9P<\/a>\u2026<\/p>\n<ul class=\"\" dir=\"ltr\">\n<li>Science, Technology &amp; the Future \u2014 #SciFuture \u2014 <a href=\"http:\/\/scifuture.org\">http:\/\/scifuture.org<\/a><\/li>\n<\/ul>\n<p>Kind regards,<br \/> Adam Ford.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? 
Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It\u2019s a deep and winding discussion with so many interesting topics covered! 0:00 Intro. 0:37 What [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1499,30,41,6,64,1501],"tags":[],"class_list":["post-233083","post","type-post","status-publish","format-standard","hentry","category-cyborgs","category-ethics","category-information-science","category-robotics-ai","category-singularity","category-transhumanism-2"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/233083","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=233083"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/233083\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=233083"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=233083"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=233083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}