{"id":207104,"date":"2025-02-24T05:03:06","date_gmt":"2025-02-24T11:03:06","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/02\/taking-ai-welfare-seriously"},"modified":"2025-02-24T05:03:06","modified_gmt":"2025-02-24T11:03:06","slug":"taking-ai-welfare-seriously","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/02\/taking-ai-welfare-seriously","title":{"rendered":"Taking AI Welfare Seriously"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/Zc2446WOKJg?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>In this interview Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI as mere tools, and strategies to mitigate suffering in artificial systems. Drawing on themes from the paper \u2018Taking AI Welfare Seriously\u2019 and his up and coming book \u2018The Moral Circle\u2019, Sebo examines how to detect markers of sentience in AI systems, and what to do about it. We explore ethical considerations through the lens of population ethics, AI governance (especially important in an AI arms race), and discuss indirect approaches detecting sentience, as well as AI aiding in human welfare. 
This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.<\/p>\n<p>Paper \u2018Taking AI Welfare Seriously\u2019: <a href=\"https:\/\/eleosai.org\/papers\/20241030_T\">https:\/\/eleosai.org\/papers\/20241030_T<\/a>\u2026 \u2014 The Moral Circle by Jeff Sebo: <a href=\"https:\/\/www.amazon.com.au\/Moral-Circl?tag=lifeboatfound-20?tag=lifeboatfound-20\">https:\/\/www.amazon.com.au\/Moral-Circl?tag=lifeboatfound-20?tag=lifeboatfound-20<\/a>\u2026 Jeff\u2019s Website: <a href=\"https:\/\/jeffsebo.net\/\">https:\/\/jeffsebo.net\/<\/a> Eleos AI: <a href=\"https:\/\/eleosai.org\/\">https:\/\/eleosai.org\/<\/a> Chapters: 00:00 Intro 01:40 Implications of failing to take AI welfare seriously 04:43 Engaging the disengaged 08:18 How Blake Lemoine\u2019s \u2018disclosure\u2019 influenced public discourse 12:45 Will people take AI sentience seriously if it is seen as tools or commodities? 16:19 Importance, neglectedness and tractability (INT) 20:40 Tractability: Difficulties in measuring moral significance \u2014 e.g. by aggregate brain mass 22:25 Population ethics and the repugnant conclusion 25:16 Pascal\u2019s mugging: low probabilities of infinite or astronomically large costs and rewards 31:21 Distinguishing real high stakes causes from infinite utility scams 33:45 The nature of consciousness, and what to measure in looking for moral significance in AI 39:35 Varieties of views on what\u2019s important. Computational functionalism 44:34 AI arms race dynamics and the need for governance 48:57 Indirect approaches to achieving ideal solutions \u2014 Indirect normativity 51:38 The marker method \u2014 looking for morally relevant behavioral &amp; anatomical markers in AI 56:39 What to do about suffering in AI? 1:00:20 Building fault tolerance to noxious experience into AI systems \u2014 reverse wireheading 1:05:15 Will AI be more friendly if it has sentience? 
1:08:47 Book: The Moral Circle by Jeff Sebo 1:09:46 What kind of world could be achieved 1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems 1:16:30 AI to help humans improve mood and quality of experience 1:18:48 How to find out more about Jeff Sebo\u2019s research 1:19:12 How to get involved Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: <a href=\"https:\/\/docs.google.com\/forms\/d\/1mr9P\">https:\/\/docs.google.com\/forms\/d\/1mr9P<\/a>\u2026 Kind regards, Adam Ford <\/p>\n<ul class=\"\" dir=\"ltr\">\n<li>Science, Technology &amp; the Future \u2014 #SciFuture \u2014 <a href=\"http:\/\/scifuture.org\">http:\/\/scifuture.org<\/a><\/li>\n<\/ul>\n<div class=\"more-link-wrapper\"> <a class=\"more-link\" href=\"https:\/\/lifeboat.com\/blog\/2025\/02\/taking-ai-welfare-seriously\">Continue reading \u201cTaking AI Welfare Seriously\u201d | &gt;<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>In this interview, Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI as mere tools, and strategies to mitigate suffering in artificial systems. 
Drawing on themes from the paper \u2018Taking AI [\u2026]<\/p>\n","protected":false},"author":510,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[30,1759,9,6],"tags":[],"class_list":["post-207104","post","type-post","status-publish","format-standard","hentry","category-ethics","category-governance","category-military","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/207104","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/510"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=207104"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/207104\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=207104"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=207104"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=207104"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}