In this interview, Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores the challenges of measuring moral significance, the risks of dismissing AI systems as mere tools, and strategies for mitigating suffering in artificial systems. Drawing on themes from the paper ‘Taking AI Welfare Seriously’ and his forthcoming book ‘The Moral Circle’, Sebo examines how to detect markers of sentience in AI systems and what to do when we find them. We explore ethical considerations through the lens of population ethics and AI governance (especially important amid an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI’s potential to aid human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.
Paper ‘Taking AI Welfare Seriously’: https://eleosai.org/papers/20241030_T…
- Science, Technology & the Future — #SciFuture — http://scifuture.org
Book — The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20…
Jeff’s Website: https://jeffsebo.net/
Eleos AI: https://eleosai.org/
Chapters:
00:00 Intro.
01:40 Implications of failing to take AI welfare seriously.
04:43 Engaging the disengaged.
08:18 How Blake Lemoine’s ‘disclosure’ influenced public discourse.
12:45 Will people take AI sentience seriously if AI is seen as a tool or commodity?
16:19 Importance, neglectedness and tractability (INT).
20:40 Tractability: Difficulties in measuring moral significance — e.g. by aggregate brain mass.
22:25 Population ethics and the repugnant conclusion.
25:16 Pascal’s mugging: low probabilities of infinite or astronomically large costs and rewards.
31:21 Distinguishing real high-stakes causes from infinite-utility scams.
33:45 The nature of consciousness, and what to measure in looking for moral significance in AI.
39:35 Varieties of views on what’s important. Computational functionalism.
44:34 AI arms race dynamics and the need for governance.
48:57 Indirect approaches to achieving ideal solutions — Indirect normativity.
51:38 The marker method — looking for morally relevant behavioral & anatomical markers in AI.
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems — reverse wireheading.
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo.
1:09:46 What kind of world could be achieved.
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems.
1:16:30 AI to help humans improve mood and quality of experience.
1:18:48 How to find out more about Jeff Sebo’s research.
1:19:12 How to get involved.

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards,
Adam Ford