In this interview, Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores the challenges of measuring moral significance, the risks of dismissing AI systems as mere tools, and strategies for mitigating suffering in artificial systems. Drawing on themes from the paper ‘Taking AI Welfare Seriously’ and his forthcoming book ‘The Moral Circle’, Sebo examines how to detect markers of sentience in AI systems, and what to do when we find them. We explore ethical considerations through the lens of population ethics and AI governance (especially important in an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI aiding human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.
Paper ‘Taking AI Welfare Seriously’: https://eleosai.org/papers/20241030_T…
Book — The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20…
Jeff’s Website: https://jeffsebo.net/
Eleos AI: https://eleosai.org/
Chapters:
00:00 Intro
01:40 Implications of failing to take AI welfare seriously
04:43 Engaging the disengaged
08:18 How Blake Lemoine’s ‘disclosure’ influenced public discourse
12:45 Will people take AI sentience seriously if it is seen as tools or commodities?
16:19 Importance, neglectedness and tractability (INT)
20:40 Tractability: Difficulties in measuring moral significance, e.g. by aggregate brain mass
22:25 Population ethics and the repugnant conclusion
25:16 Pascal’s mugging: low probabilities of infinite or astronomically large costs and rewards
31:21 Distinguishing real high-stakes causes from infinite-utility scams
33:45 The nature of consciousness, and what to measure when looking for moral significance in AI
39:35 Varieties of views on what’s important. Computational functionalism
44:34 AI arms race dynamics and the need for governance
48:57 Indirect approaches to achieving ideal solutions — indirect normativity
51:38 The marker method — looking for morally relevant behavioral & anatomical markers in AI
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems — reverse wireheading
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo
1:09:46 What kind of world could be achieved
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems
1:16:30 AI to help humans improve mood and quality of experience
1:18:48 How to find out more about Jeff Sebo’s research
1:19:12 How to get involved

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…
Kind regards, Adam Ford
Hugo de Garis believes that too many commentators on AI are avoiding the fundamental issue: before long, machines will become vastly more intelligent than humans—potentially trillions of trillions of times more, or even beyond that. Humanity will soon face a critical decision: either accept that humans will become the second most intelligent species or impose a global ban on the creation of artilects (artificial intellects). He will speak at Future Day.
Future Day is coming up — no fees — just pure uncut futurology — spanning timezones — Feb 28th-March 1st.
We have:
* Hugo de Garis on AI, Humanity & the Longterm
* Linda MacDonald Glenn on Imbuing AI with Wisdom
* James Barrat discussing his new book ‘The Intelligence Explosion’
* Kristian Rönn on The Darwinian Trap
* Phan, Xuan Tan on AI Safety in Education
* Robin Hanson on Cultural Drift
* James Hughes & James Newton-Thomas discussing the Human Wage Crash & UBI
* James Hughes on The Future Virtual You
* Ben Goertzel & Hugo de Garis doing a Singularity Salon
* Susan Schneider, Ben Goertzel & Robin Hanson discussing Ghosts in the Machine: Can AI Ever Wake Up?
* Shun Yoshizawa (& Ken Mogi?) on LLM Metacognition
Why not celebrate the amazing future we are collectively creating?