Possible Paths to Artificial General Intelligence

Yoshua Bengio (MILA), Irina Higgins (DeepMind), Nick Bostrom (FHI), Yi Zeng (Chinese Academy of Sciences), and moderator Joshua Tenenbaum (MIT) discuss possible paths to artificial general intelligence.

The Beneficial AGI 2019 Conference: https://futureoflife.org/beneficial-agi-2019/

After our Puerto Rico AI conference in 2015 and our Asilomar Beneficial AI conference in 2017, we returned to Puerto Rico at the start of 2019 to talk about Beneficial AGI. We couldn’t be more excited to see all of the groups, organizations, conferences and workshops that have cropped up in the last few years to ensure that AI today and in the near future will be safe and beneficial. And so we now wanted to look further ahead to artificial general intelligence (AGI), the classic goal of AI research, which promises tremendous transformation in society. Beyond mitigating risks, we want to explore how we can design AGI to help us create the best future for humanity.

We again brought together an amazing group of AI researchers from academia and industry, as well as thought leaders in economics, law, policy, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day technical workshop to look more deeply at how we can create beneficial AGI, and we followed that with a 2.5-day conference, in which people from a broader AI background considered the opportunities and challenges related to the future of AGI and steps we can take today to move toward an even better future.

Dr. Daniel Dennett — Freedom Evolves: Free Will, Determinism, and Evolution

This lecture was recorded on February 3, 2003 as part of the Distinguished Science Lecture Series hosted by Michael Shermer and presented by The Skeptics Society in California (1992–2015).

Can there be freedom and free will in a deterministic world? Drawing on evolutionary biology, cognitive neuroscience, economics, and philosophy, renowned philosopher and public intellectual Dr. Dennett argues that free will exists in a deterministic world for humans alone, and that this gives us morality, meaning, and moral culpability. Weaving a richly detailed narrative, Dennett explains in a series of strikingly original arguments that, far from being an enemy of traditional explorations of freedom, morality, and meaning, the evolutionary perspective can be an indispensable ally. In Freedom Evolves, Dennett seeks to place ethics on the foundation it deserves: a realistic, naturalistic, potentially unified vision of our place in nature.

Watch some of the past lectures for free online.
https://www.skeptic.com/lectures/

SUPPORT THE SOCIETY

You play a vital part in our commitment to promote science and reason. If you enjoy watching the Distinguished Science Lecture Series, please show your support by making a donation, or by becoming a patron. Your ongoing patronage will help ensure that sound scientific viewpoints are heard around the world.

Elon Musk on Artificial Intelligence (and the Basics of AI) — Documentary

This mini documentary takes a look at Elon Musk and his thoughts on artificial intelligence, giving examples of how it is being used today, from Tesla cars to Facebook and Instagram, and exploring what the future of artificial intelligence has in store for us, from the risks and Elon Musk's Neuralink chip to robotics.

The video will also go over the fundamentals of artificial intelligence and machine learning, breaking AI down for beginners into simple terms and showing what AI is, along with neural nets.

Elon Musk’s Book Recommendations (Affiliate Links)
• Life 3.0: Being Human in the Age of Artificial Intelligence https://amzn.to/3790bU1

• Our Final Invention: Artificial Intelligence and the End of the Human Era https://amzn.to/351t9Ta

• Superintelligence: Paths, Dangers, Strategies https://amzn.to/3j28WkP

David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.

Topics discussed in this episode include:

- Virtual reality as genuine reality.
- Why VR is compatible with the good life.
- Why we can never know whether we're in a simulation.
- Consciousness in virtual realities.
- The ethics of simulated beings.

You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-realit…hilosophy/

Listen to the audio version of this episode here: https://podcasts.apple.com/us/podcast/david-chalmers-on-real…0549092329

Check out David’s book and website here: http://consc.net/

Robert Long: Artificial Sentience, Digital Minds

Robert Long is a research fellow at the Future of Humanity Institute. His work sits at the intersection of AI safety and the philosophy of AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever's "slightly conscious" tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

Audio & transcript: https://theinsideview.ai/roblong
Michaël: https://twitter.com/MichaelTrazzi
Robert: https://twitter.com/rgblong

Robert's blog: https://experiencemachines.substack.com

OUTLINE
00:00:00 Intro
00:01:11 The LaMDA Controversy
00:07:06 Defining AGI And Consciousness
00:10:30 The Slightly Conscious Tweet
00:13:16 Could Large Language Models Become Conscious?
00:18:03 Blake Lemoine Does Not Negotiate With Terrorists
00:25:58 Could We Actually Test Artificial Consciousness?
00:29:33 From Metaphysics To Illusionism
00:35:30 How We Could Decide On The Moral Patienthood Of Language Models
00:42:00 Predictive Processing, Global Workspace Theories and Integrated Information Theory
00:49:46 Have You Tried DMT?
00:51:13 Is Valence Just The Reward in Reinforcement Learning?
00:54:26 Are Pain And Pleasure Symmetrical?
01:04:25 From Charismatic AI Systems to Artificial Sentience
01:15:07 Sharing The World With Digital Minds
01:24:33 Why AI Alignment Is More Pressing Than Artificial Sentience
01:39:48 Why Moral Personhood Could Require Memory
01:42:41 Last Thoughts And Further Readings

AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

I have a question for you that seems to be garnering a lot of handwringing and heated debate these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and closely examine the answers and how they have been elucidated. My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics.


A worthy question is whether humans will outlive AI, though the worthiness of the question is perhaps different from what you think it is. All in all, important AI Ethics ramifications arise.

William MacAskill: ‘There are 80 trillion people yet to come. They need us to start protecting them’

All those numbers seem incalculably abstract but, according to the moral philosopher William MacAskill, they should command our attention. He is a proponent of what’s known as longtermism – the view that the deep future is something we have to address now. How long we last as a species and what kind of state of wellbeing we achieve, says MacAskill, may have a lot to do with what decisions we make and actions we take at the moment and in the foreseeable future.

That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.

We tend to think of moral philosophers as whiskery sages, but MacAskill is a youthful 35 and a disarmingly informal character in person, or rather on a Zoom call from San Francisco, where he is promoting the book.

How Scientists Revived Organs in Pigs an Hour After They Died

Although OrganEx helps revitalize pigs' organs, it is far from bringing a deceased animal back to life. Rather, the organs were better protected from the low oxygen levels that occur during heart attacks or strokes.

“One could imagine that the OrganEx system (or components thereof) might be used to treat such people in an emergency,” said Porte.

The technology could also help preserve donor organs, but there’s a long way to go. To Dr. Brendan Parent, director of transplant ethics and policy research at NYU Grossman School of Medicine, OrganEx may force a rethink for the field. For example, is it possible that someone could have working peripheral organs but never regain consciousness? As medical technology develops, death becomes a process, not a moment.

Can we make the future a million years from now go better?

You can buy What We Owe the Future here: https://www.basicbooks.com/titles/william-macaskill/what-we-…541618626/

In his new book about longtermism, What We Owe the Future, the philosopher William MacAskill argues that concern for the long-term future should be a key moral priority of our time. There are three central claims that justify this view. 1. Future people matter. 2. There could be a lot of them. 3. We can make their lives go better. In this video, we focus on the third claim.

We’ve had the opportunity to read What We Owe the Future in advance thanks to the Forethought Foundation. They reached out asking if we could make a video on the occasion of the book launch. We were happy to collaborate, to help spread the ideas of the longtermist philosophy as far as possible.

Interested in donating to safeguard the long-term future of humanity? You can donate to an expert-managed fund at: https://www.givingwhatwecan.org/charities/longtermism-fund

PATREON, MEMBERSHIP, KO-FI

🟠 Patreon: https://www.patreon.com/rationalanimations
