
This lecture was recorded on February 3, 2003 as part of the Distinguished Science Lecture Series hosted by Michael Shermer and presented by The Skeptics Society in California (1992–2015).

Can there be freedom and free will in a deterministic world? Renowned philosopher and public intellectual Dr. Daniel Dennett, drawing on evolutionary biology, cognitive neuroscience, economics, and philosophy, argues that free will exists in a deterministic world for humans alone, and that this gives us morality, meaning, and moral culpability. Weaving a richly detailed narrative, Dennett shows in a series of strikingly original arguments that, far from being an enemy of traditional explorations of freedom, morality, and meaning, the evolutionary perspective can be an indispensable ally. In Freedom Evolves, Dennett seeks to place ethics on the foundation it deserves: a realistic, naturalistic, potentially unified vision of our place in nature.

Dr. Daniel Dennett — Freedom Evolves: Free Will, Determinism, and Evolution

Watch some of the past lectures for free online.

This mini documentary takes a look at Elon Musk and his thoughts on artificial intelligence. It gives examples of how AI is being used today, from Tesla cars to Facebook and Instagram, and looks at what the future of artificial intelligence has in store for us, from the risks and Elon Musk’s Neuralink chip to robotics.

The video also goes over the fundamentals of artificial intelligence and machine learning, breaking AI down into simple terms for beginners and showing what AI is, along with neural nets.

Elon Musk’s Book Recommendations (Affiliate Links)
• Life 3.0: Being Human in the Age of Artificial Intelligence https://amzn.to/3790bU1

• Our Final Invention: Artificial Intelligence and the End of the Human Era.

David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.

Topics discussed in this episode include:

- Virtual reality as genuine reality.
- Why VR is compatible with the good life.
- Why we can never know whether we’re in a simulation.
- Consciousness in virtual realities.
- The ethics of simulated beings.

You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-realit…hilosophy/

Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of AI safety and the philosophy of AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever’s “slightly conscious” tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

Audio & transcript: https://theinsideview.ai/roblong.
Michaël: https://twitter.com/MichaelTrazzi.
Robert: https://twitter.com/rgblong.

Robert’s blog: https://experiencemachines.substack.com.

I have a question for you that seems to be garnering a lot of hand-wringing and heated debate these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and closely examine the answers and how those answers have been elucidated. My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics.


A worthy question is whether humans will outlive AI, though the worthiness of the question is perhaps different from what you might think. All in all, important AI Ethics ramifications arise.

All those numbers seem incalculably abstract but, according to the moral philosopher William MacAskill, they should command our attention. He is a proponent of what’s known as longtermism – the view that the deep future is something we have to address now. How long we last as a species and what kind of state of wellbeing we achieve, says MacAskill, may have a lot to do with what decisions we make and actions we take at the moment and in the foreseeable future.

That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.

We tend to think of moral philosophers as whiskery sages, but MacAskill is a youthful 35 and a disarmingly informal character in person, or rather on a Zoom call from San Francisco, where he is promoting the book.

Yes, it does. Although OrganEx helps revitalize pigs’ organs, it is far from bringing a deceased animal back to life. Rather, the organs were better protected from the low oxygen levels that occur during heart attacks or strokes.

“One could imagine that the OrganEx system (or components thereof) might be used to treat such people in an emergency,” said Porte.

The technology could also help preserve donor organs, but there’s a long way to go. To Dr. Brendan Parent, director of transplant ethics and policy research at NYU Grossman School of Medicine, OrganEx may force a rethink for the field. For example, is it possible that someone could have working peripheral organs but never regain consciousness? As medical technology develops, death becomes a process, not a moment.

You can buy What We Owe the Future here: https://www.basicbooks.com/titles/william-macaskill/what-we-…541618626/

In his new book about longtermism, What We Owe the Future, the philosopher William MacAskill argues that concern for the long-term future should be a key moral priority of our time. There are three central claims that justify this view. 1. Future people matter. 2. There could be a lot of them. 3. We can make their lives go better. In this video, we focus on the third claim.

We’ve had the opportunity to read What We Owe the Future in advance thanks to the Forethought Foundation. They reached out asking if we could make a video on the occasion of the book launch. We were happy to collaborate, to help spread the ideas of the longtermist philosophy as far as possible.

Interested in donating to safeguard the long-term future of humanity? You can donate to an expert managed fund at: https://www.givingwhatwecan.org/charities/longtermism-fund.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching.
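To make “computational pattern matching” concrete, here is a minimal illustrative sketch in Python (not drawn from any of the sources above; the data and labels are made up): a one-nearest-neighbor classifier that tags a new input purely by its similarity to stored examples.

    import math

    # Toy "training data": (feature vector, label) pairs the system has memorized.
    # Features and labels are hypothetical, chosen only for illustration.
    examples = [
        ((5.1, 3.5), "cat"),
        ((6.7, 3.1), "dog"),
        ((7.2, 3.6), "horse"),
    ]

    def classify(point):
        # Pick the label of the closest stored example: geometry and
        # statistics, with no comprehension of what "cat" or "dog" mean.
        distance, label = min(
            (math.dist(point, features), label) for features, label in examples
        )
        return label

    print(classify((5.0, 3.4)))  # -> "cat", matched on proximity alone

Deep learning swaps this hand-chosen distance for learned feature transformations across millions of parameters, but the underlying operation remains statistical matching against training data rather than genuine understanding.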


AI Asymmetry is getting larger and worsening, particularly via the advent of fully autonomous systems. Society needs to be aware of this and to consider devising remedies, such as arming more people with AI to essentially fight fire with fire.