
What is limb regeneration and what species possess it? How is it achieved? What does this tell us about intelligence in biological systems and how could this information be exploited to develop human therapeutics? Well, in this video, we discuss many of these topics with Dr Michael Levin, Principal Investigator at Tufts University, whose lab studies anatomical and behavioural decision-making at multiple scales of biological, artificial, and hybrid systems.

Find Michael on Twitter — https://twitter.com/drmichaellevin

Find me on Twitter — https://twitter.com/EleanorSheekey

Support the channel:
through PayPal — https://paypal.me/sheekeyscience?country.x=GB&locale.x=en_GB
through Patreon — https://www.patreon.com/TheSheekeyScienceShow

TIMESTAMPS:
Intro — 00:00
Regeneration & Evolution — 01:30
Regrowing Limbs and Wearable Bioreactors — 09:30
Human regenerative medicine approaches — 19:20
Bioelectricity — 24:00
Problem solving in morphological space — 40:45
Where is memory stored — 44:30
Intelligence — 50:30
Xenobots & synthetic living machines — 56:00
Collective intelligence & ethics — 1:10:00
Future of human species — 1:17:00
Advice — 1:24:00

Please note that The Sheekey Science Show is distinct from Eleanor Sheekey’s teaching and research roles at the University of Cambridge. The information provided in this show is not medical advice, nor should it be taken or applied as a replacement for medical advice. The Sheekey Science Show and guests assume no liability for the application of the information discussed.

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”
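
To make the kind of judgment described above concrete, here is a minimal, purely illustrative sketch of a rule-based triage policy in Python. The Patient fields, the 0–10 urgency scale, and the tie-breaking rule are all assumptions of ours for illustration; they are not the researchers' actual framework.

```python
# Hypothetical sketch (not the researchers' actual model): a naive
# rule-based triage policy for a carebot choosing whom to assist first.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    urgency: int             # assumed scale: 0 = routine ... 10 = life-threatening
    conscious: bool          # can the patient give consent right now?
    requests_priority: bool  # is the patient demanding to be treated first?

def choose_patient(patients: list[Patient]) -> Patient:
    """Pick the patient to assist first.

    Rule (an assumption): clinical urgency dominates; a verbal demand
    for priority only breaks ties. Consent from an unconscious patient
    is presumed for urgent care, which is exactly the kind of
    contestable judgment the article highlights.
    """
    return max(patients, key=lambda p: (p.urgency, p.requests_priority))

patients = [
    Patient("A", urgency=9, conscious=False, requests_priority=False),
    Patient("B", urgency=3, conscious=True, requests_priority=True),
]
print(choose_patient(patients).name)  # -> "A": urgency outranks the demand
```

Even this toy rule embeds ethical commitments (urgency outranks expressed preference; consent is presumed when a patient is unconscious), which is precisely why such guidelines need to be built into AI decision-making deliberately rather than left implicit.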

So how should companies begin to think about ethical data management? What measures can they put in place to ensure that they are using consumer, patient, HR, facilities, and other forms of data appropriately across the value chain—from collection to analytics to insights?

We began to explore these questions by speaking with about a dozen global business leaders and data ethics experts. Through these conversations, we learned about some common data management traps that leaders and organizations can fall into, despite their best intentions. These traps include thinking that data ethics does not apply to your organization, that legal and compliance have data ethics covered, and that data scientists have all the answers—to say nothing of chasing short-term ROI at all costs and looking only at the data rather than their sources.

In this article, we explore these traps and suggest some potential ways to avoid them, such as adopting new standards for data management, rethinking governance models, and collaborating across disciplines and organizations. This list of potential challenges and remedies is not exhaustive; our research base was relatively small, and leaders could face many other obstacles, beyond our discussion here, to the ethical use of data. But what’s clear from our research is that data ethics needs both more and sustained attention from all members of the C-suite, including the CEO.

Ray Kurzweil explores how and when we might create human-level artificial intelligence at the January 2017 Asilomar conference organized by the Future of Life Institute.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

For more information on the BAI ‘17 Conference:

AI Principles

https://futureoflife.org/bai-2017/

A Principled AI Discussion in Asilomar

Yoshua Bengio (MILA), Irina Higgins (DeepMind), Nick Bostrom (FHI), Yi Zeng (Chinese Academy of Sciences), and moderator Joshua Tenenbaum (MIT) discuss possible paths to artificial general intelligence.

The Beneficial AGI 2019 Conference: https://futureoflife.org/beneficial-agi-2019/

After our Puerto Rico AI conference in 2015 and our Asilomar Beneficial AI conference in 2017, we returned to Puerto Rico at the start of 2019 to talk about Beneficial AGI. We couldn’t be more excited to see all of the groups, organizations, conferences and workshops that have cropped up in the last few years to ensure that AI today and in the near future will be safe and beneficial. And so we now wanted to look further ahead to artificial general intelligence (AGI), the classic goal of AI research, which promises tremendous transformation in society. Beyond mitigating risks, we want to explore how we can design AGI to help us create the best future for humanity.

We again brought together an amazing group of AI researchers from academia and industry, as well as thought leaders in economics, law, policy, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day technical workshop to look more deeply at how we can create beneficial AGI, and we followed that with a 2.5-day conference, in which people from a broader AI background considered the opportunities and challenges related to the future of AGI and steps we can take today to move toward an even better future.

This lecture was recorded on February 3, 2003 as part of the Distinguished Science Lecture Series hosted by Michael Shermer and presented by The Skeptics Society in California (1992–2015).

Can there be freedom and free will in a deterministic world? Renowned philosopher and public intellectual Dr. Daniel Dennett, drawing on evolutionary biology, cognitive neuroscience, economics, and philosophy, demonstrates that free will exists in a deterministic world for humans only, and that this gives us morality, meaning, and moral culpability. Weaving a richly detailed narrative, Dennett explains in a series of strikingly original arguments that, far from being an enemy of traditional explorations of freedom, morality, and meaning, the evolutionary perspective can be an indispensable ally. In Freedom Evolves, Dennett seeks to place ethics on the foundation it deserves: a realistic, naturalistic, potentially unified vision of our place in nature.

Dr. Daniel Dennett — Freedom Evolves: Free Will, Determinism, and Evolution

Watch some of the past lectures for free online.
https://www.skeptic.com/lectures/

SUPPORT THE SOCIETY

You play a vital part in our commitment to promote science and reason. If you enjoy watching the Distinguished Science Lecture Series, please show your support by making a donation, or by becoming a patron. Your ongoing patronage will help ensure that sound scientific viewpoints are heard around the world.

This mini documentary takes a look at Elon Musk and his thoughts on artificial intelligence, giving examples of how it is being used today, from Tesla cars to Facebook and Instagram, and what the future of artificial intelligence has in store for us, from the risks and Elon Musk's Neuralink chip to robotics.

The video will also go over the fundamentals and basics of artificial intelligence and machine learning, breaking down AI for beginners into simple terms and showing what AI is, along with neural nets.

Elon Musk’s Book Recommendations (Affiliate Links)
• Life 3.0: Being Human in the Age of Artificial Intelligence https://amzn.to/3790bU1

• Our Final Invention: Artificial Intelligence and the End of the Human Era
https://amzn.to/351t9Ta

• Superintelligence: Paths, Dangers, Strategies
https://amzn.to/3j28WkP


David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.

Topics discussed in this episode include:

- Virtual reality as genuine reality
- Why VR is compatible with the good life
- Why we can never know whether we're in a simulation
- Consciousness in virtual realities
- The ethics of simulated beings

You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-realit…hilosophy/

Listen to the audio version of this episode here: https://podcasts.apple.com/us/podcast/david-chalmers-on-real…0549092329

Check out David’s book and website here: http://consc.net/

Robert Long is a research fellow at the Future of Humanity Institute. His work sits at the intersection of the philosophy of AI safety and the philosophy of AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever’s “slightly conscious” tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

Audio & transcript: https://theinsideview.ai/roblong
Michaël: https://twitter.com/MichaelTrazzi
Robert: https://twitter.com/rgblong

Robert’s blog: https://experiencemachines.substack.com

OUTLINE
00:00:00 Intro
00:01:11 The LaMDA Controversy
00:07:06 Defining AGI And Consciousness
00:10:30 The Slightly Conscious Tweet
00:13:16 Could Large Language Models Become Conscious?
00:18:03 Blake Lemoine Does Not Negotiate With Terrorists
00:25:58 Could We Actually Test Artificial Consciousness?
00:29:33 From Metaphysics To Illusionism
00:35:30 How We Could Decide On The Moral Patienthood Of Language Models
00:42:00 Predictive Processing, Global Workspace Theories and Integrated Information Theory
00:49:46 Have You Tried DMT?
00:51:13 Is Valence Just The Reward In Reinforcement Learning?
00:54:26 Are Pain And Pleasure Symmetrical?
01:04:25 From Charismatic AI Systems To Artificial Sentience
01:15:07 Sharing The World With Digital Minds
01:24:33 Why AI Alignment Is More Pressing Than Artificial Sentience
01:39:48 Why Moral Personhood Could Require Memory
01:42:41 Last Thoughts And Further Readings

I have a question for you that seems to be garnering a lot of handwringing and heated debate these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question, look closely at the answers, and examine how those answers have been arrived at. My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics.


A worthy question is whether humans will outlive AI, though the question is worthy for reasons perhaps different from what you think. All in all, important AI Ethics ramifications arise.