One of the leading AI companies is funding academic research into algorithms that can predict humans’ moral judgements.
Scientists raise the alarm following updated research ethics guidelines on heritable human genome editing.
Dr. Alexander Rosenberg is the R. Taylor Cole Professor of Philosophy at Duke University. He has been a visiting professor and fellow at the Center for Philosophy of Science at the University of Minnesota, as well as at the University of California, Santa Cruz, and Oxford University, and a visiting fellow of the Philosophy Department at the Research School of Social Science of the Australian National University. In 2016 he was the Benjamin Meaker Visiting Professor at the University of Bristol. He has held fellowships from the National Science Foundation, the American Council of Learned Societies, and the John Simon Guggenheim Foundation. In 1993, Dr. Rosenberg received the Lakatos Award in the philosophy of science. In 2006–2007 he held a fellowship at the National Humanities Center, and he was the Phi Beta Kappa-Romanell Lecturer for 2006–2007. He is the author of both fiction and non-fiction, including The Atheist’s Guide to Reality, The Girl from Krakow, and How History Gets Things Wrong.
In this episode, we focus on Dr. Rosenberg’s most recent book, How History Gets Things Wrong, and also touch on some of the topics of The Atheist’s Guide to Reality. We talk about the theory of mind and how it evolved; the errors of narrative history and the negative consequences it can produce; mind-brain dualism; what neuroscience tells us about how our brain and cognition operate; social science, biology, and evolution; the role that evolutionary game theory can play in explaining historical events and social phenomena; why beliefs, motivations, desires, and other mental constructs might not exist at all, and the implications for moral philosophy; whether AI could develop these same illusions; and nihilism.
The 3 Body Problem Explored: Cosmic Sociology, Longtermism & Existential Risk — round table discussion with three great minds: Robin Hanson, Anders Sandberg and Joscha Bach — moderated by Adam Ford (SciFuture) and James Hughes (IEET).
Some of the items discussed:
- How can narratives keep people engaged without falling short of realism?
- In what ways is AI superintelligence kept off stage to allow a narrative that is familiar and easier to make sense of?
- Differences in moral perspectives — moral realism, existentialism and anti-realism.
- Will values of advanced civilisations converge to a small number of possibilities, or will they vary greatly?
- How much will competition be the dominant dynamic in the future, compared to co-ordination?
- In a competitive dynamic, will defense or offense be the dominant strategy?
Many thanks for tuning in!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Michael Levin: What is Synthbiosis? Diverse Intelligence Beyond AI & The Space of Possible Minds
Michael Levin is a Distinguished Professor in the Biology department at Tufts University and associate faculty at the Wyss Institute for Bioinspired Engineering at Harvard University. He holds the Vannevar Bush Endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Prior to college, Michael Levin worked as a software engineer and independent contractor in the field of scientific computing. He attended Tufts University, interested in artificial intelligence and unconventional computation. To explore the algorithms by which the biological world implements complex adaptive behavior, he earned dual B.S. degrees, in computer science and in biology, and then received a PhD from Harvard University. He did post-doctoral training at Harvard Medical School, where he began to uncover a new bioelectric language by which cells coordinate their activity during embryogenesis. His independent laboratory develops new molecular-genetic and conceptual tools to probe large-scale information processing in regeneration, embryogenesis, and cancer suppression.
TIMESTAMPS:
0:00 — Introduction.
1:41 — Creating High-level General Intelligences.
7:00 — Ethical implications of Diverse Intelligence beyond AI & LLMs.
10:30 — Solving the Fundamental Paradox that faces all Species.
15:00 — Evolution creates Problem Solving Agents & the Self is a Dynamical Construct.
23:00 — Mike on Stephen Grossberg.
26:20 — A Formal Definition of Diverse Intelligence (DI)
30:50 — Intimate relationships with AI? Importance of Cognitive Light Cones.
38:00 — Cyborgs, hybrids, chimeras, & a new concept called “Synthbiosis”.
45:51 — Importance of the symbiotic relationship between Science & Philosophy.
53:00 — The Space of Possible Minds.
58:30 — Is Mike Playing God?
1:02:45 — A path forward: through the ethics filter for civilization.
1:09:00 — Mike on Daniel Dennett (RIP).
1:14:02 — An Ethical Synthbiosis that goes beyond “are you real or faking it”.
1:25:47 — Conclusion.
EPISODE LINKS:
- Mike’s Round 1: https://youtu.be/v6gp-ORTBlU
- Mike’s Round 2: https://youtu.be/kMxTS7eKkNM
- Mike’s Channel: https://www.youtube.com/@drmichaellevin
- Mike’s Website: https://drmichaellevin.org/
- Blog Website: https://thoughtforms.life
- Mike’s Twitter: https://twitter.com/drmichaellevin
- Mike’s Publications: https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en
- Mike’s NOEMA piece: https://www.noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence/
- Stephen Grossberg: https://youtu.be/bcV1eSgByzg
- Mark Solms: https://youtu.be/rkbeaxjAZm4
- VPRO Roundtable: https://youtu.be/RVrnn7QW6Jg?feature=shared
R.I.P. Philip George Zimbardo. March 23, 1933 – October 14, 2024.
“Success is not about reaching a destination; it’s about the journey and the person you become along the way.”
Philip G. Zimbardo, one of the world’s most renowned psychologists, died Oct. 14 in his home in San Francisco. He was 91.
Broadly, Zimbardo’s research explored how environments influence behavior. He is best known for his controversial 1971 study, the Stanford Prison Experiment, conducted with W. Curtis Banks, Craig Haney, and David Jaffe. The study, intended to examine the psychological experience of imprisonment, revealed the shocking extent to which circumstances can alter individual behavior. To this day, it is used as a case study in psychology classes to highlight both the psychology of evil and the ethics of conducting psychological research with human subjects.
The pace of engineering and science is speeding up, rapidly leading us toward a “Technological Singularity” — a point in time when superintelligent machines achieve and improve so much, so fast, that traditional humans can no longer operate at the forefront. However, if all goes well, human beings may still flourish greatly in their own ways in this unprecedented era.
If humanity is going to not only survive but prosper as the Singularity unfolds, we will need to understand that the Technological Singularity is an Experiential Singularity as well, and to rapidly evolve not only our technology but also our compassion, ethics, and consciousness.
The aim of The Consciousness Explosion is to help curious and open-minded readers wrap their brains around these dramatic emerging changes, and to empower readers with tools to cope and thrive as they unfold.
Mental health issues are one of the most common causes of disability, affecting more than a billion people worldwide. Addressing mental health difficulties can present extraordinarily tough problems: what can providers do to help people in the most precarious situations? How do changes in the physical brain affect our thoughts and experiences? And at the end of the day, how can everyone get the care they need?
Answering those questions was the shared goal of the researchers who attended the Mental Health, Brain, and Behavioral Science Research Day in September. While the problems they faced were serious, the new solutions they started to build could ultimately help improve mental health care at individual and societal levels.
“We’re building something that there’s no blueprint for,” said Mark Rapaport, MD, CEO of Huntsman Mental Health Institute at the University of Utah. “We’re developing new and durable ways of addressing some of the most difficult issues we face in society.”
In this special crossover episode of The Cognitive Revolution, Nathan Labenz joins Robert Wright of the Nonzero newsletter and podcast to explore pressing questions about AI development. They discuss the nature of understanding in large language models, multimodal AI systems, reasoning capabilities, and the potential for AI to accelerate scientific discovery. The conversation also covers AI interpretability, ethics, open-sourcing models, and the implications of US-China relations on AI development.
Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/J…
RECOMMENDED PODCAST: History 102
Every week, Rudyard Lynch, creator of WhatifAltHist, and Erik Torenberg cover a major topic in history in depth — in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on:
Spotify: https://open.spotify.com/show/36Kqo3B…
Apple: https://podcasts.apple.com/us/podcast…
YouTube: /@history102-qg5oj
Honesty is the best policy… most of the time. Social norms help humans understand when we need to tell the truth and when we shouldn’t, in order to spare someone’s feelings or avoid harm. But how do these norms apply to robots, which are increasingly working alongside humans? To understand whether humans can accept robots telling lies, scientists asked almost 500 participants to rate and justify different types of robot deception.
“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Andres Rosero, Ph.D. candidate at George Mason University and lead author of the study in Frontiers in Robotics and AI. “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”