
Dr. Isaac Asimov was a prolific science fiction author, biochemist, and professor, best known for his works of science fiction and for his popular science essays. Born in Russia in 1920 and brought to the United States by his family as a young child, he went on to become one of the most influential figures in the world of speculative fiction. He wrote hundreds of books on a variety of topics, but he is especially remembered for his “Foundation” and “Robot” series.
Asimov’s science fiction often dealt with themes and ideas that pertained to the future of humanity.

The “Foundation” series, for example, introduced the idea of “psychohistory,” a mathematical way of predicting the future from the behavior of large populations. While we don’t have psychohistory as Asimov described it, his works reflected the belief that societies operate on understandable and potentially predictable principles.
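Psychohistory itself is fictional, but the statistical intuition behind it, that individuals are unpredictable while large aggregates are far less so, is real. As a loose, purely illustrative sketch (nothing here comes from Asimov or from any actual model of society), the following Python snippet simulates populations of different sizes and shows how the variability of the population-wide average shrinks as the population grows:

```python
import random

def spread_of_average(population_size: int, trials: int = 200) -> float:
    """Standard deviation of the average behavior across simulated populations.

    Each hypothetical individual independently 'chooses' 0 or 1 at random;
    the larger the population, the less its average choice fluctuates.
    """
    means = []
    for _ in range(trials):
        choices = [random.random() < 0.5 for _ in range(population_size)]
        means.append(sum(choices) / population_size)
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

for n in (10, 1_000, 100_000):
    print(f"population {n:>7}: spread of the average ≈ {spread_of_average(n):.4f}")
```

The spread falls roughly with the square root of the population size, which is the (very modest) kernel of truth in the idea that mass behavior can be more predictable than individual behavior.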

Asimov’s “Robot” series introduced the world to the Three Laws of Robotics, which are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws have been influential in discussions about robot ethics and the future of AI, even though they are fictional constructs.
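The Three Laws are fiction, and Asimov’s own stories turn on the ways they fail, but their structure, a strict priority ordering over rules, is easy to make concrete. The sketch below is purely illustrative: the predicates (harms_human, disobeys_order, endangers_self) are hypothetical inputs that no real robot can reliably compute, and the point is only to show how the ordering resolves conflicts, e.g. why a robot governed by these Laws would risk itself or disobey an order to prevent harm to a human.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action and its (assumed already-known) consequences."""
    name: str
    harms_human: bool      # injures a human, or lets one come to harm through inaction
    disobeys_order: bool   # conflicts with an order given by a human
    endangers_self: bool   # risks the robot's own existence

def choose_action(candidates: list[Action]) -> Action:
    """Pick the candidate the Three Laws prefer.

    The Laws form a lexicographic priority: avoiding harm to humans outranks
    obedience, which outranks self-preservation. Taking min() over the tuple
    (harms_human, disobeys_order, endangers_self) implements exactly that
    ordering, since False sorts before True.
    """
    return min(candidates, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("stay put as ordered", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("enter the fire to pull the human out", harms_human=False, disobeys_order=True, endangers_self=True),
]
print(choose_action(options).name)  # the rescue wins: the First Law outranks the Second and Third
```

Real systems do not get clean boolean answers to questions like “will this harm a human?”, which is part of why the Laws work better as a starting point for discussions of machine ethics than as an engineering specification.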

Like many futurists and speculative authors, Asimov’s predictions were a mix of hits and misses.

The Collective Intelligence of Cells During Morphogenesis: What Bioelectricity Outside the Brain Means for Understanding our Multiscale Nature with Michael Levin — Incredible Minds.

Recorded: April 29, 2023.

Each of us takes a remarkable journey from physics to mind: we start as a blob of chemicals in an unfertilized quiescent oocyte and become a complex, metacognitive human being. The continuous process of transformation and emergence that we see in developmental biology reminds us that we are true collective intelligences – composed of cells which used to be individual organisms themselves. In this talk, I will describe our work on understanding how the competencies of single cells are harnessed to solve problems in anatomical space, and how evolution pivoted this scaling of intelligence into the familiar forms of cognition in the nervous system. We will talk about diverse intelligence in novel embodiments, the scaling of the cognitive light cone of all beings, and the role of developmental bioelectricity as a cognitive glue and as the interface by which mind controls matter in the body. I will also show a new synthetic life form, and discuss what it means for bioengineering and ethics of human relationships to the wider world of possible beings. We will discuss the implications of these ideas for understanding evolution, and the applications we have developed in birth defects, cancer, and traumatic injury repair. By merging deep ideas from developmental biophysics, computer science, and cognitive science, we not only get a new perspective on fundamental questions of life and mind, but also new roadmaps in regenerative medicine, biorobotics, and AI.

Michael Levin received dual undergraduate degrees in computer science and biology, followed by a PhD in molecular genetics from Harvard. He did his post-doctoral training at Harvard Medical School, and started his independent lab in 2000. He is currently the Vannevar Bush chair at Tufts University, and an associate faculty member of the Wyss Institute at Harvard. He serves as the founding director of the Allen Discovery Center at Tufts. His lab uses a mix of developmental biophysics, computer science, and behavioral science to understand the emergence of mind in unconventional embodiments at all scales, and to develop interventions in regenerative medicine and applications in synthetic bioengineering. The lab’s work can be found at www.drmichaellevin.org/

As we plunge head-on into the game-changing dynamic of general artificial intelligence, observers are weighing in on just how huge an impact it will have on global societies. Will it drive explosive economic growth as some economists project, or are such claims unrealistically optimistic?

Few question the potential for change that AI presents. But in a world of litigation and ethical boundaries, will AI be able to thrive?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable to the Industrial Revolution of the 18th and 19th centuries.
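One way to see what “explosive” means here is simple compound-growth arithmetic. The rates below are illustrative placeholders, not figures from the Epoch analysis; the sketch just converts an annual growth rate into a doubling time to show how different a roughly 3%-growth world is from a hypothetical 30%-growth one:

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for output to double under steady compound growth at the given rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Illustrative rates only, not estimates from the Epoch researchers:
for label, rate in [("recent global growth, roughly", 0.03),
                    ("a hypothetical 'explosive' scenario", 0.30)]:
    print(f"{label}: {rate:.0%}/year -> doubles in ~{doubling_time_years(rate):.1f} years")
```

At around 3% a year the economy doubles roughly every 23 years; at 30% it would double in well under three, which is the scale of claim being weighed against the Industrial Revolution comparison.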

With the growth of the commercial spaceflight business comes new ethical issues about human experimentation.

The expansion of the commercial spaceflight sector opens new avenues for scientific study in the unique environment of space.

However, it also raises ethical concerns about the conduct of scientific experiments and studies involving human volunteers on commercial spaceflights.

Let’s say that it is a curse. The issue is that he is also against life extension entirely. Maybe I want 200 years. Or 1,000. I have zero concern over a boredom problem, as boredom is a brain process which can eventually be controlled. And I am disgusted with the idea that I have to die because we might not progress very fast? Ugh.

Elon Musk has said a lot of potentially stupid stuff about aging and longevity, from saying that people shouldn’t live very long because society would ossify to advocating that we judge people based on their chronological age. Most recently, he’s taken to Twitter (aka X) to say “May you live forever is the worst possible curse once you understand deep time.” In this case though, he’s not wrong.

In this episode, we explore the diverse perspectives and heated debates triggered by Elon’s provocative statements on aging and the prospect of eternal life. We navigate through the complexities of deep time, the philosophical implications of living forever, and the importance of autonomy and control. Join host Ryan O’Shea as we examine arguments in favor of humans being able to end their own lives, and explore how this played out in NBC’s The Good Place, starring Kristen Bell.

Summary: The revolutionary field of bio-computing is making waves as DishBrain, a neural system comprising 800,000 living brain cells, learns to play Pong. Recognizing the pressing need for ethical guidelines in this emerging domain, the pioneers behind DishBrain have joined forces with bioethicists in a study.

The research explores the moral considerations around biological computing systems and their potential consciousness. Beyond its innovation, the technology offers vast environmental benefits, potentially transforming the energy-consuming IT industry.

A blog by the company’s chief trust officer, Dana Rao, highlighted the importance of ethics while developing new AI tools. Generative AI has come much more into mainstream awareness in recent months with the launch of ChatGPT and DALL-E systems, which can understand written and verbal requests to generate convincingly human-like text and images.

Artists have complained that training generative AIs on their work is tantamount to ‘ripping off’ their styles, and that these systems can create discriminatory or explicit content from harmless inputs. Others have called into question the ease with which humans can pass off AI-generated prose as their own work.


Every day we encounter circumstances we consider wrong: a starving child, a corrupt politician, an unfaithful partner, a fraudulent scientist. These examples highlight several moral issues, including matters of care, fairness and betrayal. But does anything unite them all?

Philosophers, psychologists and neuroscientists have passionately argued whether moral judgments share something distinctive that separates them from non-moral matters. Moral monists claim that morality is unified by a common characteristic and that all moral issues involve concerns about harm. Pluralists, in contrast, argue that moral judgments are more diverse in nature.

Fascinated by this centuries-old debate, a team of researchers set out to probe the nature of morality using one of moral psychology’s most prominent theories. The group, led by UC Santa Barbara’s René Weber, intensively studied 64 individuals via surveys, interviews and brain imaging, examining their judgments of the wrongness of various behaviors.