
If a free-floating brain could feel pain or ‘wake up,’ how would we know? That’s an important ethical question — and it’s one we need to ask more often as labs around the world create new organoids, or miniature human organs. To answer it we talked to Jay Gopalakrishnan at his ‘mini brain’ lab for centrosome and cytoskeleton biology in Düsseldorf, Germany.

STUDY: https://www.cell.com/cell-stem-cell/fulltext/S1934-5909(21)00295-2

#brains #organoids #ethics #Germany #India


Derk Pereboom claims that free will is impossible because it is incompatible with both determinism and indeterminism. He also defends a robust nonreductive physicalism, which holds that although consciousness cannot be reduced to the physical, it is not something over and above the physical.
The interview was conducted by Vadim Vasiliev and Dmitry Volkov. Below is a list of the questions asked in the interview.
1. The most influential books.
2. What are the differences between the notions of moral responsibility and basic desert?
3. Which types of punishment should be eliminated if we find out that there is no justification for basic desert?
4. Is indignation as a reaction to wrongdoing a kind of irrational emotion?
5. How was the manipulation argument invented?
6. Why have you recently changed the presentation of the first case of the Manipulation Argument?
7. How does the problem of free will relate to the problem of mental causation?
8. Could the problem of personal identity pose difficulties for moral responsibility and basic desert? And why is causal determinism the focus of the free will debate?
9. Is there a real difference between the hard incompatibilist's position and that of compatibilists?
10. Can you name some differences and similarities between you and Daniel Dennett?
11. Could cognitive science and neuroscience eliminate the discussion of free will?
12. What is the definition of the mental?
13. What were the most important changes in your views?
14. What is the meaning of life?
15. What is your current research?

Brainoids — tiny clumps of human brain cells — are being turned into living artificial intelligence machines, capable of carrying out tasks like solving complex equations. The team finds out how these brain organoids compare to normal computer-based AIs, and they explore the ethics of it all.

Sickle cell disease is now curable, thanks to a pioneering trial with CRISPR gene editing. The team shares the story of a woman whose life has been transformed by the treatment.

We can now hear the sound of the afterglow of the big bang, the radiation in the universe known as the cosmic microwave background. The team shares the eerie piece, transposed for human ears, which researchers have named “The Echo of Eternity.”

FallenKingdomReads’ list of The Top 5 Science Fiction Books That Explore the Ethics of Cloning.

Cloning is a topic that has been explored in science fiction for many years, often raising questions about the ethics of creating new life forms. While the idea of cloning has been discussed in various forms of media, such as movies and TV shows, some of the most interesting and thought-provoking discussions on the topic can be found in books. Here are the top 5 science fiction books that explore the ethics of cloning.

Alastair Reynolds’ House of Suns is a space opera that explores the ethics of cloning on a grand scale. The book follows the journey of a group of cloned human beings known as “shatterlings” who travel the galaxy and interact with various other sentient beings. The book raises questions about the nature of identity and the value of individuality, as the shatterlings face challenges that force them to confront their own existence and the choices they have made.

Microsoft laid off an entire team dedicated to guiding AI innovation that leads to ethical, responsible and sustainable outcomes. The cutting of the ethics and society team, as reported by Platformer, is part of a recent spate of layoffs that affected 10,000 employees across the company.

The elimination of the team comes as Microsoft invests billions more dollars into its partnership with OpenAI, the startup behind art- and text-generating AI systems like ChatGPT and DALL-E 2, and revamps its Bing search engine and Edge web browser to be powered by a new, next-generation large language model that is “more powerful than ChatGPT and customized specifically for search.”

The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.

This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50

The formula for rational thinking explained by Harvard professor Steven Pinker.

Up next, The war on rationality ► https://youtu.be/qdzNKQwkp-Y

In his explanation of Bayes’ theorem, cognitive psychologist Steven Pinker highlights how this type of reasoning can help us determine the degree of belief we assign to a claim based on available evidence.

Bayes’ theorem takes into account the prior probability of a claim, the likelihood of the evidence given that the claim is true, and the overall commonness of the evidence regardless of whether the claim is true.
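The three ingredients Pinker describes can be combined in a few lines of code. The sketch below is illustrative, not from the episode; the scenario and all the numbers in it (a 1% prior, a 90% detection rate, a 5% false-positive rate) are assumptions chosen to show how a plausible-looking piece of evidence can still leave the posterior belief low when the claim is rare to begin with.

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """Posterior P(claim | evidence) via Bayes' theorem.

    prior               -- P(claim): degree of belief before the evidence
    likelihood          -- P(evidence | claim is true)
    false_positive_rate -- P(evidence | claim is false)
    """
    # Commonness of the evidence overall, whether or not the claim is true
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a rare condition (1% prior) and a test that
# detects it 90% of the time but also fires 5% of the time without it.
posterior = bayes_posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.154
```

Even with a fairly accurate test, the posterior is only about 15%, because the evidence is common enough among the many people without the condition to swamp the rare true positives.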

Even a couple of years ago, the idea that artificial intelligence might be conscious and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flurry of developments in AI, including language models like ChatGPT and Bing Chat with remarkable skill at seemingly human conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.

Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly pondered whether “today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.