
Mental health issues are one of the most common causes of disability, affecting more than a billion people worldwide. Addressing mental health difficulties can present extraordinarily tough problems: what can providers do to help people in the most precarious situations? How do changes in the physical brain affect our thoughts and experiences? And at the end of the day, how can everyone get the care they need?

Answering those questions was the shared goal of the researchers who attended the Mental Health, Brain, and Behavioral Science Research Day in September. While the problems they faced were serious, the new solutions they started to build could ultimately help improve mental health care at individual and societal levels.

“We’re building something that there’s no blueprint for,” said Mark Rapaport, MD, CEO of Huntsman Mental Health Institute at the University of Utah. “We’re developing new and durable ways of addressing some of the most difficult issues we face in society.”

In this special crossover episode of The Cognitive Revolution, Nathan Labenz joins Robert Wright of the Nonzero newsletter and podcast to explore pressing questions about AI development. They discuss the nature of understanding in large language models, multimodal AI systems, reasoning capabilities, and the potential for AI to accelerate scientific discovery. The conversation also covers AI interpretability, ethics, open-sourcing models, and the implications of US-China relations on AI development.

Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/J

RECOMMENDED PODCAST: History 102
Every week, WhatifAltHist creator Rudyard Lynch and Erik Torenberg cover a major topic in history in depth, in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on:
Spotify: https://open.spotify.com/show/36Kqo3B
Apple: https://podcasts.apple.com/us/podcast
YouTube: @history102-qg5oj

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive.

The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR
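For readers curious what retrieval augmentation with a web-search API looks like in practice, here is a minimal sketch using only the Python standard library. The endpoint URL, the `X-Subscription-Token` header, and the `web`/`results`/`description` response fields follow Brave's published Web Search API documentation, but treat them as assumptions and verify against your own API plan before relying on them:

```python
import json
import urllib.parse
import urllib.request

def build_search_request(query, api_key, count=5):
    """Build an HTTP request for the Brave Web Search API.

    The endpoint and header name are assumptions based on Brave's
    public docs; check them against your subscription tier.
    """
    params = urllib.parse.urlencode({"q": query, "count": count})
    url = f"https://api.search.brave.com/res/v1/web/search?{params}"
    return urllib.request.Request(
        url,
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": api_key,  # your Brave API key
        },
    )

def retrieve_snippets(query, api_key, count=5):
    """Fetch results and return (title, snippet) pairs to use as RAG context."""
    req = build_search_request(query, api_key, count)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Assumed response shape: {"web": {"results": [{"title": ..., "description": ...}]}}
    return [(r["title"], r.get("description", ""))
            for r in data.get("web", {}).get("results", [])]
```

The returned snippets would then be concatenated into the prompt of a language model at inference time, which is what "retrieval augmentation" refers to above.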

Honesty is the best policy… most of the time. Social norms help humans understand when we need to tell the truth and when we shouldn’t, to spare someone’s feelings or avoid harm. But how do these norms apply to robots, which are increasingly working with humans? To understand whether humans can accept robots telling lies, scientists asked almost 500 participants to rate and justify different types of robot deception.

“I wanted to explore an understudied facet of ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Andres Rosero, Ph.D. candidate at George Mason University and lead author of the study in Frontiers in Robotics and AI. “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”

Mind uploading and digital immortality explore the potential of AI technology to enable humans to live forever by transferring consciousness to machines. This concept raises profound questions about the future of humanity, identity, and ethics. Discover the groundbreaking possibilities and challenges of achieving eternal life through artificial intelligence and digital consciousness.

#ai #mindupload

Join Randal Koene, a computational neuroscientist, as he dives into the intricate world of whole brain emulation and mind uploading, while touching on the ethical pillars of AI. In this episode, Koene discusses the importance of equal access to AI, data ownership, and the ethical impact of AI development. He explains the potential future of AGI, how current social and political systems might influence it, and touches on the scientific and philosophical aspects of creating a substrate-independent mind. Koene also elaborates on the differences between human cognition and artificial neural networks, the challenge of translating brain structure to function, and efforts to accelerate neuroscience research through structured challenges.

00:00 Introduction to Randal Koene and Whole Brain Emulation.
00:39 Ethical Considerations in AI Development.
02:20 Challenges of Equal Access and Data Ownership.
03:40 Impact of AGI on Society and Development.
05:58 Understanding Mind Uploading.
06:39 Randal’s Journey into Computational Neuroscience.
08:14 Scientific and Philosophical Aspects of Substrate Independent Minds.
13:07 Brain Function and Memory Processes.
25:34 Whole Brain Emulation: Current Techniques and Challenges.
32:12 The Future of Neuroscience and AI Collaboration.

SingularityNET is a decentralized marketplace for artificial intelligence. We aim to create the world’s global brain with a full-stack AI solution powered by a decentralized protocol.

We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

Website: https://singularitynet.io
Discord: / discord
Forum: https://community.singularitynet.io
Telegram: https://t.me/singularitynet
Twitter: / singularitynet
Facebook: / singularitynet.io
Instagram: / singularitynet.io
Github: https://github.com/singnet
Linkedin: / singularitynet

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

Most of his stories, however, are less philosophically explicit. Lovecraft’s thought is often obscured in his tales, and must be pieced together from various sources, including his poetry, essays and, most importantly, his letters. Lovecraft wrote an estimated 100,000 letters during his life, of which around 10,000 have survived. Within this substantial non-fictional output, the volume of which dwarfs his fictional writing, Lovecraft expounded the philosophical concerns – whether metaphysical, ethical, political or aesthetic – which he claimed underpinned his weird fiction. These tales, he wrote, were based on one fundamental cosmic premise: ‘that common human laws and interests and emotions have no validity or significance in the vast cosmos-at-large’.

In H P Lovecraft: The Decline of the West (1990), the scholar S T Joshi analysed many of those letters and essays to create an image of ‘Lovecraft the philosopher’. Joshi claimed that Lovecraft’s identity as a philosopher is a direct outcome of the genre he mastered: weird fiction. This genre, Joshi writes, is inherently philosophical because ‘it forces the reader to confront directly such issues as the nature of the universe and mankind’s place in it.’ Not everyone has agreed that Lovecraft’s thought should be so elevated. The Austrian literary critic Franz Rottensteiner, in a review of Joshi’s book, attacked the idea of Lovecraft as a philosopher: ‘The point is, of course, that Lovecraft as a thinker just wasn’t of any importance,’ he wrote, ‘whether as a materialist, an aestheticist, or a moral philosopher.’

Recently, human brain organoids have attracted increasing interest from scholars in many fields, and a dynamic discussion in bioethics is ongoing. There is a serious concern that these in vitro models of brain development, based on innovative methods for three-dimensional stem cell culture, might deserve a specific moral status [1, 2]. This would especially be the case if these small stem cell constructs were to develop physiological features of organisms endowed with nervous systems, suggesting that they may be able to feel pain or develop some form of sentience or consciousness. Whether one wants to envision or discard the possibility of conscious brain organoids, and whether one wants to acknowledge or dispute its moral relevance, the notion of consciousness is a main pillar of this discussion (even if not the only issue involved [3]). However, consciousness is itself a difficult notion, its nature and definition having been debated for decades [4, 5]. As a consequence, the ethical debate surrounding brain organoids is deeply entangled with epistemological uncertainty pertaining to the conceptual underpinnings of the science of consciousness and its empirical endeavor.

It has been argued that neuroethics should circumvent this fundamental uncertainty by adhering to a precautionary principle [6]. Even if we do not know with certainty at which point brain organoids could become conscious, following some experimental design principles would ensure that the research does not raise any ethically problematic issues in the years to come. It has also been proposed to redirect the inquiry to the “what-kind” issue (rather than the “whether-or-not” issue) in order to rely on more graspable features for ethical assessment [7]. These strategies, however, make the epistemological issue even more relevant. The question of whether or not current and future organoids can develop a certain form of consciousness (without presupposing what these different forms of consciousness might be), and of how to assess this potentiality in existing biological systems, is bound to stay with the field of brain organoid technology for some time. Even if it is not for advancing ethical issues, there is a theoretical interest in determining the boundary conditions of consciousness and its potential emergence in artificial entities. Although the methodological and knowledge gap is still wide between the research community on cellular biology and stem cell culture on the one side and the research community studying consciousness, such as cognitive neuroscience, on the other, there will be more and more circulation of ideas and methods in the coming years. The results of this scientific endeavor will, in turn, impact ethics.

In this article, I look back at the history of consciousness research to find new perspectives on this contemporary epistemological conundrum. In particular, I suggest the distinction between “global” theories of consciousness and “local” theories of consciousness as a thought-provoking one for those engaged in the difficult task of adapting models of consciousness to the biological reality of brain organoids. The first section introduces the consciousness assessment issue as a general framework and a challenge for any discussion related to the putative consciousness of brain organoids. In the second section, I describe and critically assess the main attempt, so far, at solving the consciousness assessment issue relying on integrated information theory. In the third section, I propose to rely on the distinction between local and global theories of consciousness as a tool to navigate the theoretical landscape, before turning to the analysis of a notable local theory of consciousness, Semir Zeki’s theory of microconsciousness, in the fourth section. I conclude by drawing the epistemological and ethical lessons from this theoretical exploration.