
This spring, the Hastings Center Report added a new series of essays named after the field its pieces aim to explore. Neuroscience and Society produces open access articles and opinion pieces that address the ethical, legal, and societal issues presented by emerging neuroscience. The series will run roughly twice a year and was funded by the Dana Foundation to foster dynamic, sustained conversation among neuroscience researchers, legal and ethics scholars, policymakers, and wider publics.

The first installment of the series focuses on what is owed to people who volunteer to participate in clinical trials for implantable brain devices, such as deep-brain stimulators and brain-computer interfaces.

Imagine you have lived with depression for most of your life. Despite trying numerous medications and therapies, such as electroconvulsive therapy, you have not been able to manage your symptoms effectively. Your depression keeps you from maintaining a job and interacting with your friends and family, and it generally prevents you from flourishing as a person.

A recent study revealed that when individuals are given two solutions to a moral dilemma, the majority tend to prefer the answer provided by artificial intelligence (AI) over that given by another human.

The study, conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs), which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations, and that they’re not necessarily operating in the way we think when we’re interacting with them.”

Summary: People often view AI-generated answers to ethical questions as superior to those from humans. In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AI’s responses in terms of virtuousness, intelligence, and trustworthiness.

This modified moral Turing test, inspired by ChatGPT and similar technologies, suggests that AI can exhibit moral reasoning complex enough to pass convincingly as human. The findings highlight the growing influence of AI in decision-making processes and the potential implications for societal trust in technology.

According to files accessed by journalist Jack Poulson, Microsoft presented OpenAI’s DALL-E as a tool to conduct Advanced Computer Vision Training of Battle Management Systems (BMS).

A BMS is a software suite that provides military leaders with an overview of a combat situation and helps them plan troop movements, artillery fire, and air strike targets. According to Microsoft’s presentation, the DALL-E tool could generate artificial images and train BMS to visualize the ground situation better and identify appropriate strike targets.

Marshall Brain’s 2003 book Manna was well ahead of its time in foreseeing that, one way or another, we will eventually have to confront and address the phenomenon of technological unemployment. In addition, Marshall is a passionate brainiac with a jovial personality, an impressive background, and a unique perspective. And so I was very happy to meet him in person for an exclusive interview. [Special thanks to David Wood, without whose introduction this interview may not have happened!]

During our 82-minute conversation with Marshall Brain, we cover a variety of interesting topics, such as: his books The Second Intelligent Species and Manna; AI as the end game for humanity; using cockroaches as a metaphor; logic and ethics; simulating the human brain; the importance of language and visual processing for the creation of AI; marrying off Siri to Watson; technological unemployment, social welfare, and perpetual vacation; and capitalism, socialism, and the need for systemic change.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Nobel Prize-winning molecular biologist Venki Ramakrishnan sat down with ABC News Live to discuss the science and ethics of extending the human lifespan.

In his new book, “Why We Die: The New Science of Aging and the Quest for Immortality,” Ramakrishnan explains why we may not want to extend our lives much further.

Ramakrishnan’s thought-provoking argument is that a society in which people lived for hundreds of years could become stagnant: with the same group of people simply living longer, it would face hard questions about societal dynamics and progress.