
Those who know Oxford University for its literary luminaries might be surprised to learn that some of the most important reflections on emerging technologies come from its hallowed halls. While the leading tech innovators in Silicon Valley capture imaginations with their bold visions of future singularities, mind-machine melding, and digital immortality by 2045, they rarely engage as deeply with the philosophical issues surrounding such developments as their like-minded scholars across the pond. This essay will briefly highlight some of the key contributions of Oxford University’s professors Nick Bostrom, Anders Sandberg, and Julian Savulescu to the transhumanist movement. It will also show how this movement’s focus on radical autonomy in biotechnical enhancements shapes the wider global bioethical conversation.

As the lead author of the Transhumanist FAQ, Bostrom provides the closest thing the movement has to an institutional catechism. He is, in a sense, the Ratzinger of Transhumanism. The first paragraph of that seminal text lays out the evolutionary vision of his school. Transhumanism’s incessant pursuit of radical technological transformation is “based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Current humans are but one intriguing yet greatly improvable iteration of human existence. Think of the first iPhone and how unattractive 2007’s most cutting-edge technology looks in 2024.

In particular, transhumanists encourage radical physical, cognitive, mood, moral, and lifespan enhancements. The movement seeks to defeat humanity’s perennial enemies of aging, sickness, suffering, and death. Bostrom recognizes that he is facing the same foes as Christianity and other traditional religions. Yet he is confident that Transhumanism, through science and technology, will be far more successful than outdated superstitions. Biotechnological advances, on this view, deliver this-worldly benefits more reliably than religion’s promises of some mysterious next life. Transhumanists claim no need for “supernatural powers or divine intervention” in their avowedly “naturalistic outlook,” since they rely instead on “rational thinking and empiricism” and “continued scientific, technological, economic, and human development.” Nonetheless, Bostrom and his companions recognize that not all technology is created equal.

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this shift is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.
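To make that claim concrete, here is a minimal sketch of how the search-interest curve could be pulled programmatically, assuming the third-party pytrends package (an unofficial Google Trends client); the keyword, timeframe, and locale are illustrative choices rather than the article’s.

```python
# Illustrative only: pulls worldwide relative Google search interest for "AI".
# Assumes the third-party pytrends package, an unofficial Google Trends client;
# the keyword, timeframe, and locale are arbitrary choices for this sketch.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["AI"], timeframe="2021-01-01 2024-03-01", geo="")
interest = pytrends.interest_over_time()  # pandas DataFrame, values scaled 0-100

if not interest.empty:
    peak_week = interest["AI"].idxmax()
    print(f"Peak relative interest: {interest['AI'].max()} in the week of {peak_week.date()}")
```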

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 From that beginning, a meandering trajectory of technical successes and ‘AI winters’ unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I revisited the book to reconsider its arguments in the context of today’s digital technology landscape.
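The training-set bias mentioned above is easy to demonstrate on toy data. The following sketch is not drawn from Bostrom’s book or the cited studies; it uses synthetic data to show how a classifier trained on a cohort in which one subgroup is underrepresented can end up markedly less accurate for that subgroup.

```python
# A toy, synthetic-data illustration (not from the article or the cited studies)
# of training-set bias: one subgroup is underrepresented during training, and the
# model ends up markedly less accurate for that subgroup at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature 'patients'; the disease boundary sits at a
    different offset for each subgroup, standing in for population differences."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is scarce.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out evaluation exposes the gap the skewed training set created.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: held-out accuracy = {model.score(X_test, y_test):.2f}")
```

The single learned decision boundary tracks the majority subgroup, which is exactly the failure mode that matters when a model trained on one patient population is deployed on another.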

When Descartes said “I think therefore I am,” he probably didn’t know that he was answering a security question. Biometrics, the use of behavioral or physical characteristics to identify people, has gotten a big boost in the EU. The Orwellian-sounding HUMABIO (Human Monitoring and Authentication using Biodynamic Indicators and Behavioral Analysis) is a well-funded research project that seeks to combine sensor technology with the latest in biometrics to find reliable and non-obtrusive ways to identify people quickly. One of its proposed methods: scanning your brain stem. That’s right, in addition to reading your retinas, looking at your fingerprints, and monitoring your voice, the security systems of the future may be scanning your brain.

How could they actually read your brain? What kind of patterns would they use to authenticate your identity? Yeah, they haven’t quite figured that out yet. HUMABIO is still definitely in the “pre-commercial” and “proof of concept” phase. They do have a nice ethics manual to read, and they’ve actually written some “stories” that illustrate the uses of their various works in progress, but they haven’t produced a fieldable instrument yet. In fact, this aspect of the STREP (Specific Targeted Research Project) would hardly be remarkable if we didn’t know more about the available technology from other sources.
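For context on what any eventual instrument would have to do, here is a generic enroll-and-verify sketch of the kind most biometric systems follow. HUMABIO’s actual features and matching rules are not described here, so the feature extractor, threshold, and synthetic signals below are all stand-ins.

```python
# A generic enroll-and-verify sketch of the kind most biometric systems follow.
# HUMABIO's actual features and matching rules are not described here, so the
# feature extractor, threshold, and synthetic signals below are all stand-ins.
import numpy as np

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: a few simple signal statistics. A real system
    would use domain-specific features (e.g., spectral power of a neural signal)."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])

def enroll(samples: list) -> np.ndarray:
    """Average several enrollment readings into one stored template."""
    return np.mean([extract_features(s) for s in samples], axis=0)

def verify(template: np.ndarray, probe: np.ndarray, threshold: float = 1.0) -> bool:
    """Accept the probe only if its features lie close to the stored template."""
    return float(np.linalg.norm(template - extract_features(probe))) <= threshold

# Toy usage with synthetic "physiological" signals.
rng = np.random.default_rng(1)
template = enroll([rng.normal(0.0, 1.0, 500) for _ in range(3)])
print(verify(template, rng.normal(0.0, 1.0, 500)))  # genuine-like probe -> True
print(verify(template, rng.normal(2.0, 3.0, 500)))  # impostor-like probe -> False
```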

On the day of the GPT-4o announcement, Sam Altman sat down to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in today’s AI landscape, and much more.

(00:00) Intro
(00:50) The Personal Impact of Leading OpenAI
(01:44) Unveiling Multimodal AI: A Leap in Technology
(02:47) The Surprising Use Cases and Benefits of Multimodal AI
(03:23) Behind the Scenes: Making Multimodal AI Possible
(08:36) Envisioning the Future of AI in Communication and Creativity
(10:21) The Business of AI: Monetization, Open Source, and Future Directions
(16:42) AI’s Role in Shaping Future Jobs and Experiences
(20:29) Debunking AGI: A Continuous Journey Towards Advanced AI
(24:04) Exploring the Pace of Scientific and Technological Progress
(24:18) The Importance of Interpretability in AI
(25:11) Navigating AI Ethics and Regulation
(27:26) The Safety Paradigm in AI and Beyond
(28:55) Personal Reflections and the Impact of AI on Society
(29:11) The Future of AI: Fast Takeoff Scenarios and Societal Changes
(30:59) Navigating Personal and Professional Challenges
(40:21) The Role of AI in Creative and Personal Identity
(43:09) Educational System Adaptations for the AI Era
(44:30) Contemplating the Future with Advanced AI

Executive Producer: Rashad Assir.
Producer: Leah Clapper.
Mixing and editing: Justin Hrabovsky.

Check out Unsupervised Learning, Redpoint’s AI Podcast: @redpointai.

🎙 Listen to the show.
Apple Podcasts: https://podcasts.apple.com/us/podcast
Spotify: https://open.spotify.com/show/5WqBqDb
Google Podcasts: https://podcasts.google.com/feed/aHR0

🎥 Subscribe on YouTube: @theloganbartlettshow.

Feeling Bad About Feeling Good?


Summary: A new study explores the complex moral landscape of revenge, revealing that people’s reactions to revenge vary significantly based on the emotions displayed by the avenger. Conducted across four surveys involving Polish students and American adults, the study found that avengers who demonstrate satisfaction are viewed as more competent, whereas those expressing pleasure are seen as immoral.

These perceptions shift dramatically when individuals imagine themselves in the avenger’s shoes, tending to view their own actions as less moral than those of others. The findings challenge conventional views on revenge, suggesting that societal and personal perspectives on morality and competence deeply influence judgments of vengeful actions.

This spring, the Hastings Center Report added a new series of essays named after the field its pieces aim to explore. Neuroscience and Society produces open access articles and opinion pieces that address the ethical, legal, and societal issues presented by emerging neuroscience. The series will run roughly twice a year and was funded by the Dana Foundation to foster dynamic, sustained conversation among neuroscience researchers, legal and ethics scholars, policymakers, and wider publics.

The first installment of the series focuses on research studies and what is owed to people who volunteer for clinical trials of implantable brain devices, such as deep-brain stimulators and brain-computer interfaces.

Imagine you have lived with depression for most of your life. Despite trying numerous medications and therapies, such as electroconvulsive therapy, you have not been able to manage your symptoms effectively. Your depression keeps you from holding a job and from interacting with your friends and family, and it generally prevents you from flourishing as a person.

A recent study revealed that when individuals are given two solutions to a moral dilemma, the majority tend to prefer the answer provided by artificial intelligence (AI) over that given by another human.

The study, conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs), which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations, and that they’re not necessarily operating in the way we think when we’re interacting with them.”
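The article does not describe the study’s data-collection pipeline, but as a rough illustration of how LLM answers to a moral dilemma might be gathered and blinded for this sort of comparison, here is a sketch using OpenAI’s Python client; the model name, prompt, and presentation setup are placeholders rather than Aharoni’s procedure.

```python
# Illustrative only: not the procedure used in Aharoni's study. A rough sketch of
# collecting an LLM's answer to a moral dilemma so it can be shown to raters next
# to a human-written answer with the sources hidden. Assumes the official `openai`
# Python package and an OPENAI_API_KEY in the environment; the model name and
# prompt are placeholders.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

dilemma = ("Is it morally acceptable to lie to a friend to spare their feelings? "
           "Answer in one short paragraph.")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": dilemma}],
)
ai_answer = response.choices[0].message.content

human_answer = "..."  # an answer previously collected from a human respondent

# Present both answers unlabeled and in random order, as a blind comparison would.
answers = [("ai", ai_answer), ("human", human_answer)]
random.shuffle(answers)
for i, (_, text) in enumerate(answers, start=1):
    print(f"Response {i}:\n{text}\n")
```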