
The Top 5 Science Fiction Books That Explore the Ethics of Cloning

FallenKingdomReads’ list of The Top 5 Science Fiction Books That Explore the Ethics of Cloning.

Cloning is a topic that has been explored in science fiction for many years, often raising questions about the ethics of creating new life forms. While the idea of cloning has been discussed in various forms of media, such as movies and TV shows, some of the most interesting and thought-provoking discussions on the topic can be found in books. Here are the top 5 science fiction books that explore the ethics of cloning.

Alastair Reynolds’ House of Suns is a space opera that explores the ethics of cloning on a grand scale. The book follows the journey of a group of cloned human beings known as “shatterlings” who travel the galaxy and interact with various other sentient beings. The book raises questions about the nature of identity and the value of individuality, as the shatterlings face challenges that force them to confront their own existence and the choices they have made.

Microsoft lays off an ethical AI team as it doubles down on OpenAI

Microsoft laid off an entire team dedicated to guiding AI innovation that leads to ethical, responsible and sustainable outcomes. The cutting of the ethics and society team, as reported by Platformer, is part of a recent spate of layoffs that affected 10,000 employees across the company.

The elimination of the team comes as Microsoft invests billions more dollars into its partnership with OpenAI, the startup behind art- and text-generating AI systems like ChatGPT and DALL-E 2, and revamps its Bing search engine and Edge web browser to be powered by a new, next-generation large language model that is “more powerful than ChatGPT and customized specifically for search.”

The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream.

Prof. KARL FRISTON 3.0 — Collective Intelligence [Special Edition]

This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.
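The core idea above — that an intelligent system acts on its environment to reduce uncertainty — can be illustrated with a toy sketch. This is not Friston's formalism, just a minimal, hypothetical example of uncertainty-reducing action selection: an agent with a belief over two hidden states compares two candidate actions and picks the one whose observations are expected to leave its posterior belief least uncertain.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability distribution (in nats)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_posterior_entropy(belief, lik):
    """Expected entropy of the Bayesian posterior after acting.

    lik[o, s] = P(observation o | hidden state s) under this action.
    """
    total = 0.0
    for o in range(lik.shape[0]):
        p_o = lik[o] @ belief            # marginal probability of observation o
        if p_o == 0:
            continue
        post = lik[o] * belief / p_o     # Bayesian posterior after seeing o
        total += p_o * entropy(post)
    return total

# Hypothetical two-state world: the agent starts maximally uncertain.
belief = np.array([0.5, 0.5])

# Two candidate actions: "look" yields informative observations, "wait" does not.
likelihoods = {
    "look": np.array([[0.9, 0.1],
                      [0.1, 0.9]]),
    "wait": np.array([[0.5, 0.5],
                      [0.5, 0.5]]),
}

# The agent selects the action expected to reduce its uncertainty the most.
best = min(likelihoods, key=lambda a: expected_posterior_entropy(belief, likelihoods[a]))
print(best)  # "look"
```

Here the informative action wins because its expected posterior entropy (≈0.33 nats) is lower than the uninformative one (≈0.69 nats, i.e. still a coin flip). Full active inference also scores actions by preferred outcomes, not just information gain; this sketch covers only the uncertainty-reduction part described above.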

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50
Support us! https://www.patreon.com/mlst.
MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:
Intro [00:00:00]
Numerai (Sponsor segment) [00:07:10]
Designing Ecosystems of Intelligence from First Principles (Friston et al) [00:09:48]
Information / Infosphere and human agency [00:18:30]
Intelligence [00:31:38]
Reductionism [00:39:36]
Universalism [00:44:46]
Emergence [00:54:23]
Markov blankets [01:02:11]
Whole part relationships / structure learning [01:22:33]
Enactivism [01:29:23]
Knowledge and Language [01:43:53]
ChatGPT [01:50:56]
Ethics (is-ought) [02:07:55]
Can people be evil? [02:35:06]
Ethics in AI, subjectiveness [02:39:05]
Final thoughts [02:57:00]

References:

Think more rationally with Bayes’ rule | Steven Pinker

The formula for rational thinking explained by Harvard professor Steven Pinker.

Up next, The war on rationality ► https://youtu.be/qdzNKQwkp-Y

In his explanation of Bayes’ theorem, cognitive psychologist Steven Pinker highlights how this type of reasoning can help us determine the degree of belief we assign to a claim based on available evidence.

Bayes’ theorem takes into account the prior probability of a claim, the likelihood of the evidence given the claim is true, and the commonness of the evidence regardless of the claim’s truth.

While Bayes’ theorem can be useful for making statistical predictions, Pinker cautions that it may not always be appropriate in situations where fairness and other moral considerations are important. Therefore, it’s crucial to consider when Bayes’ theorem is applicable and when it’s not.
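The three quantities Pinker describes — prior probability, likelihood of the evidence given the claim, and the commonness of the evidence — combine in a single formula: P(claim | evidence) = P(evidence | claim) × P(claim) / P(evidence). A minimal sketch, using hypothetical numbers for the classic medical-test case:

```python
# Bayes' rule: P(claim | evidence) = P(evidence | claim) * P(claim) / P(evidence)
# Hypothetical numbers: a disease with 1% prevalence, a test that is 90%
# sensitive and triggers falsely 9% of the time in healthy people.
prior = 0.01                 # P(claim): base rate of the disease
likelihood = 0.90            # P(evidence | claim): sensitivity of the test
false_positive_rate = 0.09   # P(evidence | not claim)

# "Commonness of the evidence": total probability of a positive test,
# whether or not the claim is true.
evidence = likelihood * prior + false_positive_rate * (1 - prior)

posterior = likelihood * prior / evidence
print(round(posterior, 3))  # 0.092
```

Despite the 90% sensitivity, a positive result only raises the probability of disease to about 9%, because the low prior dominates — exactly the kind of base-rate reasoning the theorem makes explicit.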

0:00 What is Bayesian thinking?

Opinion: Is it time to start considering personhood rights for AI chatbots?

Even a couple of years ago, the idea that artificial intelligence might be conscious and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flurry of developments in AI, including language models like ChatGPT and Bing Chat with remarkable skill at seemingly human conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.

Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly pondered whether “today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.

CHM Seminar Series: Understanding Techno-Moral Revolutions — John Danaher

John Danaher, Senior Lecturer in Law at the National University of Ireland (NUI) Galway:

“Understanding Techno-Moral Revolutions”

Talk held on August 24, 2021 for Colloquium of the Center for Humans and Machines at the Max Planck Institute for Human Development, Berlin.

It is common to use ethical norms and standards to critically evaluate and regulate the development and use of emerging technologies like AI and Robotics. Indeed, the past few years have seen something of an explosion of interest in the ethical scrutiny of technology. What this emerging field of machine ethics tends to overlook, however, is the potential to use the development of novel technologies to critically evaluate our existing ethical norms and standards. History teaches us that social morality (the set of moral beliefs and practices shared within a given society) changes over time. Technology has sometimes played a crucial role in facilitating these historical moral revolutions. How will it do so in the future? Can we provide any meaningful answers to this question? This talk will argue that we can, and will outline several tools for thinking about the mechanics of technologically mediated moral revolutions.

About the Speaker:

John Danaher is a Senior Lecturer in Law at the National University of Ireland (NUI) Galway. He is the author of Automation and Utopia (Harvard 2019), co-author of A Citizen’s Guide to AI (MIT Press 2021), and co-editor of Robot Sex: Social and Ethical Implications (MIT Press 2017). His research focuses on the ethics and law of emerging technologies. He has published papers on the risks of advanced AI, the meaning of life and the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. His work has appeared in The Guardian, Aeon, and The Philosophers’ Magazine.

Generative AI ChatGPT As Masterful Manipulator Of Humans, Worrying AI Ethics And AI Law

Those masterful manipulators. We’ve all dealt with those manipulative personalities that try to convince us that up is down and aim to gaslight us into the most unsettling of conditions. They somehow inexplicably and unduly twist words. Their rhetoric can be overtly powerful and overwhelming. You can’t decide what to do. Should you merely cave in and hope that the verbal tirade will end? But if you are played into doing something untoward, acquiescing might be quite endangering. Trying to verbally fight back is bound to be ugly and can devolve into even worse circumstances.

It can be a no-win situation, that’s for sure.


Now that I’ve covered some of the principal modes of AI and human manipulation, we can further unpack the matter. In today’s column, I will be addressing the gradually rising concern that AI is increasingly going to be manipulating us. I will look at the basis for these qualms. Furthermore, this will occasionally include referring to the AI app ChatGPT during this discussion since it is the 600-pound gorilla of generative AI, though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.

Meanwhile, you might be wondering what in fact generative AI is.

Let’s first cover the fundamentals of generative AI and then we can take a close look at the pressing matter at hand.
