
I recently read the report by Sharad Agarwal; here are my key takeaways, along with some examples of my own:

Transhumanism is the concept of transcending humanity’s fundamental limitations through advances in science and technology. This intellectual movement advocates for enhancing human physical, cognitive, and ethical capabilities, foreseeing a future where technological advancements will profoundly modify and improve human biology.

Consider transhumanism as a kind of smartphone upgrade. Just as we update our phones with the latest software to add capabilities and fix problems, transhumanism seeks to use technological breakthroughs to expand human capacities. This could mean strengthening our bodies to make us stronger or more resilient, enhancing our cognition to sharpen memory or intelligence, or even fine-tuning moral judgment. Like a phone upgrade, transhumanism aspires to maximize efficiency and effectiveness by elevating the human condition beyond its inherent bounds.

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.

We also have to think through whether we would really want our side to observe a lethal autonomous weapons (LAW) ban if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban on only use, not development—although even this is likely more for strategic and political reasons than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.

Excellent.


Human brains outperform computers in many forms of processing and are far more energy efficient. What if we could harness their power in a new form of biological computing?

In this Frontiers Forum Deep Dive session on 21 June 2023, Professor Thomas Hartung, Dr Lena Smirnova, and other renowned researchers explored the future of organoid intelligence and the scientific, technological, and ethical steps required to realize its full potential.

The session brought together the authors of the Frontiers in Science lead article ‘Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish’ which presents a roadmap for the strategic development of organoid intelligence as a scientific discipline. It was attended by hundreds of representatives from science, policy, and business across the world.


David Chalmers is one of the world’s best-known philosophers of mind and a leading thinker on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) to write and speak publicly about the Matrix simulation argument and the technological singularity. (See, for example, his presentation at the 2009 Singularity Summit, or read his essay “The Singularity: A Philosophical Analysis.”)

During our conversation, David and I discuss topics such as: how and why Chalmers got interested in philosophy; his search for answers to what he considers some of the biggest questions, such as the nature of reality, consciousness, and artificial intelligence; the fact that academia in general, and philosophy in particular, doesn’t seem to engage with technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test, and other benchmarks on the way to the singularity; consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking domain of solely human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life extension; and reverse-engineering long-dead people, such as Ray Kurzweil’s father.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.

00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI). An AGI is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Website: https://singularitynet.io
X: https://x.com/SingularityNET
Instagram: / singularitynet.io
Discord: / discord
Forum: https://community.singularitynet.io
Telegram: https://t.me/singularitynet
WhatsApp: https://whatsapp.com/channel/0029VaM8
Warpcast: https://warpcast.com/singularitynet
Mindplex Social: https://social.mindplex.ai/@Singulari
Github: https://github.com/singnet
Linkedin: / singularitynet

Is our brain responsible for how we react to people who are different from us? Why can’t people with autism tell lies? How does the brain produce empathy? Why is imitation a fundamental trait of any social interaction? What are the secret advantages of teamwork? How does the social environment influence the brain? Why is laughter different from any other emotion?

This course is aimed at deepening our understanding of how the brain shapes and is shaped by social behavior, exploring a variety of topics such as the neural mechanisms behind social interactions, social cognition, theory of mind, empathy, imitation, mirror neurons, interacting minds, and the science of laughter.

Serious Science experts from leading universities worldwide answer these and other questions. This course offers a range of scientific perspectives on classical philosophical problems in ethics. It comprises 10 lectures filmed between 2014 and 2020. If you have any questions or comments on the content of this course, please write to us at [email protected].


Follow us:
We are on Patreon: / seriousscience
Facebook — / serious.science.org
Twitter — / scienceserious
YouTube — / seriousscience
Instagram — / serious.science

From the article:

Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.


Based on a talk delivered at the conference on Existential Threats and Other Disasters: How Should We Address Them? May 30–31, 2024 – Budva, Montenegro – sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.

For twenty years, I have been talking about old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies to keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors are less expensive to care for and can work longer. UBI is more intergenerationally equitable, especially if we face technological unemployment.

But as a person anticipating grandchildren, I think the declining fertility part of the demographic shift is more on my mind. It’s apparently on the minds of a growing number of people, including folks on the Right, ranging from those worried that feminists are pushing humanity to suicide or that there won’t be enough of their kind of people in the future to those worried about the health of innovation and the economy. The reluctance by the Left to entertain any pronatalism is understandable, given the reactionary ways it has been promoted. But I believe a progressive pro-family agenda is possible.