Is our brain responsible for how we react to people who are different from us? Why can’t people with autism tell lies? How does the brain produce empathy? Why is imitation a fundamental trait of any social interaction? What are the secret advantages of teamwork? How does the social environment influence the brain? Why is laughter different from any other emotion?

This course is aimed at deepening our understanding of how the brain shapes and is shaped by social behavior, exploring a variety of topics such as the neural mechanisms behind social interactions, social cognition, theory of mind, empathy, imitation, mirror neurons, interacting minds, and the science of laughter.

Serious Science experts from leading universities worldwide answer these and other questions. This course offers a range of scientific perspectives on classical philosophical problems in ethics. It comprises 10 lectures filmed from 2014 to 2020. If you have any questions or comments on the content of this course, please write to us at [email protected].


From the article:

Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.


Based on a talk delivered at the conference on Existential Threats and Other Disasters: How Should We Address Them? May 30–31, 2024 – Budva, Montenegro – sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.

For twenty years, I have been talking about old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies to keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors are less expensive and work longer. UBI is more intergenerationally equitable, especially if we face technological unemployment.

But as a person anticipating grandchildren, I find the declining-fertility part of the demographic shift more on my mind. It's apparently on the minds of a growing number of people, including folks on the Right, ranging from those worried that feminists are pushing humanity toward suicide, or that there won't be enough of their kind of people in the future, to those worried about the health of innovation and the economy. The Left's reluctance to entertain any pronatalism is understandable, given the reactionary ways pronatalism has been promoted. But I believe a progressive pro-family agenda is possible.

“We need a defined framework, but instead what we see here is a fairly wild race between labs,” one journal editor told me during the ISSCR meeting. “The overarching question is: How far do they go, and where do we place them in a legal-moral spectrum? How can we endorse working with these models when they are much further along than we were two years ago?”

So where will the race lead? Most scientists say the point of mimicking the embryo is to study it during the period when it would be implanting in the wall of the uterus. In humans, this moment is rarely observed. But stem-cell embryos could let scientists dissect these moments in detail.

Yet it’s also possible that these lab embryos turn out to be the real thing—so real that if they were ever transplanted into a person’s womb, they could develop into a baby.

Want to go on an unforgettable trip? Abstract Submission closing soon! Exciting news from SnT, Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg! We are thrilled to announce the 1st European Interstellar Symposium in collaboration with esteemed partners like the Interstellar Research Group, Initiative & Institute for Interstellar Studies, Breakthrough Prize Foundation, and Luxembourg Space Agency. This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will impact near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence. Don’t miss this opportunity to connect with a community of experts and enthusiasts, all united in a common goal. Check out the “Call for Papers” link in the comment section to secure your spot! Image credit: Maciej Rębisz, Science Now Studio #interstellar #conference #Luxembourg #exoplanet

How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.


These initial steps ignited AI policy conversations amid the acceleration of innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: We must prioritize policies that allow us to harness AI's power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.

Enterprise versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) through consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.

Those who know Oxford University for its literary luminaries might be surprised to learn that some of the most important reflections on emerging technologies come from its hallowed halls. While the leading tech innovators in Silicon Valley capture imaginations with their bold visions of future singularities, mind-machine melding, and digital immortality by 2045, they rarely engage as deeply with the philosophical issues surrounding such developments as do their like-minded scholars across the pond. This essay will briefly highlight some of the key contributions of Oxford University's professors Nick Bostrom, Anders Sandberg, and Julian Savulescu to the transhumanist movement. It will also show how this movement's focus on radical autonomy in biotechnical enhancements shapes the wider global bioethical conversation.

As the lead author of the Transhumanist FAQ, Bostrom provides the closest the movement has to an institutional catechism. He is, in a sense, the Ratzinger of Transhumanism. The first paragraph of the seminal text emphasizes the evolutionary vision of his school. Transhumanism’s incessant pursuit of radical technological transformation is “based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Current humans are but one intriguing yet greatly improvable iteration of human existence. Think of the first iPhone and how unattractive 2007’s most cutting-edge technology is in 2024.

In particular, transhumanists encourage radical physical, cognitive, mood, moral, and lifespan enhancements. The movement seeks to defeat humanity's perennial enemies of aging, sickness, suffering, and death. Bostrom recognizes that he is facing the same foes as Christianity and other traditional religions. Yet he is confident that Transhumanism, through science and technology, will be far more successful than outdated superstitions. Biotechnological advances are a more reliable route to this-worldly benefit than religion's promises of some mysterious next life. Transhumanists claim no need for "supernatural powers or divine intervention" in their avowedly "naturalistic outlook" since they rely instead on "rational thinking and empiricism" and "continued scientific, technological, economic, and human development." Nonetheless, Bostrom and his companions recognize that not all technology is created equal.

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 From that beginning, a meandering trajectory of technical successes and ‘AI winters’ unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller, often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training-set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I recently revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

When Descartes said “I think, therefore I am,” he probably didn’t know that he was answering a security question. Biometrics, the use of behavioral or physical characteristics to identify people, has gotten a big boost in the EU. The Orwellian-sounding HUMABIO (Human Monitoring and Authentication using Biodynamic Indicators and Behavioral Analysis) is a well-funded research project that seeks to combine sensor technology with the latest in biometrics to find reliable and non-obtrusive ways to identify people quickly. One of its proposed methods: scanning your brain stem. That’s right, in addition to reading your retinas, looking at your fingerprints, and monitoring your voice, the security systems of the future may be scanning your brain.
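Whatever the modality (retina, fingerprint, voice, or brain signal), biometric verification typically reduces to the same template-matching idea: enroll a feature vector for a person, then accept a fresh sample only if it is similar enough to the enrolled template. The sketch below illustrates that generic scheme with cosine similarity; the feature vectors, threshold, and "brain-signal" framing are all hypothetical, not details of HUMABIO's (unpublished) methods.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(template, sample, threshold=0.95):
    # Accept the claimed identity only if the fresh sample is
    # sufficiently similar to the enrolled template.
    return cosine_similarity(template, sample) >= threshold

# Hypothetical enrolled feature vector and two probe samples.
enrolled = [0.9, 0.1, 0.4, 0.7]
genuine  = [0.88, 0.12, 0.41, 0.69]  # same person, slight sensor noise
impostor = [0.1, 0.9, 0.7, 0.2]      # different person

print(verify(enrolled, genuine))   # True: similarity well above threshold
print(verify(enrolled, impostor))  # False: vectors point in different directions
```

Picking the threshold is the hard part in practice: too strict and genuine users are rejected (false negatives), too loose and impostors slip through (false positives), which is exactly the reliability trade-off a project like HUMABIO would have to resolve before fielding anything.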

How could they actually read your brain? What kind of patterns would they use to authenticate your identity? Yeah, they haven’t quite figured that out yet. HUMABIO is still definitely in the “pre-commercial” and “proof of concept” phase. They do have a nice ethics manual to read, and they’ve actually written some “stories” that illustrate the uses of their various works in progress, but they haven’t produced a fieldable instrument yet. In fact, this aspect of the STREP (Specific Targeted Research Project) would hardly be remarkable if we didn’t know more about the available technology from other sources.