“We need a defined framework, but instead what we see here is a fairly wild race between labs,” one journal editor told me during the ISSCR meeting. “The overarching question is: How far do they go, and where do we place them in a legal-moral spectrum? How can we endorse working with these models when they are much further along than we were two years ago?”
So where will the race lead? Most scientists say the point of mimicking the embryo is to study it during the period when it would be implanting in the wall of the uterus. In humans, this moment is rarely observed. But stem-cell embryos could let scientists dissect these moments in detail.
Yet it’s also possible that these lab embryos turn out to be the real thing—so real that if they were ever transplanted into a person’s womb, they could develop into a baby.
Exciting news from SnT, the Interdisciplinary Centre for Security, Reliability and Trust at the University of Luxembourg: the 1st European Interstellar Symposium, organized in collaboration with partners including the Interstellar Research Group, the Initiative & Institute for Interstellar Studies, the Breakthrough Prize Foundation, and the Luxembourg Space Agency. This interdisciplinary symposium will delve into the profound questions surrounding interstellar travel, exploring topics such as human and robotic exploration, propulsion, exoplanet research, life support systems, and ethics. Join us to discuss how these insights will shape near-term applications on Earth and in space, covering technologies like optical communications, ultra-lightweight materials, and artificial intelligence. Abstract submission closes soon; see the Call for Papers to secure your spot. Image credit: Maciej Rębisz, Science Now Studio
How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.
These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized access to the internet and to coding, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: We must prioritize policies that allow us to harness AI's power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.
Enterprise versus consumer AI
Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value. That's why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.
Those who know Oxford University for its literary luminaries might be surprised to learn that some of the most important reflections on emerging technologies come from its hallowed halls. While the leading tech innovators in Silicon Valley capture imaginations with their bold visions of future singularities, mind-machine melding, and digital immortality by 2045, they rarely engage as deeply with the philosophical issues surrounding such developments as their like-minded scholars over the pond. This essay will briefly highlight some of the key contributions of Oxford University’s professors Nick Bostrom, Anders Sandberg, and Julian Savulescu to the transhumanist movement. It will also show how this movement’s focus on radical autonomy in biotechnical enhancements shapes the wider global bioethical conversation.
As the lead author of the Transhumanist FAQ, Bostrom provides the closest the movement has to an institutional catechism. He is, in a sense, the Ratzinger of Transhumanism. The first paragraph of the seminal text emphasizes the evolutionary vision of his school. Transhumanism’s incessant pursuit of radical technological transformation is “based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Current humans are but one intriguing yet greatly improvable iteration of human existence. Think of the first iPhone and how unattractive 2007’s most cutting-edge technology is in 2024.
In particular, transhumanists encourage radical physical, cognitive, mood, moral, and lifespan enhancements. The movement seeks to defeat humanity’s perennial enemies of aging, sickness, suffering, and death. Bostrom recognizes that he is facing the same foes as Christianity and other traditional religions. Yet he is confident that Transhumanism, through science and technology, will be far more successful than outdated superstitions. Biotechnological advances are more reliable for this worldly benefit than religion’s promises of some mysterious next life. Transhumanists claim no need for “supernatural powers or divine intervention” in their avowedly “naturalistic outlook” since they rely instead on “rational thinking and empiricism” and “continued scientific, technological, economic, and human development.” Nonetheless, Bostrom and his companions recognize that not all technology is created equal.
Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for 'AI' from late 2022, which reached a record high in February 2024.
You would therefore be forgiven for thinking that AI is suddenly and only recently a 'big thing.' Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 From its beginning, the field followed a meandering trajectory of technical successes and 'AI winters,' which eventually led to the large language models (LLMs) that have nudged AI into today's public consciousness.
Alongside those who aim to develop transformational AI as quickly as possible – the so-called 'Effective Accelerationism' movement, or 'e/acc' – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound inherent dangers of advanced AI – the 'decels' and 'doomers.'2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a 'superintelligence') would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training-set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency, and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I recently revisited the book to reconsider its arguments in the context of today's digital technology landscape.
Disgust is one of the six basic human emotions, along with happiness, sadness, fear, anger, and surprise. Disgust typically arises when a person perceives a sensory stimulus or situation as revolting, off-putting, or unpleasant in other ways.
When Descartes said "I think therefore I am," he probably didn't know that he was answering a security question. Biometrics – using behavioral or physical characteristics to identify people – has gotten a big boost in the EU. The Orwellian-sounding HUMABIO (Human Monitoring and Authentication using Biodynamic Indicators and Behavioral Analysis) is a well-funded research project that seeks to combine sensor technology with the latest in biometrics to find reliable and non-obtrusive ways to identify people quickly. One of their proposed methods: scanning your brain stem. That's right, in addition to reading your retinas, looking at your fingerprints, and monitoring your voice, the security systems of the future may be scanning your brain.
How could they actually read your brain? What kind of patterns would they use to authenticate your identity? Yeah, they haven't quite figured that out yet. HUMABIO is still firmly in the "pre-commercial" and "proof of concept" phase. They do have a nice ethics manual to read, and they've written some "stories" that illustrate the uses of their various works in progress, but they haven't produced a fieldable instrument yet. In fact, this aspect of the STREP (Specific Targeted Research Project) would hardly be remarkable if we didn't know more about the available technology from other sources.
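Whatever signal a system like this ultimately reads – brain stem, retina, or gait – the underlying authentication pattern is usually the same: enroll a feature vector as a template, then accept a later sample only if it is similar enough to that template. Here is a minimal, purely illustrative sketch of that enroll-and-verify loop; the `BiometricVerifier` class, the toy four-number feature vectors, and the 0.95 threshold are all hypothetical stand-ins, not anything HUMABIO has published.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class BiometricVerifier:
    """Toy template-matching verifier (hypothetical, for illustration only).

    Enrollment stores one feature vector per user; verification accepts
    a new sample only if its similarity to the stored template meets
    the threshold. Real systems use far richer features and matching.
    """

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.templates = {}

    def enroll(self, user_id, features):
        self.templates[user_id] = features

    def verify(self, user_id, features):
        template = self.templates.get(user_id)
        if template is None:
            return False  # unknown user: reject
        return cosine_similarity(template, features) >= self.threshold

verifier = BiometricVerifier(threshold=0.95)
verifier.enroll("alice", [0.9, 0.1, 0.4, 0.7])
# A sample close to the enrolled template is accepted...
print(verifier.verify("alice", [0.88, 0.12, 0.41, 0.69]))  # True
# ...while a very different one is rejected.
print(verifier.verify("alice", [0.1, 0.9, 0.2, 0.1]))      # False
```

The hard part – and the part HUMABIO hasn't solved – is the feature extraction that would turn a noisy physiological signal into a vector stable enough for this kind of thresholded matching.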
On the day of the ChatGPT-4o announcement, Sam Altman sat down to share behind-the-scenes details of the launch and offer his predictions for the future of AI. Altman delves into OpenAI’s vision, discusses the timeline for achieving AGI, and explores the societal impact of humanoid robots. He also expresses his excitement and concerns about AI personal assistants, highlights the biggest opportunities and risks in the AI landscape today, and much more.
(00:00) Intro
(00:50) The Personal Impact of Leading OpenAI
(01:44) Unveiling Multimodal AI: A Leap in Technology
(02:47) The Surprising Use Cases and Benefits of Multimodal AI
(03:23) Behind the Scenes: Making Multimodal AI Possible
(08:36) Envisioning the Future of AI in Communication and Creativity
(10:21) The Business of AI: Monetization, Open Source, and Future Directions
(16:42) AI's Role in Shaping Future Jobs and Experiences
(20:29) Debunking AGI: A Continuous Journey Towards Advanced AI
(24:04) Exploring the Pace of Scientific and Technological Progress
(24:18) The Importance of Interpretability in AI
(25:11) Navigating AI Ethics and Regulation
(27:26) The Safety Paradigm in AI and Beyond
(28:55) Personal Reflections and the Impact of AI on Society
(29:11) The Future of AI: Fast Takeoff Scenarios and Societal Changes
(30:59) Navigating Personal and Professional Challenges
(40:21) The Role of AI in Creative and Personal Identity
(43:09) Educational System Adaptations for the AI Era
(44:30) Contemplating the Future with Advanced AI
Summary: A new study explores the complex moral landscape of revenge, revealing that people’s reactions to revenge vary significantly based on the emotions displayed by the avenger. Conducted across four surveys involving Polish students and American adults, the study found that avengers who demonstrate satisfaction are viewed as more competent, whereas those expressing pleasure are seen as immoral.
These perceptions shift dramatically when individuals imagine themselves in the avenger's shoes: people tend to view their own acts of revenge as less moral than others'. The findings challenge conventional views on revenge, suggesting that societal and personal perspectives on morality and competence deeply influence judgments of vengeful actions.