
AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

I have a question for you that seems to be garnering a lot of handwringing and heated debates these days. Are you ready? Will humans outlive AI? Mull that one over. I am going to unpack the question and examine closely the answers and how the answers have been elucidated. My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics.


A worthy question is whether humans will outlive AI, though the worthiness of the question is perhaps different than what you think it is. All in all, important AI Ethics ramifications arise.

William MacAskill: ‘There are 80 trillion people yet to come. They need us to start protecting them’

All those numbers seem incalculably abstract but, according to the moral philosopher William MacAskill, they should command our attention. He is a proponent of what’s known as longtermism – the view that the deep future is something we have to address now. How long we last as a species and what kind of state of wellbeing we achieve, says MacAskill, may have a lot to do with what decisions we make and actions we take at the moment and in the foreseeable future.

That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.

We tend to think of moral philosophers as whiskery sages, but MacAskill is a youthful 35 and a disarmingly informal character in person, or rather on a Zoom call from San Francisco, where he is promoting the book.

How Scientists Revived Organs in Pigs an Hour After They Died

Although OrganEx helps revitalize pigs’ organs, it is far from bringing a deceased animal back to life. Rather, the organs were better protected from the low oxygen levels that occur during heart attacks or strokes.

“One could imagine that the OrganEx system (or components thereof) might be used to treat such people in an emergency,” said Porte.

The technology could also help preserve donor organs, but there’s a long way to go. To Dr. Brendan Parent, director of transplant ethics and policy research at NYU Grossman School of Medicine, OrganEx may force a rethink for the field. For example, is it possible that someone could have working peripheral organs but never regain consciousness? As medical technology develops, death becomes a process, not a moment.

Can we make the future a million years from now go better?

You can buy What We Owe the Future here: https://www.basicbooks.com/titles/william-macaskill/what-we-…541618626/

In his new book about longtermism, What We Owe the Future, the philosopher William MacAskill argues that concern for the long-term future should be a key moral priority of our time. There are three central claims that justify this view. 1. Future people matter. 2. There could be a lot of them. 3. We can make their lives go better. In this video, we focus on the third claim.

We’ve had the opportunity to read What We Owe the Future in advance thanks to the Forethought Foundation. They reached out asking if we could make a video on the occasion of the book launch. We were happy to collaborate, to help spread the ideas of the longtermist philosophy as far as possible.

Interested in donating to safeguard the long-term future of humanity? You can donate to an expert-managed fund at: https://www.givingwhatwecan.org/charities/longtermism-fund.


AI Ethics Wary About Worsening Of AI Asymmetry Amid Humans Getting The Short End Of The Stick

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that they are computational and lack human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, no AI today has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching.
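To make the phrase “computational pattern matching” concrete, here is a minimal, hypothetical sketch: a one-nearest-neighbor classifier that labels a new input purely by its distance to stored examples. The data and labels are invented for illustration; nothing here resembles any particular production AI, but it shows how a system can produce human-seeming answers with no understanding at all.

```python
# Minimal illustration of computational pattern matching: a 1-nearest-neighbor
# "classifier" that labels a new point by copying the label of the closest
# stored example. No comprehension, no common sense -- just distance math.
import math

# Toy training data (hypothetical): (feature vector, label)
examples = [
    ((0.1, 0.2), "cat"),
    ((0.9, 0.8), "dog"),
    ((0.2, 0.1), "cat"),
    ((0.8, 0.9), "dog"),
]

def classify(point):
    """Return the label of the nearest stored example (Euclidean distance)."""
    nearest = min(examples, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((0.15, 0.15)))  # falls in the "cat" cluster
print(classify((0.85, 0.85)))  # falls in the "dog" cluster
```

Real ML/DL systems use vastly larger datasets and learned representations rather than raw coordinates, but the underlying principle is the same: outputs are driven by statistical proximity to past patterns, not by reasoning.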


AI Asymmetry is growing and worsening, particularly with the advent of fully autonomous systems. Society needs to be aware of this and consider devising remedies, such as arming more people with AI to essentially fight fire with fire.

Roadmap: AI’s next big steps in the world (BCIs, Xiaomi CyberOne, Tesla Optimus, UBI, Sam Altman…)

Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.


Hyundai Motor Group Launches Boston Dynamics AI Institute to Spearhead Advancements in Artificial Intelligence & Robotics

Boston Dynamics gets into AI.


SEOUL/CAMBRIDGE, MA, August 12, 2022 – Hyundai Motor Group (the Group) today announced the launch of Boston Dynamics AI Institute (the Institute), with the goal of making fundamental advances in artificial intelligence (AI), robotics and intelligent machines. The Group and Boston Dynamics will make an initial investment of more than $400 million in the new Institute, which will be led by Marc Raibert, founder of Boston Dynamics.

As a research-first organization, the Institute will work on solving the most important and difficult challenges facing the creation of advanced robots. Elite talent across AI, robotics, computing, machine learning and engineering will develop technology for robots and use it to advance their capabilities and usefulness. The Institute’s culture is designed to combine the best features of university research labs with those of corporate development labs while working in four core technical areas: cognitive AI, athletic AI, organic hardware design, and ethics and policy.

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Marc Raibert, executive director of the Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”

Digital security dialogue: Leveraging human verification to educate people about online safety

Online safety and ethics are serious issues that can adversely affect less experienced users. Researchers have built upon familiar human verification techniques to add an element of discreet learning into the process. This way, users can learn about online safety and ethics while simultaneously verifying that they are human. Trials show that users responded positively to the experience and felt they gained something from these microlearning sessions.

The internet is an integral part of modern living, for work, leisure, shopping, keeping in touch with people, and more. It’s hard to imagine that anyone could live in an affluent country, such as Japan, and not use the internet relatively often. Yet despite its ubiquity, the internet is far from risk-free. Issues of safety and security are of great concern, especially for those with less exposure to such things. So a team of researchers from the University of Tokyo, including Associate Professor Koji Yatani of the Department of Electrical Engineering and Information Systems, set out to help.

Surveys of internet users in Japan suggest that a vast majority have not had much opportunity to learn about ways in which they can stay safe and secure online. But it seems unreasonable to expect this same majority to intentionally seek out the kind of information they would need to educate themselves. To address this, Yatani and his team thought they could instead introduce information about online safety and ethics into a typical user’s daily experience. They chose to take advantage of something that many users come across often during their usual online activities: human verification.
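The idea of folding a micro-lesson into a verification step can be sketched as follows. This is a conceptual illustration only, assuming a simple multiple-choice check; it is not the University of Tokyo team’s actual system, and the question, choices, and function names are invented.

```python
# Conceptual sketch (hypothetical, not the actual research system): a
# verification step that doubles as a safety micro-lesson. Picking the safe
# choice both suggests a human is answering and reinforces a security habit.
QUESTION = "You receive an email asking for your password. What should you do?"
CHOICES = {
    "a": "Reply with the password so the sender can help you",
    "b": "Ignore or report it; legitimate services never ask for passwords",
    "c": "Forward the password to a friend to check it first",
}
SAFE_CHOICE = "b"
LESSON = "Legitimate services never ask for your password by email."

def verify(answer: str):
    """Return (passed, feedback). Either path delivers the micro-lesson."""
    if answer.strip().lower() == SAFE_CHOICE:
        return True, "Verified. " + LESSON
    return False, "Try again. Hint: " + LESSON

passed, feedback = verify("b")
print(passed, feedback)
```

The design point is that even a failed attempt surfaces the lesson, so the verification step teaches regardless of outcome, which matches the “microlearning during routine activity” framing described above.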
