
You can buy What We Owe the Future here: https://www.basicbooks.com/titles/william-macaskill/what-we-…541618626/

In his new book about longtermism, What We Owe the Future, the philosopher William MacAskill argues that concern for the long-term future should be a key moral priority of our time. Three central claims justify this view: (1) future people matter; (2) there could be a lot of them; and (3) we can make their lives go better. In this video, we focus on the third claim.

We’ve had the opportunity to read What We Owe the Future in advance thanks to the Forethought Foundation. They reached out asking if we could make a video on the occasion of the book launch. We were happy to collaborate and to help spread the ideas of longtermist philosophy as far as possible.

Interested in donating to safeguard the long-term future of humanity? You can donate to an expert-managed fund at: https://www.givingwhatwecan.org/charities/longtermism-fund.


Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but in reality they are computational and lack human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, no AI today has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching.
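
To make “computational pattern matching” concrete, here is a minimal sketch in Python. It is our own toy illustration, not code from any system mentioned above: a nearest-neighbour classifier that labels new inputs purely by numeric similarity to stored examples, with no comprehension of what its labels mean.

import math

# Toy training data: (feature vector, label) pairs the system has "seen".
examples = [
    ((5.1, 3.5), "cat"),
    ((4.9, 3.0), "cat"),
    ((6.7, 3.1), "dog"),
    ((6.3, 2.8), "dog"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(features):
    """Label a new input by its single closest stored example.

    This is pure pattern matching: the function has no notion of what
    a "cat" or "dog" is, only of numeric similarity between vectors.
    """
    closest = min(examples, key=lambda ex: distance(ex[0], features))
    return closest[1]

print(predict((5.0, 3.4)))  # -> "cat", chosen by proximity alone

The output can look like recognition, but nothing in the sketch resembles common sense or understanding, which is the gap the paragraph above is pointing at.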


AI Asymmetry is getting larger and worsening, particularly with the advent of fully autonomous systems. Society needs to be aware of this and consider devising remedies, such as arming more people with AI, essentially fighting fire with fire.

Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.


Music:
Under licence.

Boston Dynamics gets into AI.


SEOUL/CAMBRIDGE, MA, August 12, 2022 – Hyundai Motor Group (the Group) today announced the launch of Boston Dynamics AI Institute (the Institute), with the goal of making fundamental advances in artificial intelligence (AI), robotics and intelligent machines. The Group and Boston Dynamics will make an initial investment of more than $400 million in the new Institute, which will be led by Marc Raibert, founder of Boston Dynamics.

As a research-first organization, the Institute will work on solving the most important and difficult challenges facing the creation of advanced robots. Elite talent across AI, robotics, computing, machine learning and engineering will develop technology for robots and use it to advance their capabilities and usefulness. The Institute’s culture is designed to combine the best features of university research labs with those of corporate development labs while working in four core technical areas: cognitive AI, athletic AI, organic hardware design, and ethics and policy.

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Marc Raibert, executive director of Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”

Online safety and ethics are serious issues, and lapses can adversely affect less experienced users. Researchers have built upon familiar human verification techniques to add an element of discrete learning to the process. This way, users can learn about online safety and ethics while simultaneously verifying that they are human. Trials show that users responded positively to the experience and felt they gained something from these microlearning sessions.

The internet is an integral part of modern living, for work, leisure, shopping, keeping in touch with people, and more. It’s hard to imagine that anyone could live in an affluent country, such as Japan, and not use the internet relatively often. Yet despite its ubiquity, the internet is far from risk-free. Issues of safety and security are of great concern, especially for those with less exposure to such things. So a team of researchers from the University of Tokyo, including Associate Professor Koji Yatani of the Department of Electrical Engineering and Information Systems, set out to help.

Surveys of internet users in Japan suggest that the vast majority have not had much opportunity to learn about ways in which they can stay safe and secure online. But it seems unreasonable to expect this same majority to intentionally seek out the kind of information they would need to educate themselves. To address this, Yatani and his team thought they could instead introduce information about online safety and ethics into a typical user’s daily experience. They chose to take advantage of something that many users come across often during their usual online activities: human verification.
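
As a purely illustrative sketch (the questions, function name, and flow here are our assumptions, not the researchers’ actual implementation), a verification step of this kind might fold a one-question safety micro-lesson into the usual “are you human?” check:

import random

# Hypothetical micro-lessons: each pairs a short safety question with
# answer options and a one-line explanation shown after the reply.
LESSONS = [
    {
        "question": "A site asks for your password by email. Should you reply?",
        "options": ["yes", "no"],
        "answer": "no",
        "explanation": "Legitimate services never request passwords by email.",
    },
    {
        "question": "Is it safe to reuse one password across many sites?",
        "options": ["yes", "no"],
        "answer": "no",
        "explanation": "A breach on one site would expose all your accounts.",
    },
]

def verify_human_with_microlearning():
    """Run one round: a coherent answer doubles as evidence of a human
    user, and the explanation delivers the micro-lesson."""
    lesson = random.choice(LESSONS)
    print(lesson["question"])
    reply = input("(" + "/".join(lesson["options"]) + "): ").strip().lower()
    print(lesson["explanation"])
    return reply == lesson["answer"]  # treat a sensible reply as passing

if __name__ == "__main__":
    print("verified:", verify_human_with_microlearning())

This only shows the shape of the idea, learning paired with verification; the exact mechanics belong to the researchers’ design.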

They call it the holy grail of artificial intelligence research: Building a computer as smart as we are. Some say it could help eradicate poverty and create a more equal society – while others warn that it could become a threat to our very existence. But how far are we from reaching such “artificial general intelligence”? And what happens if machines, at some point, outsmart us?

In this episode of Techtopia, DW Chief Technology Correspondent Janosch Delcker takes a deep dive into the world of AI, meeting a scientist on the quest for human-level intelligence, and digging into the questions awaiting us as humanity inches closer to AGI.

Chapters:
0:00 Intro
0:39 What to expect
1:18 The Iceland case
3:19 What is “AI”?
5:10 The long history of AI
7:17 The ups and downs of AI
8:12 How close is today’s AI to human-level intelligence?
11:54 What if machines become smarter than we are?
13:30 What will AI look like?
15:45 The ethics of artificial intelligence
19:13 How far are we from reaching human-level AI?
20:43 The power of Big Tech
21:23 The Iceland case conclusion
21:52 Outro


Within minutes of the final heartbeat, a cascade of biochemical events triggered by a lack of blood flow, oxygen, and nutrients begins to destroy a body’s cells and organs. But a team of Yale scientists has found that massive and permanent cellular failure doesn’t have to happen so quickly.


The researchers stressed that additional studies are necessary to understand the apparently restored motor functions in the animals, and that rigorous ethical review from other scientists and bioethicists is required.

The experimental protocols for the latest study were approved by Yale’s Institutional Animal Care and Use Committee and guided by an external advisory and ethics committee.

The OrganEx technology could eventually have several potential applications, the authors said. For instance, it could extend the life of organs in human patients and expand the availability of donor organs for transplant. It might also be able to help treat organs or tissue damaged by ischemia during heart attacks or strokes.

By Natasha Vita-More.

Has the concept of the technological singularity changed from the late 1990s to 2019?

As a theoretical concept, it has become more recognized. As a potential threat, it is significantly written and talked about. Because the field of narrow AI is growing, machine learning has found a place in academia, and entrepreneurs are investing in the growth of AI, tech leaders have come to the table and voiced their concerns, especially Bill Gates, Elon Musk, and the late Stephen Hawking. The concept of existential risk has taken a central position in discussions about AI, and machine ethicists are preparing arguments toward a consensus that near-future robots will force us to rethink the exponential advances in robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and what the true goal of machine ethics is.