
Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.

Boston Dynamics gets into AI.


SEOUL/CAMBRIDGE, MA, August 12, 2022 – Hyundai Motor Group (the Group) today announced the launch of Boston Dynamics AI Institute (the Institute), with the goal of making fundamental advances in artificial intelligence (AI), robotics and intelligent machines. The Group and Boston Dynamics will make an initial investment of more than $400 million in the new Institute, which will be led by Marc Raibert, founder of Boston Dynamics.

As a research-first organization, the Institute will work on solving the most important and difficult challenges facing the creation of advanced robots. Elite talent across AI, robotics, computing, machine learning and engineering will develop technology for robots and use it to advance their capabilities and usefulness. The Institute’s culture is designed to combine the best features of university research labs with those of corporate development labs while working in four core technical areas: cognitive AI, athletic AI, organic hardware design, and ethics and policy.

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Marc Raibert, executive director of the Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”

Online safety and ethics are serious issues and can adversely affect less experienced users. Researchers have built upon familiar human verification techniques to add an element of discrete learning into the process. This way users can learn about online safety and ethics issues while simultaneously verifying they are human. Trials show that users responded positively to the experience and felt they gained something from these microlearning sessions.

The internet is an integral part of modern living, for work, leisure, shopping, keeping in touch with people, and more. It’s hard to imagine that anyone could live in an affluent country, such as Japan, and not use the internet relatively often. Yet despite its ubiquity, the internet is far from risk-free. Issues of safety and security are of great concern, especially for those with less exposure to such things. So a team of researchers from the University of Tokyo, including Associate Professor Koji Yatani of the Department of Electrical Engineering and Information Systems, set out to help.

Surveys of internet users in Japan suggest that a vast majority have not had much opportunity to learn about ways in which they can stay safe and secure online. But it seems unreasonable to expect this same majority to intentionally seek out the kind of information they would need to educate themselves. To address this, Yatani and his team thought they could instead introduce learning about online safety and ethics into a typical user’s daily experience. They chose to take advantage of something that many users will come across often during their usual online activities: human verification.
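To make the idea concrete, here is a minimal sketch of what a verification step of this kind might look like, assuming a simple console interaction; the lesson content and function names below are hypothetical illustrations, not the Tokyo team’s actual implementation:

```python
import random

# Hypothetical micro-lessons: each pairs a short online-safety tip with a
# comprehension question whose answer follows directly from the tip.
LESSONS = [
    {
        "tip": "Legitimate banks never ask for your full password by email.",
        "question": "Should you email your full password if your 'bank' asks for it?",
        "answer": "no",
    },
    {
        "tip": "A padlock icon means the connection is encrypted, not that the site itself is trustworthy.",
        "question": "Does a padlock icon guarantee a site is trustworthy?",
        "answer": "no",
    },
]

def verify_human_with_microlearning() -> bool:
    """Show a safety tip, then verify the user by quizzing them on it.

    Answering correctly serves double duty: it suggests a human read and
    understood the tip, and it reinforces the lesson just delivered.
    """
    lesson = random.choice(LESSONS)
    print(f"Tip: {lesson['tip']}")
    print(f"{lesson['question']} (yes/no)")
    reply = input("> ").strip().lower()
    return reply == lesson["answer"]

if __name__ == "__main__":
    print("Verified!" if verify_human_with_microlearning() else "Please try again.")
```

In a real deployment the check would need to be harder for bots to game, for example by randomising the phrasing or pairing the question with a conventional image challenge; the sketch only shows how a micro-lesson and a verification step can share a single interaction.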

They call it the holy grail of artificial intelligence research: Building a computer as smart as we are. Some say it could help eradicate poverty and create a more equal society – while others warn that it could become a threat to our very existence. But how far are we from reaching such “artificial general intelligence”? And what happens if machines, at some point, outsmart us?

In this episode of Techtopia, DW Chief Technology Correspondent Janosch Delcker takes a deep dive into the world of AI, meeting a scientist on the quest for human-level intelligence, and digging into the questions awaiting us as humanity inches closer to AGI.

Chapters:
0:00 Intro
0:39 What to expect
1:18 The Iceland case
3:19 What is “AI”?
5:10 The long history of AI
7:17 The ups and downs of AI
8:12 How close is today’s AI to human-level intelligence?
11:54 What if machines become smarter than we are?
13:30 What will AI look like?
15:45 The ethics of artificial intelligence
19:13 How far are we from reaching human-level AI?
20:43 The power of Big Tech
21:23 The Iceland case conclusion
21:52 Outro


Within minutes of the final heartbeat, a cascade of biochemical events triggered by a lack of blood flow, oxygen, and nutrients begins to destroy a body’s cells and organs. But a team of Yale scientists has found that massive and permanent cellular failure doesn’t have to happen so quickly.


The researchers stressed that additional studies are necessary to understand the apparently restored motor functions in the animals, and that rigorous ethical review from other scientists and bioethicists is required.

The experimental protocols for the latest study were approved by Yale’s Institutional Animal Care and Use Committee and guided by an external advisory and ethics committee.

The OrganEx technology could eventually have several potential applications, the authors said. For instance, it could extend the life of organs in human patients and expand the availability of donor organs for transplant. It might also be able to help treat organs or tissue damaged by ischemia during heart attacks or strokes.

By Natasha Vita-More.

Has the concept of the technological singularity changed between the late 1990s and 2019?

As a theoretical concept it has become more recognized; as a potential threat, it is widely written and talked about. Because the field of narrow AI is growing, machine learning has found a place in academia, and entrepreneurs are investing in the growth of AI, tech leaders have come to the table and voiced their concerns, especially Bill Gates, Elon Musk, and the late Stephen Hawking. The concept of existential risk has taken a central position within discussions about AI, and machine ethicists are preparing their arguments toward a consensus that near-future robots will force us to rethink the exponential advances in the fields of robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and the true goal of machine ethics.

Can the sum of knowledge and experience we’ve accumulated over a lifetime live on after we die? The concept of “mind-uploading” is a modern version of an age-old human dream. Transhumanism hopes not only to enhance human capacities but even to transcend human limitations such as bodily death.

The main character of Oscar Wilde’s famous novel The Picture of Dorian Gray wishes for eternal youth. And his wish is fulfilled: Dorian Gray remains young and exquisitely beautiful, whereas his portrait grows old, bearing the burden of aging, human shortcomings and imperfections. As we know, the story ended badly for Dorian.

In our time, scientific discoveries and new technologies promise to bring us closer to his dream. And no deal with the Devil is needed for doing so: once we understand how to manipulate the building blocks of life as well as the material foundations of our consciousness, emotions and character traits, so the story goes, we will be able to broaden human nature and overcome its inherent limitations such as aging, suffering and cognitive, emotional and moral shortcomings.

There may be some very compelling tools and platforms that promise fair and balanced AI, but tools and platforms alone won’t deliver ethical AI solutions, says Reid Blackman, who provides avenues to overcome thorny AI ethics issues in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). He provides ethics advice to developers working with AI because, in his own words, “tools are efficiently and effectively wielded when their users are equipped with the requisite knowledge, concepts, and training.” To that end, Blackman provides some of the insights development and IT teams need to have to deliver ethical AI.

Don’t worry about dredging up your Philosophy 101 class notes

Considering prevailing ethical and moral theories and applying them to AI work “is a terrible way to build ethically sound AI,” Blackman says. Instead, work collaboratively with teams on practical approaches. “What matters for the case at hand is what [your team members] think is an ethical risk that needs to be mitigated and then you can get to work collaboratively identifying and executing on risk-mitigation strategies.”