
From Watson to Sophia, who are the artificially intelligent robot personas of today, and what can they tell us about our future?

Siri. Alexa. Cortana. These familiar names are the modern-day Girl Fridays making everyone’s life easier. These virtual assistants, powered by artificial intelligence (AI), bring the digital tools of the information age to life. One of the subtle strategies designers use to make it easier for us to integrate AI into our lives is “anthropomorphism”: the attribution of human-like traits to non-human objects. However, the rise of AI with distinct personalities, voices, and physical forms is not as benign as it might seem. As futurists interested in the impacts of technology on society, we wonder what role human-like technologies play in achieving human-centred futures.

For example, do anthropomorphized machines enable a future wherein humanity can thrive? Or, do human-like AIs foreshadow a darker prognosis, particularly in relation to gender roles and work? This article looks at a continuum of human-like personas that give a face to AI technology. We ask: what does it mean for our collective future that technology is increasingly human-like and gendered? And, what does it tell us about our capacity to create a very human future?

The Women of AI

What happens when a man is merged with a computer or a robot? This is the question that Professor Kevin Warwick and his team at the Department of Cybernetics, University of Reading in the UK have been trying to answer for a number of years.

There are many ways to look at this problem. There is the longer-term prospect of freeing the mind from the limitations of the brain by uploading it in digital form, potentially onto a computer and/or robotic substrate (see the h+ interview with Dr. Bruce Katz, Will We Eventually Upload Our Minds?). There is also a shorter-term prospect at a much more limited scale: a robot controlled by human brain cells could soon be wandering around Professor Warwick’s UK labs.

Read more

Decades after Isaac Asimov first wrote his laws for robots, their ever-expanding role in our lives requires a radical new set of rules, legal and AI expert Frank Pasquale warned on Thursday.

The world has changed since the sci-fi author wrote his three rules for robots in 1942, including that they should never harm humans; today’s omnipresent computers and algorithms demand up-to-date measures.

According to Pasquale, author of “The Black Box Society: The Secret Algorithms That Control Money and Information”, four new legally inspired rules should be applied to robots and AI in our daily lives.

Read more

Air pollution is responsible for millions of deaths every year, worldwide. According to a State of Global Air report, air pollution is the fifth greatest global mortality risk.

“Air pollution is the fifth highest cause of death among all health risks, ranking just below smoking; each year, more people die from air pollution-related disease than from road traffic injuries or malaria.”

No wonder, then, that when Google.org issued an open call to organizations around the world to submit ideas for how they could use AI for societal challenges, one of the 20 winning organizations Google chose was out to address air pollution.

Read more

Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for the development of effective search-and-rescue robots that could one day improve the effectiveness of dangerous missions. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published their results today in the journal Science Robotics.

Most AI agents (computer systems that could endow robots or other machines with intelligence) are trained for very specific tasks, such as recognizing an object or estimating its volume, in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.

“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
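To make the glimpse-then-complete idea concrete, here is a toy sketch in Python. Everything in it (the random “panorama” features, the greedy farthest-angle looking rule, the nearest-neighbor completion) is an illustrative assumption, not the authors’ actual model, which learns where to look and how to complete the scene from data.

```python
# Toy sketch: take a few glimpses around a scene, then infer the unseen views.
# The data and policy here are illustrative stand-ins for a learned agent.
import numpy as np

rng = np.random.default_rng(0)

N_VIEWS = 12                               # discretized viewing angles
panorama = rng.normal(size=(N_VIEWS, 4))   # unknown ground-truth view features

def take_glimpse(angle):
    """Observe the feature vector at one viewing angle (with a little noise)."""
    return panorama[angle] + rng.normal(scale=0.05, size=4)

def circ_dist(a, b):
    """Distance between two angles on the circular panorama."""
    return min(abs(a - b), N_VIEWS - abs(a - b))

def complete_scene(observed):
    """Fill each unobserved view from the nearest observed angle."""
    est = np.empty_like(panorama)
    for v in range(N_VIEWS):
        nearest = min(observed, key=lambda s: circ_dist(v, s))
        est[v] = observed[nearest]
    return est

# Greedy "look-around" policy: each new glimpse goes to the angle farthest
# from everything seen so far, spreading coverage as quickly as possible.
observed = {0: take_glimpse(0)}
for _ in range(3):                         # only a few glimpses, as described
    unseen = [v for v in range(N_VIEWS) if v not in observed]
    nxt = max(unseen, key=lambda v: min(circ_dist(v, s) for s in observed))
    observed[nxt] = take_glimpse(nxt)

estimate = complete_scene(observed)
err = np.mean((estimate - panorama) ** 2)
print(f"glimpsed {len(observed)}/{N_VIEWS} views, completion MSE = {err:.3f}")
```

Even with this hand-coded rule, four well-spread glimpses constrain the whole panorama; the published agent replaces both the policy and the completion step with learned deep networks.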

Read more

Our brains have a remarkable knack for picking out individual voices in a noisy environment, like a crowded coffee shop or a busy city street. This is something that even the most advanced hearing aids struggle to do. But now Columbia engineers are announcing an experimental technology that mimics the brain’s natural aptitude for detecting and amplifying any one voice from many. Powered by artificial intelligence, this brain-controlled hearing aid acts as an automatic filter, monitoring wearers’ brain waves and boosting the voice they want to focus on.
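As a rough illustration of how such a brain-controlled filter might work, the sketch below separates a mixture into per-speaker streams, correlates each stream’s loudness envelope with an envelope presumed to be decoded from the listener’s brain activity, and boosts the best match. The synthetic signals and the simple correlation rule are assumptions for illustration only; the published system relies on far more sophisticated speech-separation and neural-decoding models.

```python
# Toy sketch of attention-decoded voice amplification: pick the speaker whose
# envelope best matches an envelope "decoded" from brain activity, boost it.
import numpy as np

rng = np.random.default_rng(1)
fs, secs = 100, 5                          # toy sample rate and duration
t = np.arange(fs * secs)

# Two separated speaker streams (as a speech-separation model might output),
# each with a different slow amplitude envelope.
streams = [rng.normal(size=t.size) * (1 + np.sin(2 * np.pi * f * t / fs))
           for f in (0.7, 1.3)]

def envelope(x, win=20):
    """Smoothed magnitude envelope of a signal."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

# Pretend a neural decoder recovered a noisy envelope of the attended talker
# (speaker 1 here); in the real device this comes from the wearer's brain waves.
attended = 1
decoded = envelope(streams[attended]) + rng.normal(scale=0.1, size=t.size)

# Pick the stream whose envelope correlates best with the decoded envelope...
scores = [np.corrcoef(envelope(s), decoded)[0, 1] for s in streams]
target = int(np.argmax(scores))

# ...and remix: amplify the attended voice, attenuate the others.
gains = [4.0 if i == target else 0.25 for i in range(len(streams))]
output = sum(g * s for g, s in zip(gains, streams))

print(f"correlations = {[round(c, 2) for c in scores]}, boosting speaker {target}")
```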

Though still in early stages of development, the technology is a significant step toward better hearing aids that would enable wearers to converse with the people around them seamlessly and efficiently. This achievement is described today in Science Advances.

“The area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,” said Nima Mesgarani, Ph.D., a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author. “By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.”

Read more