An ever-increasing number of research groups are developing tiny robots capable of performing targeted drug delivery inside the body. One of the latest such devices incorporates a flapping tail, resembling a whale's flukes, along with wings that fold up or down as needed.
Decades after Isaac Asimov first wrote his laws for robots, their ever-expanding role in our lives requires a radical new set of rules, legal and AI expert Frank Pasquale warned on Thursday.
The world has changed since the sci-fi author wrote his three rules for robots in 1942, including the rule that they should never harm humans, and today's omnipresent computers and algorithms demand up-to-date measures.
According to Pasquale, author of “The Black Box Society: The Secret Algorithms Behind Money and Information”, four new legally-inspired rules should be applied to robots and AI in our daily lives.
Air pollution is responsible for millions of deaths worldwide every year. According to a State of Global Air report, it is the fifth greatest global mortality risk.
“Air pollution is the fifth highest cause of death among all health risks, ranking just below smoking; each year, more people die from air pollution related disease than from road traffic injuries or malaria.”
No wonder, then, that when Google.org issued an open call for organizations around the world to submit ideas for how they could use AI to tackle societal challenges, one of the 20 winning organizations it chose was working to address air pollution.
Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for developing effective search-and-rescue robots that could one day make dangerous missions more effective. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published its results today in the journal Science Robotics.
Most AI agents — computer systems that could endow robots or other machines with intelligence — are trained for very specific tasks — such as to recognize an object or estimate its volume — in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” Grauman said. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
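Grauman's agent learns where to look; purely as an illustration of the "few glimpses, whole scene" idea, the toy sketch below greedily picks glimpse directions on a 1-D panorama so that each new glimpse reveals as much unseen territory as possible. The function name and setup are invented for this sketch; the team's actual system learns a general-purpose policy from data rather than using a hand-coded coverage heuristic.

```python
import numpy as np

def greedy_glimpses(width, fov, budget):
    """Toy look-around policy (illustrative only): choose glimpse
    directions that greedily maximize newly observed cells of a
    1-D panorama of `width` cells, where each glimpse reveals a
    window of `fov` consecutive cells and the agent gets `budget`
    glimpses in total. Returns the chosen directions and the
    fraction of the panorama observed."""
    seen = np.zeros(width, dtype=bool)
    chosen = []
    for _ in range(budget):
        best_dir, best_gain = 0, -1
        for d in range(width):
            window = [(d + k) % width for k in range(fov)]
            gain = int(np.sum(~seen[window]))  # cells this glimpse would newly reveal
            if gain > best_gain:
                best_dir, best_gain = d, gain
        chosen.append(best_dir)
        seen[[(best_dir + k) % width for k in range(fov)]] = True
    return chosen, float(seen.mean())

# Four 9-cell glimpses can tile a 36-cell panorama completely.
dirs, coverage = greedy_glimpses(width=36, fov=9, budget=4)
print(dirs, coverage)  # four non-overlapping glimpses, full coverage
```

A learned agent would go further, predicting the content of cells it has not yet seen; this sketch only captures the budgeted, information-seeking flavor of the task.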
Our brains have a remarkable knack for picking out individual voices in a noisy environment, like a crowded coffee shop or a busy city street. This is something that even the most advanced hearing aids struggle to do. But now Columbia engineers are announcing an experimental technology that mimics the brain’s natural aptitude for detecting and amplifying any one voice from many. Powered by artificial intelligence, this brain-controlled hearing aid acts as an automatic filter, monitoring wearers’ brain waves and boosting the voice they want to focus on.
Though still in early stages of development, the technology is a significant step toward better hearing aids that would enable wearers to converse with the people around them seamlessly and efficiently. This achievement is described today in Science Advances.
“The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,” said Nima Mesgarani, Ph.D., a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author. “By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.”
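The pipeline the article describes, separating the mixture into individual voices, decoding from brain waves which voice the listener is attending to, and then boosting that voice, can be sketched as below. Here the speaker separation and EEG decoding stages are stubbed out with simple amplitude envelopes, and all function names are illustrative, not the authors' actual implementation: the attended speaker is chosen by correlating a neurally decoded envelope against each candidate voice's envelope.

```python
import numpy as np

def envelope(signal, win=400):
    """Crude amplitude envelope: moving average of |signal|."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def select_attended(decoded_env, speaker_signals):
    """Pick the separated speaker whose amplitude envelope best
    correlates with the envelope decoded from brain activity."""
    scores = [np.corrcoef(decoded_env, envelope(s))[0, 1]
              for s in speaker_signals]
    return int(np.argmax(scores))

def remix(speaker_signals, attended_idx, gain=4.0):
    """Amplify the attended voice relative to the others."""
    out = np.zeros_like(speaker_signals[0])
    for i, s in enumerate(speaker_signals):
        out += (gain if i == attended_idx else 1.0) * s
    return out / (gain + len(speaker_signals) - 1)

# Toy demo: two "voices" = tones with distinct slow amplitude modulations.
fs, t = 8000, np.linspace(0, 1, 8000)
voice_a = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 2 * t))
voice_b = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 5 * t))

# Pretend the EEG decoder reconstructed a noisy envelope of voice A.
rng = np.random.default_rng(0)
decoded = envelope(voice_a) + 0.05 * rng.standard_normal(len(t))

idx = select_attended(decoded, [voice_a, voice_b])
output = remix([voice_a, voice_b], idx)
```

In the real device, the separation stage is a neural network rather than a fixed envelope, and the decoding stage reconstructs speech features from invasively or non-invasively recorded neural signals; the correlate-and-boost loop above is only the skeleton of the idea.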
As Northrop Grumman’s NG-11 Cygnus spacecraft flew overhead in low Earth orbit, NASA astronauts at the Johnson Space Center recently completed testing and evaluation of the company’s Earth-based, full-scale cislunar habitat mockup.
Designed to test ergonomics, feature layout and functional compatibility with basic “day-in-the-life” astronaut tasks for potential long-term use as part of the future Lunar Gateway in cislunar space, the habitat mockup necessarily incorporated all the core elements a four-person Orion crew would eventually need: sleep stations, a galley, crew exercise equipment and, of course, accommodations for science, a robotics workstation and life support systems.
The consortium, called LifeTime, aims to use three emerging technologies—machine learning, the study of single cells, and lab-grown organlike tissues called organoids—to map how human cells change over time and develop diseases. It is one of six candidates in the latest round of ambitious proposals for European flagships, billion-euro research projects intended to run for 10 years. There is just one snag: The European Commission has decided that it won’t launch any of them.
Six candidate research proposals are now lost in limbo.