Training a neural net to decode a person’s brain signals and send them to a robotic limb led to better, more precise control over prosthetics.
Category: robotics/AI
Our smartphones are cold, passive devices that usually can’t move autonomously unless they’re falling onto our faces while we’re looking at them in bed. A research team in France is exploring ways to change that by giving our smartphones the ability to interact with us more (via New Scientist). MobiLimb is a robotic finger attachment that plugs in through a smartphone’s Micro USB port, moves using five servo motors, and is powered by an Arduino microcontroller. It can tap the user’s hand in response to phone notifications, be used as a joystick controller, or, with the addition of a little fuzzy sheath accessory, it can turn into a cat tail.
AIST recently unveiled HRP-5, the fifth iteration of its humanoid robot design, at the International Conference on Intelligent Robots and Systems. The long-term vision for the robot is to assist, not replace, workers with certain repetitive tasks. HRP-4 weighed 39 kg, stood 151 cm tall, and had a 0.5 kg maximum payload per hand, so the newest version will most likely boast, at the least, an increased payload and improved sensor technology.
This is not the end of the world but the end of competition as we know it.
Click investigates the rise of the robot butler, looking at whether voice-controlled personal assistants live up to the hype and at the potential dangers of living in a world where all electronics are controlled by voice.
General interest.
IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies. They report on their recent findings in the Journal of Applied Physics.
Today’s computers are built on the von Neumann architecture, developed in the 1940s. Von Neumann computing systems feature a central processor that executes logic and arithmetic, a memory unit, storage, and input and output devices. In contrast to these siloed components in conventional computers, the authors propose that brain-inspired computers could have coexisting processing and memory units.
Abu Sebastian, an author on the paper, explained that executing certain computational tasks in the computer’s memory would increase the system’s efficiency and save energy.
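The paper itself is not reproduced here, but the shift it describes, doing arithmetic where the data is stored rather than shuttling every operand to a central processor, can be illustrated with a toy sketch. The NumPy code below is only an assumption-laden caricature (the function names, array sizes, and noise model are invented for illustration, not taken from IBM's designs): it contrasts an explicit fetch-and-multiply loop with an idealized in-memory matrix-vector multiply that trades some analog precision for locality.

```python
# Illustrative sketch only -- not IBM's actual design. Contrasts a
# von Neumann-style loop (fetch each weight, then multiply) with an
# "in-memory" matrix-vector multiply, where the stored values compute
# the product in place (idealized here as one array operation plus noise).
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(256, 128))   # stands in for stored conductances
x = rng.normal(size=128)                # input activations / voltages

def von_neumann_mvm(W, v):
    """Every term requires shuttling a weight from memory to the processor."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            out[i] += W[i, j] * v[j]
    return out

def in_memory_mvm(W, v, noise=0.01):
    """The memory array computes the whole product where the data lives,
    at the cost of analog imprecision (modeled as additive noise)."""
    return W @ v + rng.normal(scale=noise, size=W.shape[0])

exact = von_neumann_mvm(weights, x)
approx = in_memory_mvm(weights, x)
print("max absolute deviation:", np.max(np.abs(exact - approx)))
```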
Quantum computing isn’t going to revolutionize AI anytime soon, according to a panel of experts in both fields.
Different worlds: Yoshua Bengio, one of the fathers of deep learning, joined quantum computing experts from IBM and MIT for a panel discussion yesterday. Participants included Peter Shor, the man behind the most famous quantum algorithm. Bengio said he was keen to explore new computer designs, and he peppered his co-panelists with questions about what a quantum computer might be capable of.
Quantum leaps: The panel's quantum experts explained that while quantum computers are scaling up, it will be a while (we're talking years here) before they can do any useful machine learning, partly because many extra qubits will be needed for the necessary error correction. To complicate things further, it isn't yet clear what, exactly, quantum computers will be able to do better than their classical counterparts. But both Aram Harrow of MIT and IBM's Kristian Temme said that early research on quantum machine learning is under way.
When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.
MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to cross a room to reach a door, for instance, will build a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: robots can't leverage information about how they or other agents acted previously in similar environments.
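The MIT model itself is not shown here; as a rough, hedged illustration of the kind of tree-based planner the paragraph describes, the sketch below grows a rapidly-exploring random tree (RRT) toward a goal while avoiding a circular obstacle. All names, coordinates, and parameters are invented for the example.

```python
# Minimal RRT-style planner sketch (an assumption about the kind of
# tree-based search described in the article; not MIT's actual model).
# The tree grows by sampling random points and extending the nearest node
# toward each sample until it gets close to the goal.
import math
import random

random.seed(1)

start, goal = (0.0, 0.0), (9.0, 9.0)
obstacles = [((4.0, 4.0), 1.5)]          # (center, radius) circles
step, goal_tol = 0.5, 0.5

def collides(p):
    return any(math.dist(p, c) <= r for c, r in obstacles)

tree = {start: None}                     # node -> parent
for _ in range(5000):
    sample = (random.uniform(0, 10), random.uniform(0, 10))
    nearest = min(tree, key=lambda n: math.dist(n, sample))
    d = math.dist(nearest, sample)
    if d == 0:
        continue
    new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
           nearest[1] + step * (sample[1] - nearest[1]) / d)
    if collides(new):
        continue
    tree[new] = nearest
    if math.dist(new, goal) < goal_tol:  # goal reached: walk back up the tree
        path = [new]
        while tree[path[-1]] is not None:
            path.append(tree[path[-1]])
        print("path found with", len(path), "nodes")
        break
```

The learning the MIT work adds would sit on top of a planner like this, biasing which branches get explored based on what the robot and other agents did in similar environments before.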
By translating a key human physical skill, maintaining whole-body balance, into a mathematical equation, the team was able to use the formula to program their robot Mercury, which was built and tested over the course of six years. They calculated the margin of error beyond which the average person loses balance and falls while walking to be a simple figure: 2 centimeters.
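The team's actual formula is not given in the article; as a hedged sketch of the kind of relation involved, the capture-point criterion from the linear inverted pendulum model (a standard balance condition in humanoid robotics, assumed here rather than drawn from the paper) ties a fall threshold like the 2 cm figure to the robot's center-of-mass state:

```latex
% Capture-point balance criterion (linear inverted pendulum model) -- an
% assumed illustration, not the team's published equation.
% x: center-of-mass position, \dot{x}: its velocity, z_0: its height, g: gravity.
\[
  x_{\mathrm{cp}} = x + \frac{\dot{x}}{\omega}, \qquad
  \omega = \sqrt{\frac{g}{z_0}}
\]
% Balance is recoverable only while the capture point stays inside the
% support region under the feet; the 2 cm figure quoted above would play
% the role of the allowable margin \delta below.
\[
  \operatorname{dist}\!\bigl(x_{\mathrm{cp}},\ \text{support region}\bigr) \le \delta,
  \qquad \delta \approx 2\ \mathrm{cm}
\]
```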
“Essentially, we have developed a technique to teach autonomous robots how to maintain balance even when they are hit unexpectedly, or a force is applied without warning,” Sentis said. “This is a particularly valuable skill we as humans frequently use when navigating through large crowds.”
Sentis said their technique has been successful in dynamically balancing both bipeds without ankle control and full humanoid robots.