In the George Lucas classic Star Wars, hero Luke Skywalker’s arm is severed during a lightsaber duel, and he is fitted with a bionic arm that he can use as if it were his own limb. At the time the script was written, such a remedy was pure science fiction; today, however, the ability to manufacture bionic arms with the functionality and even the feel of a natural limb is becoming very real, with goals of launching a prototype as soon as 2009. Primates have already been trained to feed themselves with a robotic arm merely by thinking about it, and brain sensors have been picking up their brain-signal patterns since 2003. The time has come to bring this technology to paralyzed patients and amputees. This article provides a brief explanation of the technology, its current status, and the potential future it holds.
Although we strive toward minimally invasive BMI technology, public adoption will (like that of autonomous AI) be limited until public and even private networks are updated and secured with QC.
The driver mentally commands the car to accelerate or brake. True, the autopilot is a joy and a wonder to marvel at, but what about those who still love driving? Good news: soon enough it will be possible to drive with your mind.
Computers are getting smarter, but for now they’re stuck in a sort of uncanny valley of intelligence, reassembling normal, everyday objects into increasingly creepy combinations. First came the revelations of Google’s DeepDream technology, which, in learning to “see” objects, “saw” creepy multi-eyed organisms everywhere, turning the world into a half-sentient dog-like mess.
Now, researchers in Toronto have used a technology called “neural karaoke” to teach a computer to write a song after looking at a photo, and the little carol it penned after viewing a festive Christmas tree is an absolutely horrifying display of what these things think of us.
Big corporations prefer robots to human employees.
It’s a sign of things to come.
In the last five years, online shopping has produced tens of thousands of new warehouse jobs in California, many of them in Riverside and San Bernardino counties. The bulk of them paid blue-collar workers decent wages to do menial tasks – putting things in boxes and sending them out to the world.
But automated machines and software have been taking up more and more space in the region’s warehouses, and taking over jobs that were once done by humans. Today, fewer jobs are being added, though some of them pay more.
Step inside the portal and everything is white, calm, silent: this is where researchers are helping craft the future of virtual reality. I speak out loud, and my voice echoes around the empty space. In place of the clutter on the outside, each panel is unadorned, save for a series of small black spots: cameras recording your every move. There are 480 VGA cameras and 30 HD cameras, as well as 10 RGB-D depth sensors borrowed from Xbox gaming consoles. The massive collection of recording apparatus is synced together, and its collective output is combined into a single, digital file. One minute of recording amounts to 600GB of data.
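To get a feel for where a figure like 600GB per minute comes from, here is a back-of-envelope sketch. The frame rate and color depth are assumptions, not details from the lab: it counts only the 480 VGA cameras, at an assumed 25 frames per second of uncompressed 24-bit RGB video.

```python
# Back-of-envelope estimate of the Panoptic Studio's raw data rate.
# Assumptions (not stated in the article): 24-bit RGB frames, VGA
# cameras at 25 fps; the 30 HD cameras and 10 depth sensors are
# ignored for simplicity.

VGA_CAMERAS = 480
WIDTH, HEIGHT = 640, 480      # VGA resolution
BYTES_PER_PIXEL = 3           # 24-bit RGB (assumed)
FPS = 25                      # assumed frame rate

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bytes_per_second = bytes_per_frame * FPS * VGA_CAMERAS
gib_per_minute = bytes_per_second * 60 / 2**30

print(f"~{gib_per_minute:.0f} GiB per minute")  # ~618 GiB
```

Under these assumptions, the VGA cameras alone generate roughly 618 GiB of raw video per minute, the same order of magnitude as the 600GB figure quoted above.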
The hundreds of cameras record people talking, bartering, and playing games. Imagine the motion-capture systems used by Hollywood filmmakers, but on steroids. The footage captures a stunningly accurate three-dimensional representation of people’s bodies in motion, from the bend of an elbow to the wrinkle of a brow. The lab is trying to map the language of our bodies, the signals and social cues we send one another with our hands, posture, and gaze. It is building a database that aims to decipher the constant, unspoken communication we all use without thinking, what the early 20th century anthropologist Edward Sapir once called an “elaborate code that is written nowhere, known to no one, and understood by all.”
The original goal of the Panoptic Studio was to use this understanding of body language to improve the way robots relate to human beings, to make them more natural partners at work or in play. But the research being done here has recently found another purpose. What works for making robots more lifelike and social could also be applied to virtual characters. That’s why this basement lab caught the attention of one of the biggest players in virtual reality: Facebook. In April 2015, the Silicon Valley giant hired Yaser Sheikh, an associate professor at Carnegie Mellon and director of the Panoptic Studio, to assist in research to improve social interaction in VR.