
Why (most) future robots won’t look like robots

A future robot’s body could combine soft actuators and stiff structure, with distributed computation throughout — an example of the new “material robotics.” (credit: Nikolaus Correll/University of Colorado)

Future robots won’t be limited to humanoid form (like Boston Dynamics’ formidable backflipping Atlas). They’ll be invisibly embedded everywhere in common objects.

Think of a shoe that can intelligently support your gait, change stiffness as you run or walk, and adapt to different surfaces, or even help you do backflips.
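As a rough illustration of what such an adaptive shoe’s control software might do, here is a minimal Python sketch. Every interface in it (the accelerometer reader, the stiffness actuator, the thresholds) is hypothetical and invented for this example; real material-robotics hardware would expose very different interfaces.

```python
# Hypothetical sketch of a gait-adaptive shoe controller.
# All sensor/actuator functions are invented stand-ins, not a real product API.
import random
import time

GRAVITY = 9.81

def read_accelerometer():
    """Stub: return vertical acceleration in m/s^2 (simulated here with noise)."""
    return random.uniform(-3 * GRAVITY, 3 * GRAVITY)

def set_sole_stiffness(level):
    """Stub: command a variable-stiffness actuator (0.0 = soft .. 1.0 = stiff)."""
    print(f"sole stiffness -> {level:.1f}")

def classify_gait(samples):
    """Crudely distinguish running from walking by peak impact acceleration."""
    peak = max(abs(a) for a in samples)
    return "running" if peak > 2.5 * GRAVITY else "walking"

def control_loop(cycles=5):
    for _ in range(cycles):
        window = [read_accelerometer() for _ in range(50)]  # ~0.5 s of samples
        gait = classify_gait(window)
        # Stiffer sole for running (energy return), softer for walking (comfort).
        set_sole_stiffness(0.9 if gait == "running" else 0.4)
        time.sleep(0.5)

if __name__ == "__main__":
    control_loop()
```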

Software enables robots to be controlled in virtual reality

Even as autonomous robots get better at doing things on their own, there will still be plenty of circumstances where humans need to step in and take control. New software developed by Brown University computer scientists enables users to control robots remotely using virtual reality, immersing them in a robot’s surroundings despite being miles away physically.

The software connects a robot’s arms and grippers as well as its onboard cameras and sensors to off-the-shelf virtual reality hardware via the internet. Using handheld controllers, users can control the position of the robot’s arms to perform intricate manipulation tasks just by moving their own arms. Users can step into the robot’s metal skin and get a first-person view of the environment, or can walk around the robot to survey the scene in the third person—whichever is easier for accomplishing the task at hand. The data transferred between the robot and the virtual reality unit is compact enough to be sent over the internet with minimal lag, making it possible for users to guide robots from great distances.

“We think this could be useful in any situation where we need some deft manipulation to be done, but where people shouldn’t be,” said David Whitney, a graduate student at Brown who co-led the development of the system. “Three examples we were thinking of specifically were in defusing bombs, working inside a damaged nuclear facility or operating the robotic arm on the International Space Station.”
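In very reduced form, the pose-streaming side of such a teleoperation link might look like the Python sketch below. The message format, the `get_vr_pose` hook, and the length-prefixed framing are invented for illustration; this is not the Brown team’s actual software.

```python
# Simplified sketch of the teleoperation data path described above:
# controller poses stream out to the robot over a TCP connection.
import json
import socket

def pack_controller_pose(position, orientation):
    """Encode a 6-DOF controller pose as a compact JSON message."""
    return json.dumps({"pos": position, "quat": orientation}).encode()

def teleop_client(robot_host, robot_port, get_vr_pose):
    """Stream the user's hand pose to the robot; get_vr_pose would come
    from the VR runtime and return (position, quaternion) tuples."""
    with socket.create_connection((robot_host, robot_port)) as sock:
        while True:
            pos, quat = get_vr_pose()
            msg = pack_controller_pose(pos, quat)
            # Length-prefix each message; small payloads keep latency low,
            # which is what makes long-distance operation workable.
            sock.sendall(len(msg).to_bytes(4, "big") + msg)
```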

Eight planets in Kepler-90 system found using machine learning

Dec. 14 (UPI) — NASA scientists have found a planetary system with as many planets as our own.

“Scientists have found for the first time eight planets in a distant planetary system,” Paul Hertz, astrophysics division director at NASA Headquarters, said during a teleconference on Thursday that was live-streamed on NASA TV.

Astronomers were already aware of seven planets orbiting the star Kepler-90. The discovery of the eighth, Kepler-90i, was made possible by machine learning.
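The actual discovery used a deep neural network trained on Kepler light curves; the toy Python sketch below only illustrates the underlying idea, training a classifier to spot the small dip in brightness that a transiting planet produces. All data here is synthetic and the model is deliberately simple.

```python
# Toy transit detector: logistic regression on synthetic light curves.
# The real Kepler-90i result used a deep neural network on real data;
# this only demonstrates the concept of learning a transit's dip shape.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_light_curve(has_transit, n=200):
    """Flat stellar brightness plus noise; a transit adds a small central dip."""
    flux = 1.0 + rng.normal(0, 0.001, n)
    if has_transit:
        flux[90:110] -= 0.01  # ~1% dip while the planet crosses the star
    return flux

# Labeled training set: alternating transit / no-transit curves.
X = np.array([synthetic_light_curve(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Standardize each time step so the tiny dip becomes a usable feature.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd

# Logistic regression by gradient descent; the weights learn the dip's shape.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    grad = p - y
    w -= 0.1 * Xs.T @ grad / len(y)
    b -= 0.1 * grad.mean()

test = (synthetic_light_curve(True) - mu) / sd
print("transit probability: %.2f" % (1 / (1 + np.exp(-(test @ w + b)))))
```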

India’s grasp on IT jobs is loosening. Is artificial intelligence to blame?

When Kumar lost his job, he became part of a wave of layoffs washing through the Indian IT industry—a term that includes, in its vastness, call centers, engineering services, business process outsourcing firms, and infrastructure management and software companies. The recent layoffs are part of the industry’s most significant period of churn since it began to boom two decades ago. Companies don’t necessarily attribute these layoffs directly to automation, but at the same time, they constantly identify automation as the spark for huge changes in the industry. Bots, machine learning, and algorithms that robotically execute processes are rendering old skills redundant, recasting the idea of work and making a smaller labor force seem likely.


Technology outsourcing has been India’s only reliable job creator in the past 30 years. Now artificial intelligence threatens to wipe out those gains.

About ispace

ispace is a private lunar robotic exploration company developing micro-robotic technology to provide low-cost, frequent transportation services to and on the Moon, and to conduct lunar surface exploration that maps, processes, and delivers resources to our customers in cislunar space.

Artificially intelligent robots could soon gain consciousness

From babysitting children to beating the world champion at Go, robots are slowly but surely developing more and more advanced capabilities.

And many scientists, including Professor Stephen Hawking, suggest it may only be a matter of time before machines gain consciousness.

In a new article for The Conversation, Subhash Kak, Regents Professor of Electrical and Computer Engineering at Oklahoma State University, explains the possible consequences if artificial intelligence gains consciousness.

AI is now so complex its creators can’t trust why it makes decisions

Artificial intelligence is seeping into every nook and cranny of modern life. AI might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.

But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: the programmers who built it don’t know why it makes one decision over another.

Modern artificial intelligence is still new. Big tech companies have only ramped up investment and research in the last five years, after a decades-old theory was finally shown to work in 2012. Inspired by the human brain, an artificial neural network relies on layers of thousands to millions of tiny connections between “neurons,” little clusters of mathematical computation, like the connections between neurons in the brain. But that software architecture comes with a trade-off: because the changes across those millions of connections are so complex and minute, researchers can’t determine exactly what is happening. They just get an output that works.
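To make that scale concrete, here is a tiny numpy sketch of such a network. The layer sizes are arbitrary examples; the point is that even a toy model has hundreds of thousands of coupled weights, none of which individually explains a decision.

```python
# Minimal illustration of why such models are opaque: even a toy two-layer
# network has ~200,000 coupled weights, and no single weight corresponds
# to a human-readable rule.
import numpy as np

rng = np.random.default_rng(0)

# A small multilayer network: 784 inputs -> 256 hidden units -> 10 outputs.
W1 = rng.normal(0, 0.01, (784, 256))
W2 = rng.normal(0, 0.01, (256, 10))

def forward(x):
    """ReLU hidden layer followed by a linear readout."""
    h = np.maximum(0, x @ W1)  # each hidden unit mixes all 784 inputs
    return h @ W2              # each score mixes all 256 hidden units

print("trainable weights:", W1.size + W2.size)  # 203,264 in this toy net
x = rng.normal(size=784)
print("class scores:", forward(x).round(3))
# Inspecting any one weight (say W1[3, 7]) tells you almost nothing about
# why the network favored one class; the decision is spread across them all.
```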
