In an office at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a soft robotic hand carefully curls its fingers to grasp a small object. The intriguing part isn’t the mechanical design or embedded sensors; in fact, the hand contains none. Instead, the entire system relies on a single camera that watches the robot’s movements and uses that visual data to control it.
This capability comes from a new system developed by CSAIL scientists that offers a different perspective on robotic control. Rather than relying on hand-designed models or complex sensor arrays, the system lets robots learn how their bodies respond to control commands through vision alone. The approach, called Neural Jacobian Fields (NJF), gives robots a kind of bodily self-awareness.
A paper about the work was published in Nature.
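The article doesn't spell out the mechanics, but the name hints at the core construct: a learned field that, for any point on the robot's body, predicts a Jacobian relating actuation commands to that point's motion. The PyTorch sketch below illustrates that idea only; the network architecture, the command dimension, and every variable name are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Jacobian field: a small MLP maps a 3D point on
# the robot's body to a Jacobian that relates actuation commands to that
# point's 3D velocity. All sizes and names here are assumptions.

import torch
import torch.nn as nn

N_COMMANDS = 4  # hypothetical number of actuation channels

class JacobianField(nn.Module):
    def __init__(self, n_commands: int = N_COMMANDS):
        super().__init__()
        self.n_commands = n_commands
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3 * n_commands),  # one 3 x N Jacobian per point
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, 3) -> Jacobians: (B, 3, n_commands)
        return self.mlp(points).view(-1, 3, self.n_commands)

field = JacobianField()
points = torch.randn(8, 3)         # sampled 3D points on the robot
delta_u = torch.randn(N_COMMANDS)  # a small change in the command vector

# Predicted motion of each point under the command change: dx = J(x) @ du.
jacobians = field(points)          # (8, 3, N_COMMANDS)
dx = jacobians @ delta_u           # (8, 3)

# Training (not shown) would project the predicted motion into the camera
# and compare it against point motion observed in video, so the field is
# learned from vision alone, with no embedded sensors.
```

The appeal of this formulation is that the only supervision signal is how pixels move when commands change, which is exactly what a single external camera can provide.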