
In organisms, fluid is what binds the organs and the musculoskeletal system as a whole. For example, hemolymph, a blood-like fluid in a spider’s body, enables muscle activation and exoskeleton flexibility. It was the cucumber spider inhabiting Estonia that inspired scientists to create a complex robotic leg, where soft and rigid parts are made to work together and are connected by a liquid.

According to Indrek Must, Associate Professor of Soft Robotics, the designed soft robot is modeled on a living counterpart. “Broadly speaking, our goal is to build systems from both natural and artificial materials that are as effective as in wildlife. The robotic leg could touch delicate objects and move in the same complex environments as a living spider,” he explains.

In a study published in the journal Advanced Functional Materials, the researchers show how a robotic foot touches a primrose stamen, a spider web, and a pollen grain. This demonstrates the soft robot’s ability to interact with very small and delicate structures without damaging them.

But out of everything Meta announced, one particular demo blew my mind. Meta AI comes with its own image-generation tool called Imagine, which is available in beta to some WhatsApp and web users. The new Meta AI feature can do something OpenAI’s ChatGPT can’t: it creates images instantly, with no waiting.

This is the second time an AI product has blown my mind this week. Earlier, I showed you Microsoft’s VASA-1 tool, which generates talking video clips out of a portrait image and a voice recording. VASA-1 isn’t made for the public though, and we might never get access to this particular AI. Anyone could create misleading fakes with it, so Microsoft is only showing off a proof of concept.


Sony has shown off its new surgical robot doing some super-precise work sewing up a tiny slit in a corn kernel. It’s the first machine of its kind that auto-switches between its different tools, and has successfully been tested in animal surgery.

It’s designed to help in super-microsurgery, a highly specialized field in which surgeons operate on extremely small blood vessels and nerves, with diameters well under 1 mm (0.04 in). As you might imagine, this kind of thing requires incredibly steady hands, and specialists in this field often do their work while looking through a microscope.

Thus, it’s an ideal place for some robotic assistance, and there are a number of surgical robots already in clinical use from companies like Intuitive Surgical, Stryker and others. We’re not talking fully autonomous AI-powered robot surgeons here, we’re talking teleoperation tools that allow surgeons to magnify their vision while shrinking their hand motions.
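The core idea behind these teleoperation tools can be sketched in a few lines: the surgeon's hand displacement is scaled down before it is applied to the instrument tip, so a centimeter of hand travel becomes a fraction of a millimeter at the tool. The function name and the scale factor below are illustrative assumptions, not any vendor's actual control law.

```python
def scale_motion(hand_delta_mm, scale=0.01):
    """Map a surgeon's hand displacement (x, y, z in mm) to a
    tool-tip displacement by a fixed down-scaling factor.

    A scale of 0.01 means a 10 mm hand motion produces a 0.1 mm
    tool motion -- fine enough for sub-millimeter vessels.
    """
    return tuple(axis * scale for axis in hand_delta_mm)

# Example: a (10, -5, 2) mm hand motion shrinks to sub-millimeter tool motion.
print(scale_motion((10.0, -5.0, 2.0)))
```

Real systems add tremor filtering and a clutch to reposition the hand without moving the tool, but uniform motion scaling is the piece that lets human-scale dexterity reach sub-millimeter targets.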

From Stanford: TRANSIC, Sim-to-Real Policy Transfer by Learning from Online Correction.

Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.


“This is a really nice way of incorporating something you know about your physical system deep inside your machine-learning scheme. It goes far beyond just performing feature engineering on your data samples or simple inductive biases,” Schäfer says.

This generative classifier can determine what phase the system is in given some parameter, like temperature or pressure. And because the researchers directly approximate the probability distributions underlying measurements from the physical system, the classifier has system knowledge.

This enables their method to perform better than other machine-learning techniques. And because it can work automatically without the need for extensive training, their approach significantly enhances the computational efficiency of identifying phase transitions.
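The generative-classifier idea described above can be sketched as follows: fit a simple probability model to measurements taken deep inside each known phase, then label a new measurement by which phase's model assigns it the higher likelihood. The Gaussian models, phase names, and toy data below are illustrative assumptions, not the researchers' actual pipeline.

```python
import numpy as np

def fit_gaussian(samples):
    """Estimate mean and variance of measurements from one phase."""
    return float(np.mean(samples)), float(np.var(samples)) + 1e-12

def log_likelihood(x, mean, var):
    """Log-density of measurement x under a fitted Gaussian phase model."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_phase(x, models):
    """Assign x to the phase whose generative model best explains it."""
    scores = {phase: log_likelihood(x, m, v) for phase, (m, v) in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Toy "measurements", e.g. an order parameter sampled deep in each phase.
models = {
    "ordered": fit_gaussian(rng.normal(0.9, 0.05, 500)),
    "disordered": fit_gaussian(rng.normal(0.0, 0.2, 500)),
}
print(classify_phase(0.85, models))  # a measurement near the ordered phase
```

Sweeping the classifier's decision across a control parameter such as temperature would then locate the boundary where the predicted label flips, which is how a generative classifier can flag a phase transition without a separately trained discriminative model.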