
A team of researchers led by the University of California San Diego has developed a soft, stretchy electronic device capable of simulating the feeling of pressure or vibration when worn on the skin. This device, reported in a paper published in Science Robotics (“Conductive block copolymer elastomers and psychophysical thresholding for accurate haptic effects”), represents a step towards creating haptic technologies that can reproduce a more varied and realistic range of touch sensations.

The device consists of a soft, stretchable electrode attached to a silicone patch. It can be worn like a sticker on either the fingertip or forearm. The electrode, in direct contact with the skin, is connected to an external power source via wires. By sending a mild electrical current through the skin, the device can produce sensations of either pressure or vibration depending on the signal’s frequency.

Soft, stretchable electrode recreates sensations of vibration or pressure on the skin through electrical stimulation. (Image: Liezel Labios, UC San Diego Jacobs School of Engineering)
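The idea of one stimulation channel producing two distinct sensations via frequency can be sketched as follows. This is purely illustrative: the frequencies, amplitude, and sample rate here are hypothetical placeholders, not values reported in the paper.

```python
import numpy as np

def stimulation_waveform(frequency_hz, duration_s=1.0, amplitude_ma=1.0,
                         sample_rate=10_000):
    """Generate a sinusoidal stimulation current (illustrative sketch only).

    Returns an array of current values in milliamps sampled over the
    given duration.
    """
    t = np.linspace(0, duration_s, int(sample_rate * duration_s),
                    endpoint=False)
    return amplitude_ma * np.sin(2 * np.pi * frequency_hz * t)

# Hypothetical frequencies: a low-frequency signal for a pressure-like
# sensation, a high-frequency signal for a vibration-like sensation.
pressure_signal = stimulation_waveform(frequency_hz=5)
vibration_signal = stimulation_waveform(frequency_hz=200)
```

The same electrode and amplitude are used in both cases; only the drive frequency changes, which mirrors the article's description of pressure versus vibration depending on the signal's frequency.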

MIT researchers developed a technique that uses generative AI models to combine robotics training data across domains, modalities, and tasks. From several disparate datasets, the method produces a combined strategy that enables a robot to learn to perform new tasks in unseen environments.

Let’s say you want to train a robot so it understands how to use tools and can then quickly learn to make repairs around your house with a hammer, wrench, and screwdriver. To do that, you would need an enormous amount of data demonstrating tool use.

Existing robotic datasets vary widely in modality — some include color images while others are composed of tactile imprints, for instance. Data could also be collected in different domains, like simulation or human demos. And each dataset may capture a unique task and environment.
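One way to picture the pooling problem those differences create is a per-modality encoder that maps every observation into a shared representation before training. This is a minimal sketch under stated assumptions: the `Episode` structure, the random-projection "encoder", and the 32-dimensional shared space are all hypothetical stand-ins, not the MIT method.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Episode:
    observation: np.ndarray  # modality-specific (e.g. image or tactile array)
    action: np.ndarray
    modality: str            # e.g. "rgb", "tactile"
    domain: str              # e.g. "simulation", "human_demo"

def encode(obs, modality, dim=32):
    """Hypothetical per-modality encoder: project any observation into a
    shared embedding space (a real system would learn this mapping)."""
    rng = np.random.default_rng(abs(hash(modality)) % (2**32))
    proj = rng.standard_normal((dim, obs.size))
    return proj @ obs.ravel()

def unify(datasets):
    """Pool episodes from heterogeneous datasets into one training set of
    (shared-space observation, action) pairs."""
    pooled = []
    for ds in datasets:
        for ep in ds:
            pooled.append((encode(ep.observation, ep.modality), ep.action))
    return pooled
```

Once every episode lives in the same space, datasets collected with different sensors or in different domains can feed a single learner, which is the intuition behind combining data across modalities and tasks.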

The next update to Tesla’s Full Self-Driving (Supervised) could arrive this weekend, as the long-awaited v12.4.2 is scheduled to enter an internal testing phase tomorrow.

Tesla first released version 12 of FSD in March; it was significant as the first version to rely on end-to-end neural nets instead of over 300,000 lines of hand-written code. With the switch, CEO Elon Musk said that each revision should result in significant improvements, claiming that v12.4 should see a 5 to 10 times improvement in miles per intervention.

However, v12.4 was only released to a limited number of testers earlier this month, more than four weeks after Musk initially said it would be available, and it received a lukewarm response, with a number of bugs and erratic driving behaviors reported. As a result, it has yet to go to a wide release.


Geoffrey Hinton recently ignited a heated debate with an interview in which he says he is very worried that we will soon lose control over superintelligent AI. Meta’s AI chief Yann LeCun disagrees. I think they’re both wrong. Let’s have a look.

