
From virtual reality to rehabilitation and communication, haptic technology has revolutionized the way humans interact with the digital world. While early haptic devices focused on single-sensory cues like vibration-based notifications, modern advancements have paved the way for multisensory haptic devices that integrate various forms of touch-based feedback, including vibration, skin stretch, pressure, and temperature.

Recently, a team of experts, including Rice University’s Marcia O’Malley and Daniel Preston, graduate student Joshua Fleck, alumni Zane Zook ‘23 and Janelle Clark ‘22, and other collaborators, published an in-depth review in Nature Reviews Bioengineering analyzing the current state of wearable multisensory haptic technology, outlining its challenges, advancements, and real-world applications.

Haptic devices, which enable communication through touch, have evolved significantly since their introduction in the 1960s. Initially, they relied on rigid, grounded mechanisms acting as user interfaces, generating force-based feedback from virtual environments.

Imagine an automated delivery vehicle rushing to complete a grocery drop-off while you are hurrying to meet friends for a long-awaited dinner. At a busy intersection, you both arrive at the same time. Do you slow down to give it space as it maneuvers around a corner? Or do you expect it to stop and let you pass, even if normal traffic etiquette suggests it should go first?

“As autonomous driving becomes a reality, these everyday encounters will define how we share the road with intelligent machines,” says Dr. Jurgis Karpus from the Chair of Philosophy of Mind at LMU. He explains that the arrival of fully automated self-driving cars signals a shift from us merely using AI tools—like Google Translate or ChatGPT—to actively interacting with them. The key difference? In busy traffic, our interests will not always align with those of the self-driving cars we encounter. We have to interact with them, even if we ourselves are not using them.

In a study published recently in the journal Scientific Reports, researchers from LMU Munich and Waseda University in Tokyo found that people are far more likely to take advantage of cooperative artificial agents than of similarly cooperative fellow humans. “After all, cutting off a robot in traffic doesn’t hurt its feelings,” says Karpus, lead author of the study.
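The incentive structure in such encounters can be sketched, purely illustratively, as a "chicken"-style game. The payoff numbers below are invented for this sketch and are not taken from the study:

```python
# Toy "chicken"-style payoff model of an intersection encounter.
# Payoffs are invented for illustration; this is not the study's design.

PAYOFFS = {  # (pedestrian payoff, vehicle payoff) for each joint action
    ("go", "go"): (-10, -10),    # conflict/collision: worst for both
    ("go", "yield"): (3, 1),     # pedestrian exploits the yielding vehicle
    ("yield", "go"): (1, 3),     # pedestrian waits, vehicle proceeds
    ("yield", "yield"): (2, 2),  # mutual caution: safe but slow
}

def best_response(vehicle_action: str) -> str:
    """Pedestrian's payoff-maximizing move, given the vehicle's action."""
    return max(("go", "yield"), key=lambda a: PAYOFFS[(a, vehicle_action)][0])

# If an automated vehicle is known to reliably yield, exploiting it pays best;
# against a driver who might go, yielding is the safer best response.
print(best_response("yield"))  # -> go
print(best_response("go"))     # -> yield
```

The point of the sketch: once one player is known to always cooperate, defection becomes the rational move, which matches the exploitation pattern the researchers observed toward artificial agents.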

Artificial Intelligence (AI) can perform complex calculations and analyze data faster than any human, but to do so requires enormous amounts of energy. The human brain is also an incredibly powerful computer, yet it consumes very little energy.

As AI applications increasingly expand, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.

Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M’s College of Engineering, is on a team of researchers that developed “Super-Turing AI,” which operates more like the human brain. This new AI integrates certain processes instead of separating them and then migrating huge amounts of data like current systems do.

A smaller, lighter and more energy-efficient computer, demonstrated at the University of Michigan, could help save weight and power for autonomous drones and rovers, with implications for autonomous vehicles more broadly.

The autonomous controller has among the lowest power requirements reported, according to the study published in Science Advances. It operates at a mere 12.5 microwatts—in the ballpark of a pacemaker.

In testing, a rolling robot using the controller was able to pursue a target zig-zagging down a hallway with the same speed and accuracy as with a conventional digital controller. In a second trial, with a lever-arm that automatically repositioned itself, the new controller did just as well.
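The paper describes an analog controller; as a rough illustration of the kind of feedback loop involved, here is a minimal proportional controller tracking a zig-zag reference. The gain, step count, and reference signal are invented for this sketch and are not from the study:

```python
# Minimal proportional-control sketch: follow a zig-zagging target.
# All parameters are illustrative; the actual device is an analog circuit.

def simulate(kp=0.5, steps=60):
    """Each step, move a fraction kp of the remaining error toward the target."""
    pos = 0.0
    errors = []
    for t in range(steps):
        target = 1.0 if (t // 15) % 2 == 0 else -1.0  # zig-zag reference
        error = target - pos
        errors.append(abs(error))  # record error before correcting
        pos += kp * error          # proportional correction
    return errors

errs = simulate()
# Error starts at 1.0 and shrinks geometrically within each zig-zag segment.
print(f"initial error: {errs[0]:.3f}, final error: {errs[-1]:.6f}")
```

Each reversal of the target re-introduces error, which the loop then drives back down, which is the essence of pursuing a moving target with feedback, whether the controller is digital or analog.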

When it comes to haptic feedback, most technologies are limited to simple vibrations. But our skin is loaded with tiny sensors that detect pressure, vibration, stretching and more. Now, Northwestern University engineers have unveiled a new technology that creates precise movements to mimic these complex sensations.

The study, “Full freedom-of-motion actuators as advanced haptic interfaces,” is published in the journal Science.

While sitting on the skin, the compact, lightweight, wireless device applies force in any direction to generate a variety of sensations, including vibrations, stretching, pressure, sliding and twisting. The device can also combine sensations and operate fast or slowly to simulate a more nuanced, realistic sense of touch.
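As a hypothetical sketch of how such multidirectional cues might be composed in software, the snippet below superposes a constant skin-stretch force with a vibration component into a single 2D force command. All function names and numbers are illustrative, not from the paper:

```python
import math

# Hypothetical composition of haptic cues as superposed force vectors:
# a constant tangential "stretch" plus a high-frequency "vibration".
# Names and magnitudes are invented for illustration.

def force_at(t, stretch=(0.3, 0.0), vib_amp=0.1, vib_hz=80.0):
    """Return a 2D force (fx, fy): constant skin-stretch plus vibration."""
    vib = vib_amp * math.sin(2 * math.pi * vib_hz * t)
    return (stretch[0] + vib, stretch[1])

fx, fy = force_at(0.0)
print(fx, fy)  # -> 0.3 0.0  (sin(0) = 0, so only the stretch component)
```

Combining cues this way, and varying their timing, is one simple software-side picture of how a single actuator could render stretch, vibration, and their blends.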

Hydrogen is increasingly gaining attention as a promising energy source for a cleaner, more sustainable future. Using hydrogen to meet the energy demands for large-scale applications such as utility infrastructure will require transporting large volumes via existing pipelines designed for natural gas.

But there’s a catch. Hydrogen can weaken the steel that these pipelines are made of. When hydrogen atoms enter the steel, they diffuse into its microstructure and can cause the metal to become brittle, making it more susceptible to cracking. Hydrogen can be introduced into the steel during manufacturing, or while the pipeline is in service transporting oil and gas.

To better understand this problem, researcher Tonye Jack used the Canadian Light Source (CLS) at the University of Saskatchewan (USask) to capture a 3D view of the cracks formed in steels. Researchers have previously relied on two-dimensional imaging techniques, which don’t provide the same rich detail made possible with synchrotron radiation.

Chibueze Amanchukwu wants to fix batteries that haven’t been built yet. Demand for batteries is on the rise for EVs and the grid-level energy storage needed to transition Earth off fossil fuels. But more batteries will mean more of a dangerous suite of materials used to build them: PFAS, also known as “forever chemicals.”

“To address our needs as a society for electric vehicles and energy storage, we are coming up with more of these chemicals,” said Amanchukwu, Neubauer Family Assistant Professor of Molecular Engineering in the UChicago Pritzker School of Molecular Engineering (UChicago PME). “You can see the dilemma.”

PFAS are a family of thousands of chemicals found in batteries but also everything from fast food wrappers and shampoo to firefighting foam and yoga pants. They keep scrambled eggs from sticking to pans and rain from soaking into jackets and paint, but the same water resistance that makes them useful also makes them difficult to remove when they get into the water supply. This earned them the nickname “forever chemicals.”

Large Language Models (LLMs) have rapidly become an integral part of our digital landscape, powering everything from chatbots to code generators. However, as these AI systems increasingly rely on proprietary, cloud-hosted models, concerns over user privacy and data security have escalated. How can we harness the power of AI without exposing sensitive data?

A recent study, “Entropy-Guided Attention for Private LLMs,” by Nandan Kumar Jha, a Ph.D. candidate at the NYU Center for Cybersecurity (CCS), and Brandon Reagen, Assistant Professor in the Department of Electrical and Computer Engineering and a member of CCS, introduces a novel approach to making AI more secure.

The paper was presented at the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI 25) in early March and is available on the arXiv preprint server.
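The paper’s method is not reproduced here, but the quantity its title points to is easy to illustrate: the Shannon entropy of a softmax attention distribution, which measures how diffuse or peaked a model’s attention is. The code below is a generic illustration of that measurement only:

```python
import numpy as np

# Shannon entropy of a softmax attention distribution (illustration only;
# this is the underlying quantity, not the paper's algorithm).

def attention_entropy(scores):
    """Entropy (in nats) of softmax(scores); higher means more diffuse attention."""
    scores = np.asarray(scores, dtype=float)
    p = np.exp(scores - scores.max())   # numerically stable softmax
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

print(attention_entropy([0.0, 0.0, 0.0, 0.0]))   # uniform -> ln(4) ≈ 1.386
print(attention_entropy([10.0, 0.0, 0.0, 0.0]))  # sharply peaked -> near 0
```

Uniform attention maximizes this entropy at ln(n) for n positions, while attention locked onto one token drives it toward zero; tracking such a statistic is one way to reason about attention behavior.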

Most computers run on microchips, but what if we’ve been overlooking a simpler, more elegant computational tool all this time? In fact, what if we were the computational tool?

As crazy as it sounds, a future in which humans are the ones doing the computing may be closer than we think. In an article published in IEEE Access, Yo Kobayashi from the Graduate School of Engineering Science at the University of Osaka demonstrates that living tissue can be used to process information and solve complex equations, exactly as a computer does.

This achievement is an example of the power of the computational framework known as reservoir computing, in which data are input into a complex “reservoir” that has the ability to encode rich patterns. A computational model then learns to convert these patterns into meaningful outputs via a neural network.
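A minimal sketch of the idea with a conventional random reservoir (an echo state network): only the linear readout is trained, while the reservoir itself is fixed. In the paper the reservoir is living tissue; everything below, including the delayed-recall task, is a generic textbook-style illustration:

```python
import numpy as np

# Echo state network sketch of reservoir computing (illustration only;
# the paper's reservoir is living tissue, not a random matrix).

rng = np.random.default_rng(0)
N = 100                                       # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))         # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N, N))            # fixed random recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max() # spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state history."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)  # nonlinear state update
        states.append(x)
    return np.array(states)

# Train only a linear readout: recall the input from 2 steps ago.
u = rng.uniform(-1, 1, 300)
X = run_reservoir(u)
target = np.roll(u, 2)
W_out, *_ = np.linalg.lstsq(X[50:], target[50:], rcond=None)  # skip warm-up
pred = X[50:] @ W_out
print("readout error:", float(np.mean((pred - target[50:]) ** 2)))
```

The reservoir is never trained; it only has to transform inputs richly enough that a simple linear readout can extract the answer, which is why unconventional substrates, from water tanks to living tissue, can serve as reservoirs.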

More than seven years ago, cybersecurity researchers were thoroughly rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities uncovered in the microprocessors found in virtually every computer on the planet.

Perhaps the scariest thing about these vulnerabilities is that they didn’t stem from typical software bugs or physical CPU problems, but from the actual processor architecture. These attacks changed our understanding of what can be trusted in a system, forcing security researchers to fundamentally reexamine where they put resources.

These attacks emerged from an optimization technique called “speculative execution,” which essentially gives the processor the ability to execute instructions ahead of time while it waits for memory, before discarding the results of any instructions that turn out not to be needed.
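A conceptual toy model of why this matters, written in Python (real speculation happens in hardware, and this is not an exploit): even when the architectural results of mispredicted work are discarded, side effects such as cache state can persist, and that residue is the foothold Spectre-style attacks use. `ToyCPU`, `MEMORY`, and all values here are invented for illustration:

```python
# Toy model of speculative execution and its cache side effects.
# Purely conceptual; real CPUs do this in hardware.

MEMORY = {}

class ToyCPU:
    def __init__(self):
        self.cache = set()  # addresses touched, even speculatively

    def load(self, addr):
        self.cache.add(addr)          # every load leaves a trace in the cache
        return MEMORY.get(addr, 0)

    def bounds_checked_read(self, data, i, predicted_in_bounds=True):
        # While the bounds check is "still resolving", the CPU may run the
        # body speculatively based on a branch prediction...
        if predicted_in_bounds:
            self.load(("data", i))    # speculative load, even if i is bad
        # ...then the check resolves, and out-of-bounds work is discarded:
        if i < len(data):
            return data[i]
        return None  # architecturally, nothing happened, but...

cpu = ToyCPU()
cpu.bounds_checked_read([1, 2, 3, 4], 99)  # out of bounds, returns None
# ...the cache still remembers the speculative out-of-bounds access:
print(("data", 99) in cpu.cache)  # -> True
```

The discarded read never reaches the program, yet the cache state differs depending on secret-dependent addresses; measuring such differences through timing is how the architectural/microarchitectural gap becomes a security hole.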