
Can virtual reality (VR) be tailored to let users explore larger areas and “walk” around their environment? A recent study published in IEEE Transactions on Visualization and Computer Graphics addresses this question: an international team of researchers has developed a VR system called RedirectedDoors+ that lets users expand their virtual environments beyond real-world physical boundaries such as walls and doors. Beyond enlarging VR environments, the approach could drastically reduce the physical space typically required for VR experiences.

“Our system, which built upon an existing visuo-haptic door-opening redirection technique, allows participants to subtly manipulate the walking direction while opening doors in VR, guiding them away from real walls,” said Dr. Kazuyuki Fujita, who is an assistant professor in the Research Institute of Electrical Communication (RIEC) at Tohoku University and a co-author on the study. “At the same time, our system reproduces the realistic haptics of touching a doorknob, enhancing the quality of the experience.”
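The core idea of door-opening redirection is that while the user's attention is occupied swinging a door, a small extra rotation of the virtual scene can be injected without being noticed, gradually steering the user's real-world heading away from walls. A minimal sketch of that per-frame gain logic is below; the function name, the detection threshold, and the proportional-to-door-swing cap are all illustrative assumptions, not the actual RedirectedDoors+ implementation.

```python
import math

# Assumed perceptual limit: max unnoticed extra scene yaw (degrees)
# injected per degree of door swing. The value is illustrative only.
DETECTION_THRESHOLD_DEG = 0.5

def redirect_yaw(door_swing_deg: float, desired_offset_deg: float) -> float:
    """Return the extra scene yaw (degrees) to inject during this door swing.

    The injected rotation is capped in proportion to how far the door
    has swung, keeping it below the assumed detection threshold while
    steering the user away from real-world obstacles.
    """
    max_injectable = door_swing_deg * DETECTION_THRESHOLD_DEG
    # Inject no more than the remaining desired offset, preserving its sign.
    return math.copysign(min(abs(desired_offset_deg), max_injectable), desired_offset_deg)
```

For example, a 90-degree door swing would allow up to 45 degrees of injected yaw under this assumed threshold, so a desired 30-degree correction could be applied in full, while a 60-degree correction would be clamped to 45.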

A team of researchers from Japan has made it possible for people to converse with one another through a robotic avatar, and they claim the experience feels just like talking with a real person.

The team built a half-humanoid robot called ‘Yui’, which is controlled by an operator wearing virtual reality goggles and a microphone headset. These devices let the operator see and hear what Yui sees and hears, while Yui mirrors the operator’s facial expressions and voice.

Apple Vision Pro may gain the ability to play SteamVR games, thanks to developers who have begun porting the open-source ALVR software.

ALVR is open-source software that streams VR games from a PC to virtual reality headsets. The port would let users play SteamVR games on Apple Vision Pro’s Micro-OLED displays.

However, interacting with these games requires controllers that track their own position (self-tracking), rather than controllers tracked by the headset.

Summary: Researchers unveiled a pioneering technology capable of real-time human emotion recognition, promising transformative applications in wearable devices and digital services.

The system, known as the personalized skin-integrated facial interface (PSiFI), combines verbal and non-verbal cues through a self-powered, stretchable sensor, efficiently processing data for wireless communication.

This breakthrough, supported by machine learning, accurately identifies emotions even under mask-wearing conditions and has been applied in a VR “digital concierge” scenario, showcasing its potential to personalize user experiences in smart environments. The development is a significant stride towards enhancing human-machine interactions by integrating complex emotional data.
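Conceptually, a system like this fuses features from two modalities (e.g., speech-related vibration signals and facial-strain signals) into one vector before classification. The sketch below illustrates that fuse-then-classify pattern with a toy nearest-centroid classifier; the feature values, emotion labels, and classifier choice are assumptions for illustration, not the PSiFI system's actual method.

```python
# Toy fusion-then-classification pipeline: combine verbal (vibration) and
# non-verbal (strain) feature vectors, then assign the nearest emotion
# centroid by squared Euclidean distance. All data here is illustrative.

def fuse(verbal: list[float], nonverbal: list[float]) -> list[float]:
    """Concatenate the two modality feature vectors into one sample."""
    return list(verbal) + list(nonverbal)

def nearest_centroid(sample: list[float], centroids: dict[str, list[float]]) -> str:
    """Return the emotion label whose centroid is closest to the sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(sample, centroids[label]))

# Hypothetical per-emotion centroids over four fused features.
centroids = {
    "happy":   [0.9, 0.2, 0.8, 0.1],
    "neutral": [0.1, 0.1, 0.2, 0.2],
}

sample = fuse([0.8, 0.3], [0.7, 0.2])
print(nearest_centroid(sample, centroids))  # "happy" under these toy centroids
```

A real system would learn the decision boundaries from labeled sensor data rather than use hand-picked centroids, but the modality-fusion step is the same in spirit.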

The technology can reconstruct a hidden scene in just minutes using advanced mathematical algorithms.


Potential use case scenarios

Law enforcement agencies could use the technology to gather critical information about a crime scene without disturbing the evidence. This could be especially useful in cases where the scene is dangerous or difficult to access. For example, the technology could be used to reconstruct the scene of a shooting or a hostage situation from a safe distance.

The technology could also have applications in the entertainment industry. For instance, it could be used to create immersive gaming experiences that let players explore virtual environments in 3D, or to produce more realistic special effects in film.