
Thanks to their genetic makeup, their ability to navigate mazes and their willingness to work for cheese, mice have long been a go-to model for behavioral and neurological studies.

In recent years, they have entered a new arena—virtual reality—and now Cornell researchers have built miniature VR headsets to immerse them more deeply in it.

The team’s MouseGoggles—yes, they look as cute as they sound—were created using low-cost, off-the-shelf components, such as smartwatch displays and tiny lenses, and offer visual stimulation over a wide field of view while tracking the mouse’s eye movements and changes in pupil size.
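To give a flavor of what the eye-tracking half of such a rig computes, here is a minimal, hypothetical sketch of pupil-diameter estimation from a single infrared eye-camera frame using OpenCV; the threshold value and file name are illustrative assumptions, not details of the MouseGoggles pipeline.

    # Hypothetical pupil-size estimation from one eye-camera frame.
    # Assumes an IR image where the pupil is the darkest blob; the
    # threshold (40) is an illustrative guess, not a MouseGoggles parameter.
    import cv2
    import numpy as np

    def pupil_diameter_px(frame_gray: np.ndarray) -> float:
        # Smooth to suppress sensor noise, then isolate dark pixels.
        blurred = cv2.GaussianBlur(frame_gray, (7, 7), 0)
        _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0.0
        # Treat the largest dark blob as the pupil and fit a circle to it.
        pupil = max(contours, key=cv2.contourArea)
        (_, _), radius = cv2.minEnclosingCircle(pupil)
        return 2.0 * radius

    frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
    print(f"pupil diameter: {pupil_diameter_px(frame):.1f} px")

Tracking the diameter frame over frame gives the pupil-size signal; eye movement would come from tracking the fitted circle's center the same way.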

We can expect to see more recommendations for VR in catastrophic injury cases.

Immersive virtual reality (IVR or VR) as a rehabilitation tool is evolving at pace and has far-reaching consequences that will increasingly be felt in the claims space.

Combined with AI-powered treatment planning and smart-home devices for daily therapy, innovative technologies are now evident in every aspect of rehabilitation.

Just as the metaverse industry finally began its breakthrough, ChatGPT’s launch in November 2022 set off a technological avalanche. Amid post-pandemic economic pressures, companies pivoted away from metaverse aspirations toward AI adoption, seeking immediate returns through automation and virtualization.

Apple’s 2023 entry into the space with the Vision Pro headset has similarly faced challenges. Priced at $3,499, with limited content and a sparse developer ecosystem, the device has struggled to find its market. Early projections suggest initial production runs of fewer than 400,000 units.

What if we’ve been looking for the metaverse in the wrong places? While Meta, HTC and Sony have so far struggled to establish their vision of a VR-first digital world, gaming platforms like Roblox quietly built what might be the actual metaverse.

Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.

While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.

MIT researchers explored the relationships and differences between the algorithms used to generate 2D images and 3D shapes, identifying the root cause of lower-quality 3D models. From there, they crafted a simple fix to Score Distillation, which enables the generation of sharp, high-quality 3D shapes that are closer in quality to the best model-generated 2D images.
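For readers who want the mechanics, below is a minimal PyTorch sketch of the baseline Score Distillation loss that this line of work builds on. Here `render` (a differentiable 3D renderer), `eps_model` (a frozen, pretrained 2D diffusion model), and the noise-schedule tensors are hypothetical stand-ins, and the MIT team's fix itself is not reproduced.

    # Minimal sketch of Score Distillation Sampling (SDS), the baseline
    # technique the MIT fix improves on. `render` and `eps_model` are
    # hypothetical stand-ins for a differentiable 3D renderer and a
    # pretrained 2D diffusion model.
    import torch

    def sds_step(theta, render, eps_model, text_emb, alphas, sigmas):
        image = render(theta)                    # differentiable render of the 3D scene
        t = torch.randint(0, len(alphas), (1,))  # random diffusion timestep
        noise = torch.randn_like(image)
        # Forward-diffuse the rendering to timestep t.
        noisy = alphas[t] * image + sigmas[t] * noise
        with torch.no_grad():
            noise_pred = eps_model(noisy, t, text_emb)  # frozen 2D model scores the image
        # SDS gradient: (predicted noise - true noise) pushed back through
        # the renderer; detach() keeps the diffusion model frozen.
        loss = ((noise_pred - noise).detach() * image).sum()
        loss.backward()  # accumulates d(loss)/d(theta) for an optimizer step

Repeating this step while an optimizer updates `theta` gradually shapes the 3D scene so its renderings look plausible to the 2D model; the blur the MIT work diagnoses arises in this averaging over random timesteps and noise.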


The long-held dream of tasting food through a screen is getting closer.


A team of biomedical engineers and virtual reality experts has developed a groundbreaking lollipop-shaped interface that simulates taste in virtual reality.

The Matrix is a groundbreaking AI model capable of generating infinite, high-quality video worlds in real time, offering unmatched interactivity and adaptability. Developed using advanced techniques like the Video Diffusion Transformer and Swin-DPM, it enables seamless, frame-level precision for creating dynamic, responsive simulations. This innovation surpasses traditional systems, making it a game-changer for gaming, autonomous vehicle testing, and virtual environments.

🔍 Key Topics Covered:
The Matrix AI model and its ability to generate infinite, interactive video worlds.
Real-time applications in gaming, autonomous simulations, and dynamic virtual environments.
Revolutionary AI techniques like Video Diffusion Transformer, Swin-DPM, and Interactive Modules.

🎥 What You’ll Learn:
How The Matrix AI redefines video generation with infinite-length, high-quality simulations.
The transformative impact of real-time interactivity and domain generalization in AI-driven worlds.
Why this breakthrough is a game-changer for industries like gaming, VR, and autonomous systems.
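For intuition about how a diffusion model can emit video of unbounded length frame by frame, here is a conceptual Python sketch of sliding-window denoising in the spirit of what the description above calls Swin-DPM. The `denoiser`, window size, and scheduling are all illustrative assumptions, not The Matrix’s actual implementation.

    # Conceptual sketch of sliding-window denoising for unbounded video.
    # Frames in the window carry increasing noise: position 0 is clean,
    # the last position is pure noise. `denoiser` is a hypothetical
    # stand-in; this is not The Matrix's published code.
    import torch

    def generate_stream(denoiser, control_stream, window=16, shape=(3, 64, 64)):
        frames = torch.randn(window, *shape)
        levels = torch.linspace(0.0, 1.0, window)  # fixed per-position noise level
        for control in control_stream:             # live interactive input per frame
            # One denoising pass over the whole window, conditioned on
            # the current control signal.
            frames = denoiser(frames, levels, control)
            yield frames[0]                        # emit the fully denoised frame
            # Shift the window: retire the clean frame, append fresh noise.
            frames = torch.cat([frames[1:], torch.randn(1, *shape)])

Because each emitted frame immediately frees a slot for fresh noise, the loop never terminates on its own, which is what makes "infinite-length" generation possible in principle.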


Researchers from Seoul National University College of Engineering have announced an optical design technology that dramatically reduces camera volume by folding the lens system with “metasurfaces,” a next-generation nano-optical device.

By arranging metasurfaces on a glass substrate so that light is reflected and travels through the substrate along a folded path, the researchers realized a camera with a thickness of 0.7 mm, much thinner than existing refractive lens systems. The research was published on Oct. 30 in the journal Science Advances.

Traditional cameras stack multiple glass lenses to refract light when capturing images. While this structure delivers high-quality images, the thickness of each lens and the wide spacing between lenses increase the camera's overall bulk, making it difficult to use in devices that require ultra-compact cameras, such as virtual and augmented reality (VR/AR) devices, smartphones, endoscopes, and drones.
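As a rough back-of-the-envelope illustration of why folding the path helps, the short sketch below compares physical thickness against effective optical path length; apart from the reported 0.7 mm thickness, the numbers are illustrative assumptions rather than figures from the paper.

    # Back-of-the-envelope: folding the optical path inside a glass
    # substrate trades thickness for internal reflections. The fold
    # count is an illustrative assumption; only the 0.7 mm substrate
    # thickness comes from the article.
    substrate_thickness_mm = 0.7
    n_folds = 9  # hypothetical number of internal traversals added by folding
    # Each traversal of the substrate adds one thickness to the path.
    effective_path_mm = substrate_thickness_mm * (n_folds + 1)
    print(f"Physical thickness:     {substrate_thickness_mm} mm")
    print(f"Effective optical path: {effective_path_mm:.1f} mm")
    # A conventional refractive stack needs roughly its optical path
    # length in physical depth, so the same ~7 mm path would make the
    # camera about 10x thicker.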

A paper published in Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, by researchers in Carnegie Mellon University’s Human-Computer Interaction Institute, introduces EgoTouch, a tool that uses artificial intelligence to let users control AR/VR interfaces by touching their skin with a finger.
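To make the idea concrete, here is a small, hypothetical sketch of the kind of model a camera-based skin-touch detector could use: a tiny CNN that classifies a cropped hand image as touch versus hover. This illustrates the general approach only and is not EgoTouch’s published architecture.

    # Hypothetical touch-vs-hover classifier over a cropped hand image.
    # Architecture and input size are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TouchClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2),  # logits: [hover, touch]
            )

        def forward(self, crop):   # crop: (B, 3, H, W) hand-region image
            return self.net(crop)

    model = TouchClassifier()
    logits = model(torch.randn(1, 3, 96, 96))
    print(logits.argmax(dim=1))    # 0 = hover, 1 = touch

In practice such a model would run on frames from the headset’s existing cameras, turning any patch of skin into a touch surface without extra hardware.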

Disney’s HoloTile, an omnidirectional floor, allows multiple users to walk in any direction without colliding, enhancing VR immersion. Developed by Disney Imagineer Lanny Smoot, the innovation could revolutionize VR experiences and stage performances. (Video Credit: Disney Parks/YouTube)