
For over a century, galvanic vestibular stimulation (GVS) has been used to stimulate the nerves of the inner ear by passing a small electrical current.

We use GVS in a two-player, escape-the-room-style VR game set in a dark virtual world. The VR player is remote-controlled, like a robot, by a non-VR player who uses GVS to alter the VR player's walking trajectory. We also use GVS to induce the physical sensations of virtual motion and to mitigate motion sickness in VR.

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.

Misha graduated in June 2018 from the MIT Media Lab where she worked in the Fluid Interfaces group with Prof Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, and in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output, i.e., they can use movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user's vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but instead use the body as the interface for more implicit and natural interactions.
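To make that input/output loop concrete, here is a minimal sketch of how a desired heading correction might be translated into a small, safety-clamped GVS current. It is an illustration only, not code from Misha's systems; the gain, current limit, and sign convention are all assumptions.

```python
# Hypothetical sketch of a GVS steering loop: map a desired heading change
# to a small bilateral current. Values and conventions are illustrative only.

MAX_CURRENT_MA = 1.5        # research GVS typically stays in the low-milliamp range
GAIN_MA_PER_DEG = 0.05      # assumed proportional gain


def gvs_current_for_heading_error(heading_error_deg: float) -> float:
    """Return a signed electrode current (mA) for a desired heading correction.

    Positive current -> perceived sway toward one side, negative -> the other.
    The sign convention here is an assumption.
    """
    current = GAIN_MA_PER_DEG * heading_error_deg
    # Clamp to a conservative safety limit.
    return max(-MAX_CURRENT_MA, min(MAX_CURRENT_MA, current))


if __name__ == "__main__":
    # Example: the non-VR "operator" wants the walking player to veer 10 degrees right.
    print(f"{gvs_current_for_heading_error(10.0):.2f} mA")
```

In a real system the current would be ramped gradually and delivered through electrodes behind the ears, with much stricter safety interlocks than this toy clamp.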

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.

An international team of scientists developed augmented reality glasses with technology to receive images beamed from a projector, to resolve some of the existing limitations of such glasses, such as their weight and bulk. The team’s research is being presented at the IEEE VR conference in Saint-Malo, France, in March 2025.

Augmented reality (AR) technology, which overlays virtual objects on an image of the real world viewed through a device's viewfinder or display, has gained traction in recent years with popular gaming apps like Pokémon Go, and real-world applications in areas including education, manufacturing, retail and health care. But the adoption of wearable AR devices has lagged over time due to the heft of their batteries and electronic components.

AR glasses, in particular, have the potential to transform a user’s physical environment by integrating virtual elements. Despite many advances in hardware technology over the years, AR glasses remain heavy and awkward and still lack adequate computational power, battery life and brightness for optimal user experience.
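As a minimal illustration of the overlay step at the heart of any AR device, the sketch below alpha-blends a rendered virtual layer onto a camera frame. The array sizes and values are placeholders, not tied to any product discussed here.

```python
# Minimal illustration of the AR overlay idea: composite a virtual layer onto a
# camera frame by alpha blending. Shapes and values are placeholders.
import numpy as np


def composite(frame: np.ndarray, overlay: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend a rendered virtual layer over the real-world frame.

    frame, overlay: HxWx3 uint8 images; alpha: HxWx1 float in [0, 1]
    (1 = fully virtual pixel, 0 = pass the real world through).
    """
    blended = alpha * overlay.astype(float) + (1.0 - alpha) * frame.astype(float)
    return blended.astype(np.uint8)


if __name__ == "__main__":
    frame = np.full((720, 1280, 3), 90, dtype=np.uint8)            # stand-in camera frame
    overlay = np.zeros_like(frame)
    overlay[300:420, 560:720, 1] = 255                              # a green "virtual object"
    alpha = np.zeros((720, 1280, 1))
    alpha[300:420, 560:720] = 0.8                                   # partially transparent
    out = composite(frame, overlay, alpha)
    print(out.shape, out.dtype)
```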

Meta has unveiled the next iteration of its sensor-packed research eyewear, the Aria Gen 2. This latest model follows the initial version introduced in 2020. The original glasses came equipped with a variety of sensors but lacked a display, and were not designed as either a prototype or a consumer product. Instead, they were exclusively meant for research to explore the types of data that future augmented reality (AR) glasses would need to gather from their surroundings to provide valuable functionality.

In their Project Aria initiative, Meta explored collecting egocentric data—information from the viewpoint of the user—to help train artificial intelligence systems. These systems could eventually comprehend the user’s environment and offer contextually appropriate support in daily activities. Notably, like its predecessor, the newly announced Aria Gen 2 does not feature a display.

Meta has highlighted several advancements in Aria Gen 2 compared to the first generation.

A research team led by Professor Takayuki Hoshino of Nagoya University’s Graduate School of Engineering in Japan has demonstrated the world’s smallest shooting game by manipulating nanoparticles in real time, resulting in a game that is played with particles approximately 1 billionth of a meter in size.

This research is a significant step toward developing a computer interface system that seamlessly integrates virtual objects with real nanomaterials. They published their study in the Japanese Journal of Applied Physics.

The game demonstrates what the researchers call “nano-mixed reality (MR),” which integrates digital technology with the physical nanoworld in real time using high-speed electron beams. These beams generate dynamic patterns of electric fields and forces on a display surface, allowing researchers to control the force field acting on the nanoparticles in real time to move and manipulate them.
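As a toy illustration of that idea, the sketch below moves a charged particle under the force F = qE from a time-varying field pattern, in an overdamped regime where velocity is proportional to force. The field shape, charge, and drag coefficient are arbitrary assumptions, not values from the Nagoya study.

```python
# Toy illustration of "drawing" a force field to steer a nanoparticle.
# Not the Nagoya team's code; field shape, charge, and damping are assumptions.
import math

Q = 1e-18          # particle charge (C), assumed
DAMPING = 1e-9     # viscous drag coefficient (kg/s), assumed (overdamped regime)
DT = 1e-6          # time step (s)


def e_field(x: float, y: float, t: float) -> tuple[float, float]:
    """A dynamic field pattern pulling the particle toward a moving target point."""
    tx, ty = math.cos(t) * 1e-6, math.sin(t) * 1e-6   # target orbits the origin
    return (tx - x) * 1e6, (ty - y) * 1e6             # V/m, linear restoring pattern


def step(x: float, y: float, t: float) -> tuple[float, float]:
    """Overdamped motion: velocity is proportional to the force F = qE."""
    ex, ey = e_field(x, y, t)
    vx, vy = Q * ex / DAMPING, Q * ey / DAMPING
    return x + vx * DT, y + vy * DT


if __name__ == "__main__":
    x, y = 0.5e-6, 0.0
    for i in range(5):
        x, y = step(x, y, i * DT)
        print(f"t={i * DT:.0e} s  x={x:.3e} m  y={y:.3e} m")
```

In the actual system, the field pattern is redrawn by the electron beam fast enough that the particles track the virtual targets in real time; the linear restoring field here is only a stand-in for that pattern.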

February 2025 features Comet CK-25, observed with AI-driven telescopic networks for real-time imaging and analysis. A spectacular planetary alignment of Mercury, Venus, and Mars will be enhanced by augmented reality devices for interactive viewing. A partial lunar eclipse will occur on February 27th-28th, with an immersive experience via the Virtual Lunar Observation Platform (VLOP). Technological advancements highlight new methods of observing and interacting with space events, bridging Earth and the cosmos.

February 2025 is set to mesmerize stargazers and tech enthusiasts alike, as the cosmos aligns with cutting-edge advancements in astronomical observation. This month isn’t just about celestial spectacles; it’s about witnessing how new technology is redefining our view of space from Earth.

Antennas receive and transmit electromagnetic waves, delivering information to our radios, televisions, cellphones and more. Researchers in the McKelvey School of Engineering at Washington University in St. Louis imagine a future where antennas reshape even more applications.

Their new metasurfaces, ultra-thin materials made of tiny nanoantennas that can both amplify and control light in very precise ways, could replace conventional refractive surfaces from eyeglasses to smartphone lenses and improve dynamic applications such as augmented reality/virtual reality and LiDAR (light detection and ranging).

While metasurfaces can manipulate light very precisely and efficiently, enabling powerful optical devices, they often suffer from a major limitation: metasurfaces are highly sensitive to the polarization of light, meaning they can only interact with light that is oriented and traveling in a certain direction. While this is useful in polarized sunglasses that block glare and in other communications and imaging technologies, requiring a specific polarization dramatically reduces the flexibility and applicability of metasurfaces.
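One way to see the limitation is with Jones calculus. In the illustrative comparison below (generic matrices, not the Washington University design), a polarization-sensitive element imprints its phase only on x-polarized light, while a polarization-independent element applies the same phase to any input polarization.

```python
# Illustrative Jones-calculus comparison (generic matrices, assumed for the
# example): a polarization-sensitive element phases only x-polarized light,
# while a polarization-independent element phases any input polarization.
import numpy as np

phi = np.pi / 2                                    # desired phase shift

J_sensitive = np.array([[np.exp(1j * phi), 0],     # acts on x-polarization only
                        [0,                1]])
J_independent = np.exp(1j * phi) * np.eye(2)       # same phase for any polarization

x_pol = np.array([1, 0])                           # horizontally polarized light
y_pol = np.array([0, 1])                           # vertically polarized light

for name, J in [("sensitive", J_sensitive), ("independent", J_independent)]:
    print(name,
          "| x out:", np.round(J @ x_pol, 3),
          "| y out:", np.round(J @ y_pol, 3))
```

The sensitive element leaves y-polarized light untouched, which is exactly why such a device only works for one polarization of the incoming light.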

Augmented reality (AR) has become a hot topic in the entertainment, fashion, and makeup industries. Though a few different technologies exist in these fields, dynamic facial projection mapping (DFPM) is among the most sophisticated and visually stunning ones. Briefly put, DFPM consists of projecting dynamic visuals onto a person’s face in real-time, using advanced facial tracking to ensure projections adapt seamlessly to movements and expressions.

While imagination should ideally be the only thing limiting what’s possible with DFPM in AR, this approach is held back by technical challenges. Projecting visuals onto a moving face requires the DFPM system to detect the user’s facial features, such as the eyes, nose, and mouth, in less than a millisecond.

Even slight delays in processing or minuscule misalignments between the camera’s and projector’s image coordinates can result in projection errors—or “misalignment artifacts”—that viewers can notice, ruining the immersion.
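For a rough sense of the tolerances involved, the sketch below maps a tracked facial landmark from camera to projector coordinates with a planar homography and estimates the pixel drift caused by end-to-end latency while the face is moving. The homography, face speed, and latency figures are assumptions for illustration, not numbers from the DFPM work.

```python
# Back-of-the-envelope sketch: camera->projector mapping plus latency-induced
# drift for a moving face. The homography and numbers are assumptions.
import numpy as np

# Assumed planar homography from camera pixels to projector pixels.
H = np.array([[1.02, 0.01, -12.0],
              [0.00, 1.03,   8.0],
              [0.00, 0.00,   1.0]])


def cam_to_proj(u: float, v: float) -> tuple[float, float]:
    """Map a camera-space landmark to projector space (homogeneous coordinates)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w


# Latency-induced misalignment: a facial landmark moving at 300 px/s in camera
# space with 10 ms of end-to-end latency drifts ~3 px before the projector
# catches up, which is already a visible artifact at close viewing distances.
face_speed_px_per_s = 300.0
latency_s = 0.010
drift_px = face_speed_px_per_s * latency_s

print("projector point:", cam_to_proj(640.0, 360.0))
print(f"misalignment from latency: ~{drift_px:.1f} px")
```

Cutting latency into the sub-millisecond range shrinks that drift proportionally, which is why the detection speed quoted above matters so much.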

Meta Platforms is assembling a specialized team within its Reality Labs division, led by Marc Whitten, to develop the AI, sensors, and software that could power the next wave of humanoid robots.

“We believe expanding our portfolio to invest in this field will only accrue value to Meta AI and our mixed and augmented reality programs,” Bosworth said. How is Meta planning to advance its robotics work?

Bosworth is Meta’s CTO; Bloomberg News reported the hiring first. Meta has also appointed John Koryl as vice president of retail. Koryl, the former CEO of second-hand e-commerce platform The RealReal, will focus on boosting direct sales of Meta’s Quest mixed reality headsets and AI wearables, including Ray-Ban Meta smart glasses, developed in partnership with EssilorLuxottica.

Meta’s initial play is to become the backbone of the industry, much as Google did with Android for smartphones. The company has already started talks with robotics firms like Unitree Robotics and Figure AI. With plans to hire 100 engineers this year and billions committed to AI and AR/VR, Meta is placing a major bet on humanoid robots as the next leap in smart home technology.


Meta is stepping into humanoid robotics, competing with firms like Nvidia-backed Figure AI and Tesla.

In today’s AI news, OpenAI will ship GPT-5 in a matter of months and streamline its AI models into more unified products, said CEO Sam Altman in an update. Specifically, Altman says the company plans to launch GPT-4.5 as its last non-chain-of-thought model and integrate its latest o3 reasoning model into GPT-5.

In other advancements, Harvey, a San Francisco AI startup focused on the legal industry, has raised $300 million in a funding round led by Sequoia that values the startup at $3 billion — double the amount investors valued it at in July. The Series D funding round builds on the momentum and reflects investors’ enthusiasm for AI tools …

Meanwhile, Meta is in talks to acquire South Korean AI chip startup FuriosaAI, according to people familiar with the matter, a deal that could boost the social media giant’s custom chip efforts amid a shortage of Nvidia chips and a growing demand for alternatives. The deal could be completed as early as this month.

Then, AI took another step into Hollywood today with the launch of a new filmmaking tool from showbiz startup Flawless. The product — named DeepEditor — promises cinematic wizardry for the digital age. For movie makers, the tool offers photorealistic edits without a costly return to set.

In videos, join IBM’s Boris Sobolev as he explains how model customization can enhance reliability and decision-making of agentic systems. Discover practical tips for data collection, tool use, and pushing the boundaries of what your AI can achieve. Supercharge your AI agents for peak performance!

Also in videos, Cerebras CEO and cofounder Andrew Feldman talks about his startup. Then, moderator Marc Pollefeys (Professor of Computer Science at ETH Zurich and Director of the Microsoft Mixed Reality and AI Lab in Zurich) leads an expert panel. The discussion will focus on advancements in robotics and the impact of embodied AI in complex, real-world scenarios. Speakers include Marco Hutter (Director of the Robotic Systems Lab at ETH and Senior Director of Research at the AI Institute), Péter Fankhauser (Co-Founder & CEO at ANYbotics), and Raquel Urtasun (Founder & CEO at Waabi and Professor of Computer Science at the University of Toronto).

We close out with Databricks CEO Ali Ghodsi speaking exclusively with Worldwide Exchange anchor Frank Holland about the new partnership announced on Thursday between the data analytics startup and the European tech giant.