Archive for the ‘virtual reality’ category

Nov 18, 2024

AI-based tool creates simple interfaces for virtual and augmented reality

Posted in categories: augmented reality, robotics/AI, virtual reality

A paper published in the Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology by researchers in Carnegie Mellon University’s Human-Computer Interaction Institute introduces EgoTouch, a tool that uses artificial intelligence to control AR/VR interfaces by touching the skin with a finger.

Nov 16, 2024

From wiredengineers, October 4, 2024: Disney’s HoloTile is a groundbreaking 360-degree treadmill designed for virtual reality

Posted in categories: engineering, virtual reality

It allows multiple users to walk in any direction without colliding, enhancing VR immersion. Developed by Disney Imagineer Lanny Smoot, this innovation could revolutionize VR experiences and stage performances. (Video Credit: Disney Parks/YouTube)

Nov 16, 2024

Metalenses harness AI for high-resolution, full-color imaging for compact optical systems

Posted in categories: augmented reality, mobile phones, robotics/AI, virtual reality

Modern imaging systems, such as those used in smartphones, virtual reality (VR), and augmented reality (AR) devices, are constantly evolving to become more compact, efficient, and high-performing. Traditional optical systems rely on bulky glass lenses, which have limitations like chromatic aberrations, low efficiency at multiple wavelengths, and large physical sizes. These drawbacks present challenges when designing smaller, lighter systems that still produce high-quality images.

Nov 13, 2024

Virtual training uses generative AI to teach robots how to traverse real-world terrain

Posted in categories: physics, robotics/AI, virtual reality

MIT CSAIL researchers have developed a generative AI system, LucidSim, to train robots in virtual environments for real-world navigation. Using ChatGPT and physics simulators, robots learn to traverse complex terrain. This method outperforms traditional training approaches, suggesting a new direction for robot training.


A team of roboticists and engineers at MIT CSAIL and the Institute for AI and Fundamental Interactions has developed a generative AI approach to teaching robots how to traverse terrain and move around objects in the real world.

Nov 11, 2024

Who’s afraid of Artificial Intelligence?

Posted in categories: robotics/AI, transportation, virtual reality

Artificial Intelligence is everywhere in Europe.

While some are worried about its long-term impact, a team of researchers at the Vienna University of Technology is working on responsible ways to use AI.

Nov 10, 2024

International Conference on Holodecks: Five Key Takeaways

Posted in categories: biotech/medical, virtual reality

Shaking hands with a character from the Fortnite video game. Visualizing a patient’s heart in 3D—and “feeling” it beat. Touching the walls of the Roman Colosseum—from your sofa in Los Angeles. What if we could touch and interact with things that aren’t physically in front of us? This reality might be closer than we think, thanks to an emerging technology: the holodeck.

The name might sound familiar. In Star Trek: The Next Generation, a holodeck was an advanced 3D virtual reality world that created the illusion of solid objects. Now, immersive technology researchers at USC and beyond are taking us one step closer to making this science fiction concept a science fact.

Nov 8, 2024

Frontiers: Researchers wirelessly controlled butterflies in a virtual environment with human brain organoids

Posted in categories: biological, robotics/AI, virtual reality

Wetware computing and organoid intelligence is an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks built from biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally one accessible to researchers worldwide.

For this reason, we developed a hardware and software system that allows electrophysiological experiments on an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that allows fully automated medium flow and exchange, reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data.

A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or through interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, the API also controls pumps, digital cameras, and UV lights for molecule uncaging. This allows the execution of complex 24/7 experiments, including closed-loop strategies and processing with the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use. As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system’s architecture and provides specific examples of experiments and results.
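The closed-loop strategy described in the abstract — read organoid activity, adjust stimulation, repeat — can be sketched in plain Python. This is a toy illustration only: the function names, the simulated spike counts, and the proportional control rule are all hypothetical and are not part of the actual Neuroplatform API.

```python
import random

def read_spike_count(rng):
    # Placeholder for an electrode readout; here we just simulate
    # a noisy firing rate in spikes per second.
    return rng.randint(20, 80)

def closed_loop_session(target_rate=50, steps=10, seed=0):
    """Toy proportional controller: nudge a stimulation amplitude
    toward a target firing rate, mimicking a closed-loop experiment."""
    rng = random.Random(seed)
    amplitude = 1.0  # arbitrary stimulation units
    history = []
    for _ in range(steps):
        rate = read_spike_count(rng)              # 1. read activity
        amplitude += 0.01 * (target_rate - rate)  # 2. proportional update
        amplitude = max(0.0, amplitude)           # 3. keep amplitude non-negative
        history.append((rate, round(amplitude, 3)))
    return history

history = closed_loop_session()
print(history[-1])
```

A real experiment would replace `read_spike_count` with an electrophysiology query and the amplitude update with a stimulation command, possibly driven by a reinforcement learning policy rather than this fixed rule.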

The recent rise in wetware computing and, consequently, artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.

The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.

Nov 4, 2024

Coarse-Grained Simulations of Adeno-Associated Virus and Its Receptor Reveal Influences on Membrane Lipid Organization and Curvature

Posted in categories: biotech/medical, virtual reality

Adeno-associated virus (AAV) is a well-known gene delivery tool with a wide range of applications, including as a vector for gene therapies. However, the molecular mechanism of its cell entry remains unknown. Here, we performed coarse-grained molecular dynamics simulations of the AAV serotype 2 (AAV2) capsid and the universal AAV receptor (AAVR) in a model plasma membrane environment. Our simulations show that binding of the AAV2 capsid to the membrane induces membrane curvature, along with the recruitment and clustering of GM3 lipids around the AAV2 capsid. We also found that the AAVR binds to the AAV2 capsid at the VR-I loops using its PKD2 and PKD3 domains, whose binding poses differ from those in previous structural studies. These first molecular-level insights into AAV2 membrane interactions suggest a complex process during the initial phase of AAV2 capsid internalization.

Nov 3, 2024

Disney forms dedicated AI and XR group to coordinate company-wide use and adoption

Posted in categories: augmented reality, business, robotics/AI, virtual reality

Disney is adding another layer to its AI and extended reality strategies. As first reported by Reuters, the company recently formed a dedicated emerging technologies unit. Dubbed the Office of Technology Enablement, the group will coordinate the company’s exploration, adoption and use of artificial intelligence, AR and VR tech.

It has tapped Jamie Voris, previously the CTO of its Studios Technology division, to oversee the effort. Before joining Disney in 2010, Voris was the chief technology officer at the National Football League. More recently, he led the development of the company’s Apple Vision Pro app. Voris will report to Alan Bergman, the co-chairman of Disney Entertainment. Reuters reports the company eventually plans to grow the group to about 100 employees.

“The pace and scope of advances in AI and XR are profound and will continue to impact consumer experiences, creative endeavors, and our business for years to come — making it critical that Disney explore the exciting opportunities and navigate the potential risks,” Bergman wrote in an email Disney shared with Engadget. “The creation of this new group underscores our dedication to doing that and to being a positive force in shaping responsible use and best practices.”

Oct 2, 2024

Researchers harness liquid crystal structures to design simple, yet versatile bifocal lenses

Posted in categories: biological, computing, virtual reality

Researchers have developed a new type of bifocal lens that offers a simple way to achieve two foci (or spots) with intensities that can be adjusted by applying external voltage. The lenses, which use two layers of liquid crystal structures, could be useful for various applications such as optical interconnections, biological imaging, augmented/virtual reality devices and optical computing.
