
Over the past several decades, researchers have moved from using electric currents to manipulating light waves in the near-infrared range for telecommunications applications such as high-speed 5G networks, biosensors on a chip, and driverless cars. This research area, known as integrated photonics, is evolving fast, and investigators are now exploring the shorter, visible wavelength range to develop a broad variety of emerging applications. These include chip-scale LIDAR (light detection and ranging), AR/VR/MR (augmented/virtual/mixed reality) goggles, holographic displays, quantum information processing chips, and implantable optogenetic probes in the brain.

The one device critical to all these applications is an optical phase modulator, which controls the phase of a light wave, much as the phase of radio waves is modulated in wireless computer networks. With a phase modulator, researchers can build an on-chip optical switch that channels light into different waveguide ports. With a large network of these optical switches, researchers could create sophisticated integrated optical systems that control light propagating on a tiny chip or light emitted from it.
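To illustrate how a phase modulator becomes a switch, here is a minimal sketch of an ideal Mach-Zehnder interferometer, the textbook building block for such switching networks. The function name is ours, and port naming and sign conventions vary between real implementations; this models only the idealized transfer function.

```python
import numpy as np

def mzi_port_powers(delta_phi):
    """Output powers of an ideal Mach-Zehnder switch (two 50/50 couplers)
    as a function of the phase difference set by the modulator.

    Returns (cross, bar): with zero phase difference all light exits the
    cross port; a pi phase shift routes it all to the bar port.
    """
    cross = np.cos(delta_phi / 2.0) ** 2
    bar = np.sin(delta_phi / 2.0) ** 2
    return cross, bar

# A pi phase shift flips the switch: light moves from cross to bar port.
print(mzi_port_powers(0.0))     # (1.0, 0.0)
print(mzi_port_powers(np.pi))   # (~0.0, ~1.0)
```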

But phase modulators in the visible range are very hard to make: there are no materials that are transparent enough in the visible spectrum while also providing large tunability, whether through thermo-optical or electro-optical effects. Currently, the two most suitable materials are silicon nitride and lithium niobate. While both are highly transparent in the visible range, neither provides much tunability. Visible-spectrum phase modulators based on these materials are thus not only large but also power-hungry: individual waveguide-based modulators range from hundreds of microns to several millimeters in length, and a single modulator consumes tens of milliwatts for phase tuning. Researchers trying to achieve large-scale integration, embedding thousands of devices on a single microchip, have until now been stymied by these bulky, energy-consuming devices.
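A back-of-the-envelope calculation shows why thermo-optic modulators in these materials end up so long. The phase accumulated along a heated waveguide of length L follows the standard thermo-optic relation:

\[
\Delta\varphi = \frac{2\pi}{\lambda}\,\frac{dn}{dT}\,\Delta T\,L
\qquad\Longrightarrow\qquad
L_{\pi} = \frac{\lambda}{2\,(dn/dT)\,\Delta T}
\]

Plugging in typical textbook values (our assumptions, not figures from the work described here): red light at \(\lambda \approx 630\) nm, a silicon nitride thermo-optic coefficient of roughly \(2.5\times10^{-5}\,\mathrm{K}^{-1}\), and a 50 K temperature swing give \(L_{\pi} \approx 250\) microns, consistent with the hundreds-of-microns device lengths quoted above.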

As part of its recently announced rebranding, Facebook is doubling down on its vision of the metaverse, an immersive virtual-reality environment for gaming, work meetings, and socializing. In promotional materials, Mark Zuckerberg and his friends enter the metaverse via the company’s own Oculus headsets, and are transformed into cartoon-y animated torsos, often while arranged around a virtual boardroom.

According to Zuckerberg, the metaverse promises an at-work reality better than our own, with lush backdrops and infinite personal customization (as long as that customization stops at the waist for humanoid characters). Borrowing elements from world-building games and environments like Second Life and Fortnite, and inspiration from science-fiction referents like Ready Player One and The Matrix, the insinuation is that working within the metaverse will be fun. (This despite the irony that all of these virtual worlds are positioned as dystopias by their creators.)


Working at the intersection of hardware and software engineering, researchers are developing new techniques for improving 3D displays for virtual and augmented reality technologies.

WIRED sat down with Timoni West of Unity to sift fantasy from reality and pin down what XR is actually good at. It may come as a surprise that much of it relies on collecting large amounts of data. The following interview is a transcript of our conversation, lightly edited for clarity and length.

WIRED: So let’s start with sort of an ontological question. There’s been this idea that we’ll be in or go to the metaverse, or several metaverses, which tech companies posit will exist in VR or AR. Do you see VR and AR as being more of a tool or a destination?

Timoni West: That’s a great question. I would actually say neither. I see XR as one of the many different mediums you could choose to work in. For example, we actually have an AR mobile companion app [in beta] that allows you to scan a space and gray box it out, put down objects, automatically tag things. So I’m using AR to do the things that AR is best for. I’ll use VR to do the things that VR is best for, like presence, being able to meet together, sculpt, or do anything that’s, you know, sort of intrinsically 3D.

We explore human enhancement and personal performance hacking with Matt Ward (@mattwardio), host of The Disruptors podcast, startup investor, adviser, and business innovation consultant. Matt and I thought it would be fun to do two episodes, one here on MIND & MACHINE and the other on The Disruptors, where we explore what we’ve learned, the ideas we’ve formed and our takeaways across all these different fields that we cover.

So with this episode here on MIND & MACHINE, we focus on human enhancement: technologies that are extending lifespan and enhancing human capability. Then we get into what Matt and I are currently doing to maximize our own performance, our ability to think more clearly and to live more energetic, vibrant lives, all of it heavily informed by the amazing guests from the different fields we cover.

In the other part of this discussion, on The Disruptors, we look at another set of subjects, from space to AI to augmented and virtual reality, so I encourage you to check that out as well.

For the other part of the conversation on The Disruptors: https://is.gd/mv1Vez and https://youtu.be/PtpwgTr4GSU

MIND & MACHINE features interviews by August Bradley with bold thinkers and leaders in transformational technologies.

Subscribe to the MIND & MACHINE newsletter: https://www.mindandmachine.io/newsletter
MIND & MACHINE website: https://www.MindAndMachine.io
Subscribe to the podcast on iTunes: https://www.mindandmachine.io/itunes
Android or other apps: https://www.mindandmachine.io/android
Show host August Bradley on Twitter: https://twitter.com/augustbradley


An innovator in early AR systems has a dire prediction: the metaverse could change the fabric of reality as we know it.

Louis Rosenberg, a computer scientist and developer of the first functional AR system at the Air Force Research Laboratory, penned an op-ed in Big Think this weekend warning that the metaverse, an immersive VR and AR world currently being developed by The Company Formerly Known as Facebook, could create what sounds like a real-life cyberpunk dystopia.

“I am concerned about the legitimate uses of AR by the powerful platform providers that will control the infrastructure,” Rosenberg wrote in the essay.

When comparing Meta — formerly Facebook — and Microsoft’s approaches to the metaverse, it’s clear Microsoft has a much more grounded and realistic vision. Although Meta currently leads in the provision of virtual reality (VR) devices (through its ownership of what was previously called Oculus), Microsoft is adapting technologies that are currently more widely used. The small, steady steps Microsoft is making today put it in a better position to be one of the metaverse’s future leaders. However, such a position comes with responsibilities, and Microsoft needs to be prepared to face them.

The metaverse is a virtual world where users can share experiences and interact in real time within simulated scenarios. To be clear, no one yet knows what it will end up looking like, what hardware it will use, or which companies will be the main players; these are still early days. However, what is certain is that VR will play a key enabling role: VR-related technologies such as simultaneous localization and mapping (SLAM), facial recognition, and motion tracking will be vital for developing metaverse-based use cases.


Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future where the holographic displays look even more like real life. In its own pursuit of these better displays, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. The lab's most recent advances in this area are detailed in a paper published today (November 12, 2021) in Science Advances and in work that will be presented at SIGGRAPH ASIA 2021 in December.

At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, instead of 3D – or holographic – images like we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working to come up with solutions to bridge this gap between simulation and reality while creating displays that are more visually appealing and easier on the eyes.
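For readers curious what computing a hologram involves at all, below is a minimal sketch of the classic Gerchberg-Saxton algorithm, a decades-old baseline for phase-only holograms. This is emphatically not the Stanford lab's AI-based method, just the simplest standard starting point, with a single FFT standing in for free-space propagation.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Estimate a phase-only hologram whose far-field intensity
    approximates a target image (classic Gerchberg-Saxton loop)."""
    rng = np.random.default_rng(seed)
    # Random initial phase guess in the hologram (SLM) plane.
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate to the image plane (modeled as a single FFT).
        image_field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase but impose the target amplitude.
        constrained = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back; the modulator can only realize phase, so keep angle.
        phase = np.angle(np.fft.ifft2(constrained))
    return phase

# Toy usage: a bright square on a dark background as the target image.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
hologram_phase = gerchberg_saxton(target)
```

Methods like the Stanford lab's replace pieces of this loop (notably the idealized propagation model) with learned components so the result matches what a real display physically produces.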

Varjo’s XR-3 headset has perhaps the best passthrough view of any MR headset on the market, thanks to color cameras that offer fairly high resolution and a wide field of view. But rather than just using the passthrough view for AR (bringing virtual objects into the real world), Varjo has developed a new tool to do the reverse: bringing real objects into the virtual world.

At AWE 2021 this week I got my first glimpse of ‘Varjo Lab Tools’, a soon-to-be-released software suite that will work with the company’s XR-3 mixed reality headset. The tool lets users trace arbitrary shapes that then become windows into the real world, while the rest of the view remains virtual.
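Conceptually, the effect reduces to per-pixel compositing between the rendered frame and the camera feed. Here is a hypothetical numpy sketch of that blend; the function name and the CPU-side simplification are ours, not Varjo's actual pipeline, which runs on the headset's video hardware.

```python
import numpy as np

def composite_passthrough(virtual, passthrough, mask):
    """Show the camera feed wherever the user-traced mask is 1,
    keeping the rendered virtual scene everywhere else.

    virtual, passthrough: HxWx3 float arrays in [0, 1].
    mask: HxW float array in [0, 1]; traced 'windows' are 1.
    """
    m = mask[..., np.newaxis]  # broadcast the mask over color channels
    return m * passthrough + (1.0 - m) * virtual
```

Soft mask values between 0 and 1 would feather the boundary between the real and virtual regions rather than cutting a hard edge.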

Elon Musk’s revolutionary company Neuralink plans to insert computer chips into people’s brains, but what if there is a safer and even higher-performing way of merging humans and machines in the future?
Enter DARPA’s plan to foster non-invasive brain-computer interfaces, which led the organization Battelle to create a kind of neural dust to interface with our brains, potentially the first step toward having nanobots inside the human body.

How will Neuralink deal with this potential rival and its cutting-edge technology? Its possibilities in full-dive virtual reality games, medical applications, and merging humans with artificial intelligence, along with its potential to scale around the world, are enormous.

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

#neuralink #ai #elonmusk

Credits:

https://www.youtube.com/watch?v=PhzDIABahyc
https://www.bensound.com/