Archive for the ‘virtual reality’ category: Page 34

Nov 12, 2021

Stanford Uses AI To Make Holographic Displays Look Even More Like Real Life

Posted in categories: robotics/AI, virtual reality

Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future where the holographic displays look even more like real life. In pursuit of these better displays, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. Its most recent advances in this area are detailed in a paper published today (November 12, 2021) in Science Advances, along with work to be presented at SIGGRAPH Asia 2021 in December.

At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, instead of 3D – or holographic – images like we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working to come up with solutions to bridge this gap between simulation and reality while creating displays that are more visually appealing and easier on the eyes.
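
The paper itself goes further than anything shown here, but the classical starting point for computing a phase-only hologram is iterative phase retrieval. Below is a minimal Gerchberg-Saxton sketch in Python (assuming a far-field, Fourier propagation model and a phase-only spatial light modulator; the function name and parameters are illustrative, not from the paper). Broadly speaking, the Stanford approach swaps idealized propagation models like this one for learned, camera-calibrated ones.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Iteratively compute a phase-only hologram whose far-field
    (Fourier-plane) intensity approximates target_amplitude**2."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate the unit-amplitude, phase-only SLM field to the image plane.
        image_field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase, but enforce the target amplitude.
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and keep only the phase (phase-only SLM constraint).
        phase = np.angle(np.fft.ifft2(image_field))
    return phase

# Usage: slm_phase = gerchberg_saxton(np.sqrt(target_intensity_image))
```

Speckle and haze left behind by idealized models of this kind are among the artifacts the learned, camera-in-the-loop methods aim to suppress.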

Nov 12, 2021

Varjo’s Unique Masking Tool Lets You Bring the Real World into VR

Posted in categories: augmented reality, virtual reality

Varjo’s XR-3 headset has perhaps the best passthrough view of any MR headset on the market, thanks to color cameras that offer fairly high resolution and a wide field of view. But rather than just using the passthrough view for AR (bringing virtual objects into the real world), Varjo has developed a new tool to do the reverse (bringing real objects into the virtual world).

At AWE 2021 this week I got my first glimpse of ‘Varjo Lab Tools’, a soon-to-be-released software suite that will work with the company’s XR-3 mixed reality headset. The tool allows users to trace arbitrary shapes that then become windows into the real world, while the rest of the view remains virtual.
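
Varjo hasn’t said how Lab Tools is implemented, but the core idea it describes, masked compositing of the passthrough camera feed over the rendered VR frame, is easy to sketch. A minimal per-frame version follows (NumPy, with illustrative function and array names rather than anything from Varjo’s SDK):

```python
import numpy as np

def composite_passthrough(virtual_rgb, passthrough_rgb, mask):
    """Per-pixel blend of a rendered VR frame with the passthrough camera
    frame. mask is 1.0 inside the user-drawn "window to reality", 0.0 where
    the view stays fully virtual; intermediate values feather the edge."""
    m = mask[..., np.newaxis] if mask.ndim == 2 else mask
    return m * passthrough_rgb + (1.0 - m) * virtual_rgb

# Illustrative frame: a rectangular traced shape becomes a window into reality.
h, w = 720, 1280
virtual = np.zeros((h, w, 3), dtype=np.float32)      # stand-in for the VR render
passthrough = np.ones((h, w, 3), dtype=np.float32)   # stand-in for the camera feed
mask = np.zeros((h, w), dtype=np.float32)
mask[200:520, 400:900] = 1.0                         # the traced shape
frame = composite_passthrough(virtual, passthrough, mask)
```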

Nov 12, 2021

Is Elon Musk’s NEURALINK ALREADY OBSOLETE? | Future of Brain Computer Interfaces

Posted in categories: biotech/medical, Elon Musk, nanotechnology, robotics/AI, virtual reality

Elon Musk’s revolutionary company Neuralink plans to insert computer chips into people’s brains, but what if there’s a safer and even more performant way of merging humans and machines in the future?
Enter DARPA’s plan to support the emergence of non-invasive brain-computer interfaces, which led the organization Battelle to create a kind of neural dust to interface with our brains, a possible first step toward having nanobots inside the human body in the future.

How will Neuralink deal with this potential rival and its cutting-edge technology? The possibilities are enormous: full-dive virtual reality games, medical applications, merging humans with artificial intelligence, and the potential to scale all around the world.


Nov 11, 2021

Nvidia’s Omniverse adds AR/VR viewing, AI training, and AI avatar creation

Posted in categories: augmented reality, robotics/AI, virtual reality

Nvidia’s Omniverse, billed as a “metaverse for engineers,” has grown to more than 700 companies and 70,000 individual creators working on projects to simulate digital twins that replicate real-world environments in a virtual space.

The Omniverse is Nvidia’s simulation and collaboration platform delivering the foundation of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. Omniverse is now moving from beta to general availability, and it has been extended to software ecosystems that put it within reach of 40 million 3D designers.

And today, during Nvidia CEO Jensen Huang’s keynote at the Nvidia GTC online conference, Nvidia said it has added features such as Omniverse Replicator, which makes it easier to train deep learning neural networks, and Omniverse Avatar, which makes it simple to create virtual characters that can be used in the Omniverse or other worlds.
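
The article doesn’t show Replicator’s API, but the idea behind synthetic-data tools of this kind, randomize the scene, render it, and keep the sampled parameters as free ground-truth labels, can be sketched in plain Python. The render_scene callback below is a hypothetical stand-in for whatever renderer produces the frames; none of the names come from Nvidia’s actual API:

```python
import random

def sample_scene_params():
    """Randomize the parameters a renderer would use for one synthetic frame;
    because we choose the parameters, ground-truth labels come for free."""
    return {
        "object_class": random.choice(["pallet", "forklift", "worker"]),
        "object_pose_xy_yaw": [random.uniform(-2.0, 2.0),
                               random.uniform(-2.0, 2.0),
                               random.uniform(0.0, 3.14)],
        "light_intensity_lux": random.uniform(200, 2000),
        "camera_height_m": random.uniform(1.2, 2.5),
        "texture_id": random.randrange(50),
    }

def generate_dataset(n_frames, render_scene):
    """render_scene is a hypothetical callback into whatever renderer you use;
    it takes the sampled parameters and returns an image."""
    dataset = []
    for _ in range(n_frames):
        params = sample_scene_params()
        image = render_scene(params)        # the renderer does the heavy lifting
        dataset.append((image, params))     # image plus perfect labels
    return dataset
```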

Nov 10, 2021

South Korean capital Seoul will become world’s first ‘metaverse’ city

Posted in category: virtual reality

‘Metaverse Seoul’ will let residents visit famous tourist attractions, attend festivals, and even file paperwork with the local council in a virtual reality city hall.

Nov 10, 2021

Lynx R-1 MR Headset Kickstarter Comes to an End with over $800,000

Posted in categories: augmented reality, virtual reality

French startup Lynx launched a Kickstarter campaign for Lynx R-1 in October, a standalone MR headset which is capable of both VR and passthrough AR. Starting at €530 (or $500 if you’re not subject to European sales tax), the MR headset attracted a strong response from backers as it passed its initial funding goal in under 15 hours, going on to garner over $800,000 throughout the month-long campaign.

Update (November 10th, 2021): The Lynx R-1 Kickstarter is now over, having attracted €725,281 (~$835,000) from 1,216 backers. In its final hours the campaign managed to pass its first stretch goal at $700,000: a free facial interface pad.


Nov 9, 2021

NVIDIA’s new AI brain for robots is six times more powerful than its predecessor

Posted in categories: biotech/medical, robotics/AI, supercomputing, virtual reality

NVIDIA has launched a follow-up to the Jetson AGX Xavier, its $1,100 AI brain for robots that it released back in 2018. The new module, called the Jetson AGX Orin, has six times the processing power of Xavier even though it has the same form factor and can still fit in the palm of one’s hand. NVIDIA designed Orin to be an “energy-efficient AI supercomputer” meant for use in robotics, autonomous and medical devices, as well as edge AI applications that may seem impossible at the moment.

The chipmaker says Orin is capable of 200 trillion operations per second. It’s built on the NVIDIA Ampere architecture GPU, features Arm Cortex-A78AE CPUs and comes with next-gen deep learning and vision accelerators, giving it the ability to run multiple AI applications. Orin will give users access to the company’s software and tools, including the NVIDIA Isaac Sim scalable robotics simulation application, which enables photorealistic, physically-accurate virtual environments where developers can test and manage their AI-powered robots. For users in the healthcare industry, there’s NVIDIA Clara for AI-powered imaging and genomics. And for autonomous vehicle developers, there’s NVIDIA Drive.

The company has yet to reveal what the Orin will cost, but it intends to make the Jetson AGX Orin module and developer kit available in the first quarter of 2022. Those interested can register to be notified about its availability on NVIDIA’s website. The company will also talk about Orin at NVIDIA GTC, which will take place from November 8th through 11th.
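
A quick back-of-envelope check of those headline figures (assuming the commonly cited 32 INT8 TOPS peak for the Jetson AGX Xavier; real-world throughput depends on precision, memory bandwidth, and utilization, and the 5-GOP model below is purely hypothetical):

```python
# Back-of-envelope check of the headline figures (theoretical peaks only).
orin_tops = 200      # stated: 200 trillion operations per second
xavier_tops = 32     # Jetson AGX Xavier's commonly cited INT8 peak (assumption)

print(f"Orin vs. Xavier peak: {orin_tops / xavier_tops:.1f}x")   # ~6.2x, consistent with "six times"

# Theoretical ceiling for a hypothetical vision model needing 5 GOPs per frame.
model_gops_per_frame = 5
max_fps = (orin_tops * 1e12) / (model_gops_per_frame * 1e9)
print(f"Upper bound: {max_fps:,.0f} frames/s (ignores memory, I/O and utilization)")
```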

Nov 6, 2021

The Simulation Hypothesis | Is Anything ‘Real’

Posted in categories: computing, cosmology, Elon Musk, entertainment, mathematics, particle physics, virtual reality

Have you ever seen the popular movie called The Matrix? In it, the main character Neo realizes that he and everyone else he had ever known had been living in a computer-simulated reality. But even after taking the red pill and waking up from his virtual world, how can he be so sure that this new reality is the real one? Could it be that this new reality of his is also a simulation? In fact, how can anyone tell the difference between simulated reality and a non-simulated one? The short answer is, we cannot. Today we are looking at the simulation hypothesis which suggests that we all might be living in a simulation designed by an advanced civilization with computing power far superior to ours.

The simulation hypothesis was popularized by Nick Bostrom, a philosopher at the University of Oxford, in 2003. He proposed that members of an advanced civilization with enormous computing power may run simulations of their ancestors, perhaps to learn about their culture and history. If this is the case, he reasoned, then they may have run many simulations, making the vast majority of minds simulated rather than original. So there is a high chance that you and everyone you know might be just a simulation. Do not buy it? There is more!
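
The counting argument is easy to make concrete. A rough sketch follows; the input numbers are arbitrary placeholders, and Bostrom’s 2003 paper gives the careful formulation:

```python
# Rough counting argument behind "most minds would be simulated".
f_posthuman = 0.01     # fraction of civilizations that ever get the computing power (arbitrary)
sims_per_civ = 1000    # average ancestor-simulations each such civilization runs (arbitrary)

# Each simulation holds roughly as many minds as one original history,
# so the fraction of all minds that are simulated is:
f_simulated = (f_posthuman * sims_per_civ) / (f_posthuman * sims_per_civ + 1)
print(f"{f_simulated:.1%} of minds would be simulated")   # ~90.9%
```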


Nov 6, 2021

Apple Patent Outlines Direct Retinal Projection To Beam AR Images Onto Your Eyeballs

Posted in categories: augmented reality, virtual reality

Apple is looking into how it may change how you view AR (augmented reality) altogether…literally. Instead of projecting an image onto a lens that is viewed by someone wearing an AR headset or glasses, Apple envisions beaming the image directly onto the user’s eyeball itself.

Apple recently unveiled its upcoming lineup of new products. What it did not showcase, however, was revealed in a recent patent: Apple is researching how it can change the way we see AR and the future of its “Apple Glass” product, if one ever comes to exist. The patent reveals how Apple intends to move away from the traditional approach of projecting an image onto a lens and toward projecting the image directly onto the retina of the wearer. This would be achieved through the use of micro projectors.

The issue that Apple is trying to avoid is the nausea and headaches some people experience while viewing AR and VR (virtual reality). The patent refers to the issue as “accommodation-convergence mismatch,” which causes eyestrain for some. Apple hopes that by using its “Direct Retinal Projector” it can alleviate those symptoms and make AR and VR accessible to more users.
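
“Accommodation-convergence mismatch” has a concrete geometric meaning: on a conventional stereoscopic headset the eyes converge on a virtual object’s apparent distance while focusing on a fixed focal plane. A minimal sketch of the conflict, using a typical 63 mm interpupillary distance and an assumed 2 m focal plane (illustrative values, not Apple’s):

```python
import math

IPD_M = 0.063          # typical adult interpupillary distance (assumption)
FOCAL_PLANE_M = 2.0    # fixed focal distance of a conventional headset (assumption)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def conflict_diopters(virtual_distance_m):
    """Accommodation-convergence conflict: the eyes converge on the virtual
    object but must focus on the fixed focal plane. 1 diopter = 1/metre."""
    return abs(1 / virtual_distance_m - 1 / FOCAL_PLANE_M)

for d in (0.5, 1.0, 4.0):
    print(f"object at {d} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"conflict {conflict_diopters(d):.2f} D")
```

The farther a virtual object sits from the focal plane, the larger the conflict in diopters; a direct retinal projector, which scans the image onto the retina rather than presenting it on a fixed-focus display, is one way to sidestep that fixed focal plane.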

Nov 5, 2021

Invasive surveillance: Are regulators ready to deal with Facebook’s ‘metaverse’?

Posted in categories: surveillance, virtual reality

With VR data they’ve got data about 100 per cent of your experience — how you saw it, where you looked. The next generation of Facebook’s VR headset is going to have eye tracking.

This is probably the most invasive surveillance technology we’re going to bring into our homes in the next decade.

Facebook’s pivot was met with plenty of scepticism, with critics saying the timing points to a cynical rebrand designed to distance the company from Facebook’s rolling scandals. Others have argued the metaverse already exists as a graveyard strewn with ideas like Google Glass smart glasses, which have failed to catch on. But with Zuckerberg pledging to invest at least $US10 billion this year on metaverse development and proposing to hire 10,000 workers across the European Union over the next five years, there is a looming question for policymakers about how this ambition can or should be regulated.
