Nvidia’s Omniverse, billed as a “metaverse for engineers,” has grown to more than 700 companies and 70,000 individual creators working on projects that build digital twins: virtual replicas of real-world environments.
Omniverse is Nvidia’s simulation and collaboration platform, built as a foundation for the metaverse, the interconnected universe of virtual worlds imagined in novels such as Snow Crash and Ready Player One. Omniverse is now moving from beta to general availability, and it has been extended to software ecosystems that put it within reach of 40 million 3D designers.
And today, during CEO Jensen Huang’s keynote at the Nvidia GTC online conference, Nvidia said it has added features such as Omniverse Replicator, which makes it easier to train deep learning neural networks, and Omniverse Avatar, which makes it simple to create virtual characters for use in Omniverse or other worlds.
‘Metaverse Seoul’ will let residents visit famous tourist attractions, attend festivals, and even file paperwork with the local council in a virtual reality city hall.
French startup Lynx launched a Kickstarter campaign in October for the Lynx R-1, a standalone MR headset capable of both VR and passthrough AR. Starting at €530 (or $500 if you’re not subject to European sales tax), the headset drew a strong response from backers, passing its initial funding goal in under 15 hours and going on to garner over $800,000 throughout the month-long campaign.
Update (November 10th, 2021): The Lynx R-1 Kickstarter is now over, having attracted €725,281 (~$835,000) from 1,216 backers. In the final hours, the campaign passed its first stretch goal at $700,000, unlocking a free facial interface pad.
NVIDIA has launched a follow-up to the Jetson AGX Xavier, the $1,100 AI brain for robots it released back in 2018. The new module, called the Jetson AGX Orin, has six times the processing power of Xavier, yet it keeps the same form factor and still fits in the palm of one’s hand. NVIDIA designed Orin as an “energy-efficient AI supercomputer” for robotics, autonomous machines, and medical devices, as well as edge AI applications that may seem impossible at the moment.
The chipmaker says Orin is capable of 200 trillion operations per second (200 TOPS). It’s built on the NVIDIA Ampere GPU architecture, features Arm Cortex-A78AE CPUs, and comes with next-generation deep learning and vision accelerators, giving it the ability to run multiple AI applications. Orin will give users access to the company’s software and tools, including NVIDIA Isaac Sim, a scalable robotics simulation application that provides photorealistic, physically accurate virtual environments where developers can test and manage their AI-powered robots. For users in the healthcare industry, there’s NVIDIA Clara for AI-powered imaging and genomics. And for autonomous vehicle developers, there’s NVIDIA Drive.
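For a rough sense of what 200 TOPS means in practice, here is a back-of-the-envelope sketch. The per-inference cost below is an assumed figure for a ResNet-50-class vision model, not a number from NVIDIA, and real throughput depends heavily on precision, memory bandwidth, and utilization:

```python
# Illustrative peak-throughput arithmetic only; not a measured benchmark.
PEAK_OPS_PER_SECOND = 200e12  # 200 trillion operations per second (200 TOPS)

# Assumed cost of one ResNet-50-class image classification: ~8 billion
# operations (roughly 4 billion multiply-accumulates). This is an
# assumption for illustration, not a figure from NVIDIA.
OPS_PER_INFERENCE = 8e9

ceiling = PEAK_OPS_PER_SECOND / OPS_PER_INFERENCE
print(f"Theoretical ceiling: ~{ceiling:,.0f} inferences per second "
      "(real-world throughput is far lower)")
```

Even as a loose upper bound (about 25,000 inferences per second here), this kind of headroom is what lets a single module juggle several concurrent perception workloads on a robot.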
The company has yet to reveal what Orin will cost, but it intends to make the Jetson AGX Orin module and developer kit available in the first quarter of 2022. Those interested can register on NVIDIA’s website to be notified about availability. The company will also discuss Orin at NVIDIA GTC, which will take place from November 8th through 11th.
Have you ever seen the popular movie The Matrix? In it, the main character, Neo, realizes that he and everyone he has ever known have been living in a computer-simulated reality. But even after taking the red pill and waking up from his virtual world, how can he be sure that this new reality is the real one? Could it also be a simulation? In fact, how can anyone tell the difference between a simulated reality and a non-simulated one? The short answer is: we cannot. Today we are looking at the simulation hypothesis, which suggests that we might all be living in a simulation designed by an advanced civilization with computing power far superior to ours.
The simulation hypothesis was popularized in 2003 by Nick Bostrom, a philosopher at the University of Oxford. He proposed that members of an advanced civilization with enormous computing power may run simulations of their ancestors, perhaps to learn about their own culture and history. If this is the case, he reasoned, they may have run many such simulations, making the vast majority of minds simulated rather than original. So there is a high chance that you and everyone you know are simulations. Not convinced? There is more.
According to Elon Musk, games from just a few decades ago, like Pong, consisted of only two rectangles and a dot. Today, games are strikingly realistic thanks to 3D modeling, and they are only improving. With virtual reality and other advances, it seems likely that in a few thousand years, if we don’t go extinct by then, we will be able to simulate every detail of our minds and bodies with near-perfect accuracy, and such games will be both indistinguishable from reality and enormously numerous. If this is the case, he argues, “then the odds that we are in base reality are 1 in billions”.
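To see how that kind of number can fall out of the argument, here is a toy calculation: a minimal sketch with purely illustrative assumptions, not Bostrom’s formal formulation. If each base-reality civilization runs vastly more simulated minds than there are original ones, the share of minds that are simulated approaches one:

```python
# Toy arithmetic behind the simulation argument.
# Every number below is an illustrative assumption, not data from the article.

def base_reality_odds(simulations: int, minds_per_sim: int, base_minds: int):
    """Return the fraction of all minds that are simulated, and the 1-in-N
    odds that a randomly chosen mind lives in base (unsimulated) reality."""
    simulated = simulations * minds_per_sim
    total = simulated + base_minds
    return simulated / total, total // base_minds

# Assume one base civilization of ~10 billion minds runs a billion
# ancestor simulations, each also containing ~10 billion minds.
fraction_simulated, one_in_n = base_reality_odds(
    simulations=1_000_000_000,
    minds_per_sim=10_000_000_000,
    base_minds=10_000_000_000,
)

print(f"Fraction of all minds that are simulated: {fraction_simulated:.9f}")
print(f"Odds a given mind is in base reality: about 1 in {one_in_n:,}")
```

Under these assumed numbers the odds of being unsimulated come out to about one in a billion, the same flavor as Musk’s claim; note that the conclusion is driven entirely by the assumed ratio of simulated to original minds.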
There are other reasons to think we might be in a simulation. For example, the more we learn about the universe, the more it appears to be governed by mathematical laws. Max Tegmark, a cosmologist at MIT, argues that our universe behaves exactly like a computer game, which is defined by mathematical rules. For him, we may simply be characters in a computer game discovering the rules of our own universe.
Apple is looking into how it might change how you view AR (augmented reality) altogether, quite literally. Instead of projecting an image onto a lens viewed by someone wearing an AR headset or glasses, Apple envisions beaming the image directly onto the user’s eyeball.
Apple recently unveiled its upcoming lineup of new products. What it did not showcase, however, was revealed in a recent patent: Apple is researching how it can change the way we see AR, and with it the future of its “Apple Glass” product, if one ever comes to exist. The patent describes how Apple intends to move away from the traditional approach of projecting an image onto a lens and instead project the image directly onto the wearer’s retina, using micro projectors.
The issue Apple is trying to avoid is the nausea and headaches some people experience while viewing AR and VR (virtual reality). The patent calls the problem “accommodation-convergence mismatch”: the eyes converge on a virtual object that appears at one distance while the lenses focus on a display at a fixed, much closer distance, and the conflict between the two cues causes eyestrain for some users. Apple hopes that its “Direct Retinal Projector” can alleviate those symptoms and make the AR and VR realm accessible to more users.
With VR, they’ve got data about 100 per cent of your experience: how you saw it, where you looked. The next generation of Facebook’s VR headset is going to have eye tracking.
This is probably the most invasive surveillance technology we’re going to bring into our homes in the next decade.
Facebook’s pivot was met with plenty of scepticism, with critics saying the timing points to a cynical rebrand designed to distance the company from Facebook’s rolling scandals. Others have argued the metaverse already exists as a graveyard strewn with ideas like Google Glass smart glasses, which have failed to catch on. But with Zuckerberg pledging to invest at least $US10 billion this year on metaverse development and proposing to hire 10,000 workers across the European Union over the next five years, there is a looming question for policymakers about how this ambition can or should be regulated.
Real, useful nanobots are on the rise, making use of the rapid miniaturization of robotics and microchips driven by companies such as TSMC, Intel, and Samsung. These nanobots could soon enable things such as full-dive virtual reality, curing diseases such as cancer, and potentially even extending human longevity to as much as 200 years. These tiny robotic computers would enter our bloodstream and cross the blood-brain barrier to read and write neural signals, similar to how brain-computer interfaces such as Neuralink work today. The future of technology is looking really exciting.
VR may soon become perceptually indistinguishable from physical reality, even superior in many practical ways, and any artificially created “imaginary” world with a logically consistent ruleset of physics would be ultrarealistic. Advanced immersive technologies incorporating quantum computing, AI, cybernetics, optogenetics, and nanotech could make this a new “livable” reality within the next few decades. Can this new immersive tech help us decipher the nature of our own “b…
It takes seven months to get to Mars in an efficiently engineered spaceship, covering a distance of about 480 million kilometers. On this journey, a crew would have to survive in a confined space with no opportunity to experience nature or interact with new people. It is easy to imagine how that much isolation could severely affect the crew’s well-being and productivity.
The challenges long-duration space travelers experience are not foreign to regular folk, although to …