
Inspirational speaker and Amazon best-selling author Sanjo Jendayi once said, “Listening doesn’t always equate to hearing. Hearing doesn’t always lead to understanding, but active listening helps each person truly ‘see’ the other.”

Jendayi was offering philosophical advice during a motivational speech, and technology was likely the last thing on her mind. But her words may in fact best describe the thinking behind groundbreaking advances by the top scientists, programmers and designers of the Facebook Reality Labs Research (FRLR) team.

A post on the FRLR website last week provided a peek at where the social media giant is heading in the world of augmented reality and virtual reality.

Amanda Christensen, ideaXme guest contributor, fake news and deepfake researcher and Marketing Manager at Cubaka, interviews Dan Mapes, PhD, MBA, co-founder of VERSES.io and co-author of The Spatial Web: How Web 3.0 Will Connect Humans, Machines, and AI to Transform the World.

Amanda Christensen Comments:

We’ve come a long way since the invention of the internet, and even further since the invention of the first computer; together they have undeniably made everyday life far easier. We have never had access to more information at our fingertips, or been more connected, than we are now.

However, the exponential advancement of the internet has brought with it a whole host of problems, such as the rampant spread of fake news, deepfakes, significant data breaches, and hacking, to name a few.

Researchers have fashioned ultrathin silicon nanoantennas that trap and redirect light, for applications in quantum computing, LIDAR and even the detection of viruses.

Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting with and exciting atoms and molecules become very small. If scientists could put the brakes on light particles, or photons, it would open the door to a host of new technology applications.

Now, in a paper published on August 17, 2020, in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
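
For a rough sense of what a high quality factor means in practice, the photon lifetime of a resonator scales as tau ≈ Q/ω, where ω is the angular frequency of the trapped light. The short sketch below works through that arithmetic with a Q value and wavelength chosen purely for illustration; neither number is taken from the paper.

```python
import math

# Back-of-the-envelope estimate of how long a high-Q resonator holds light:
# tau = Q / omega, with omega = 2*pi*c / wavelength.
# Q and wavelength are assumed example values, not figures from the study.

c = 3.0e8             # speed of light in vacuum, m/s
wavelength = 1.55e-6  # assumed near-infrared wavelength, m
Q = 2500              # assumed quality factor, dimensionless

omega = 2 * math.pi * c / wavelength  # angular frequency, rad/s
tau = Q / omega                       # photon storage time, s

print(f"Photon lifetime: {tau * 1e12:.2f} ps")
print(f"Vacuum distance light covers in that time: {c * tau * 1e3:.2f} mm")
```

A few picoseconds sounds brief, but in that time freely propagating light would cover a fraction of a millimetre, vastly farther than the nanoscale bars confining it, which is the sense in which such resonators "slow" light.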

“We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent—as is the case with many silicon-based applications.”

“It’s not just about the smell,” said Adrian Cheok, one of the scientists behind the experiments. “It is part of a whole, integrated virtual reality or augmented reality. So, for example, you could have a virtual dinner with your friend through the internet. You can see them in 3D and also share a glass of wine together.”

In real life, odors are transmitted when airborne molecules waft into the nose, prompting specialized nerve cells in the upper airway to fire off impulses to the brain. In the recent experiments, performed on 31 test subjects at the Imagineering Institute in the Malaysian city of Nusajaya, researchers used electrodes inserted into the nostrils to deliver weak electrical currents above and behind them, where these neurons are found.

The researchers were able to evoke 10 different virtual odors, including fruity, woody and minty.

People, bicycles and cars, or road, sky and grass: which pixels of an image represent distinct foreground persons or objects in front of a self-driving car, and which pixels represent background classes?

This task, known as panoptic segmentation, is a fundamental problem that has applications in numerous fields such as self-driving cars, robotics, augmented reality and even in biomedical image analysis.

At the Department of Computer Science at the University of Freiburg, Dr. Abhinav Valada, Assistant Professor for Robot Learning and member of BrainLinks-BrainTools, focuses on this research question. Valada and his team have developed the state-of-the-art “EfficientPS” artificial intelligence (AI) model, which enables coherent recognition of visual scenes more quickly and effectively.
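
To make the task concrete, the toy sketch below shows the kind of output panoptic segmentation produces: every pixel carries a semantic class, and pixels of countable “thing” classes additionally carry an instance ID. The label scheme, array values and integer encoding here are invented for illustration and are not the EfficientPS interface.

```python
import numpy as np

# Toy panoptic segmentation output (illustrative only, not the EfficientPS API).
# Each pixel gets a semantic class; "thing" classes also get an instance ID
# so that, e.g., two different cars are kept apart.

STUFF = {0: "sky", 1: "road"}      # background ("stuff") classes
THINGS = {2: "person", 3: "car"}   # foreground ("thing") classes

# Assumed per-pixel predictions from semantic and instance branches.
semantic = np.array([
    [0, 0, 0, 0, 0, 0],
    [2, 2, 0, 3, 3, 0],
    [1, 1, 1, 3, 3, 1],
    [1, 1, 1, 1, 1, 1],
])
instance = np.array([  # 0 = no instance (stuff), 1..N = object IDs
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 2, 2, 0],
    [0, 0, 0, 2, 2, 0],
    [0, 0, 0, 0, 0, 0],
])

# Fuse both maps into a single panoptic map: class_id * 1000 + instance_id.
panoptic = semantic * 1000 + instance

for label in np.unique(panoptic):
    cls, inst = divmod(int(label), 1000)
    name = {**STUFF, **THINGS}[cls]
    kind = f"instance {inst}" if cls in THINGS else "background"
    print(f"segment {label}: {name} ({kind})")
```

Packing class and instance into one integer per pixel is only one possible encoding, but it captures the essence of the task: a single map that answers both “what” and “which one” for every pixel.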