In Korea, scientists are seeking better ways to improve our screens, and that means 3D printing something most of us know little about: quantum dots. Aiming to refine virtual reality and other electronic displays even further, researchers from the Nano Hybrid Technology Research Center of the Korea Electrotechnology Research Institute (KERI), a government-funded research institute under the National Research Council of Science & Technology (NST) of the Ministry of Science and ICT (MSIT), have created nanophotonic 3D printing technology for screens. Intended for virtual reality as well as TVs, smartphones, and wearables, the technology achieves high resolution through a 3D layout that increases the density and quality of the pixels.

Led by Dr. Jaeyeon Pyo and Dr. Seung Kwon Seol, the team has published the results of their research and development in “3D-Printed Quantum Dot Nanopixels.” Pixels represent data in many electronic displays, but conventionally they are created with 2D patterning. To overcome the resulting limitations in brightness and resolution, the scientists took this constrained technology to the next level by 3D printing quantum dots contained within polymer nanowires.
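As a back-of-the-envelope illustration of why a 3D pixel layout can raise density, the sketch below compares conventional 2D patterning with vertically stacked nanopixel layers. The pixel pitch and layer count are assumed numbers for illustration, not values reported by the KERI team.

```python
# Toy comparison of areal pixel density: conventional 2D patterning versus a
# 3D stack of nanopixel layers. Pitch and layer count are illustrative
# assumptions, not figures from the KERI paper.

pixel_pitch_um = 5.0   # assumed center-to-center pixel spacing, micrometers
layers_3d = 4          # assumed number of vertically stacked nanopixel layers

pixels_per_mm2_2d = (1000.0 / pixel_pitch_um) ** 2   # pixels per square mm
pixels_per_mm2_3d = pixels_per_mm2_2d * layers_3d    # density gain from stacking

print(f"2D patterning: {pixels_per_mm2_2d:,.0f} pixels/mm^2")
print(f"3D stack of {layers_3d} layers: {pixels_per_mm2_3d:,.0f} pixels/mm^2")
```

The point of the toy numbers: density on a 2D panel is capped by how tightly pixels can be packed in the plane, while a third dimension multiplies that ceiling by the number of layers.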

In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.

In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three additional milestones: two new algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so agents can be trained to hear.
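To make the map-building milestone concrete, here is a minimal occupancy-grid sketch of the general idea: an agent marks the cells it visits as free and remembers observed obstacles for later navigation. It illustrates the concept only and uses none of FAIR's actual algorithms or the AI Habitat API.

```python
import numpy as np

# Minimal occupancy-grid sketch of map-building: -1 = unknown, 0 = free,
# 1 = obstacle. The environment, path, and obstacle are invented examples.

GRID = 10
occupancy = np.full((GRID, GRID), -1, dtype=int)

def observe(pos, obstacles):
    """Mark the agent's current cell as free and record any seen obstacles."""
    occupancy[pos] = 0
    for obs in obstacles:
        occupancy[obs] = 1

# Simulated trajectory through a small environment with one visible obstacle.
path = [(0, 0), (0, 1), (1, 1), (2, 1)]
for step in path:
    observe(step, obstacles=[(2, 2)])

print(f"Explored cells: {(occupancy == 0).sum()}, "
      f"known obstacles: {(occupancy == 1).sum()}")
```

The persistent grid is what lets the agent reuse earlier exploration instead of re-discovering the space on every traversal, which is the "remember a map" part of the milestone.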

Researchers have fashioned ultrathin silicon nanoantennas that trap and redirect light, for applications in quantum computing, LIDAR and even the detection of viruses.

Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting with and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.

Now, in a paper published on August 17, 2020, in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.

“We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent—as is the case with many silicon-based applications.”
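To make “trapping light” quantitative: a resonator's quality factor Q sets roughly how long a photon stays stored, via the photon lifetime τ = Q/ω, where ω is the angular frequency. The short sketch below evaluates this relation; the Q value and wavelength are assumed for illustration rather than taken from the paper.

```python
import math

# Photon storage time of a resonator: tau = Q / omega, with omega the angular
# frequency of the trapped light. Q and wavelength below are assumed example
# values, not figures quoted from the Nature Nanotechnology paper.

c = 299_792_458.0        # speed of light, m/s
wavelength = 1.55e-6     # assumed near-infrared wavelength, m
Q = 2500                 # assumed quality factor

omega = 2 * math.pi * c / wavelength   # angular frequency, rad/s
tau = Q / omega                        # photon lifetime, s

print(f"Photon lifetime: {tau * 1e12:.2f} ps "
      f"(~{tau * c * 1e3:.1f} mm of free-space travel)")
```

Even a lifetime of a few picoseconds is long by photonic standards: it corresponds to thousands of optical cycles during which the stored light can interact with nearby matter.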

Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only determine what users are saying and produce adequate responses, but also mimic humans in the way they speak.

Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual agents and robots communicate with humans by generating gestures to accompany their speech. Their paper, pre-published on arXiv and set to be presented at the European Conference on Computer Vision (ECCV) 2020, introduces Mix-StAGE, a new model that can produce different styles of co-speech gestures that best match the voice of a speaker and what he/she is saying.

“Imagine a situation where you are communicating with a friend in a virtual world through a headset,” Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. “The headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the hand gestures accompanying the speech.”
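As a rough sketch of how a speech-to-gesture model along these lines could be wired up, the snippet below encodes audio features, mixes in a learned per-speaker style embedding, and decodes a pose frame per time step. Every layer choice, dimension, and name here is invented for illustration; this is not the actual Mix-StAGE architecture.

```python
import torch
import torch.nn as nn

# Simplified speech-to-gesture sketch: encode audio, condition on a learned
# per-speaker style embedding, decode per-frame poses. All sizes are invented.

class SpeechToGesture(nn.Module):
    def __init__(self, audio_dim=26, style_count=4, style_dim=16, pose_dim=42):
        super().__init__()
        self.audio_encoder = nn.GRU(audio_dim, 64, batch_first=True)
        self.style = nn.Embedding(style_count, style_dim)   # one style per speaker
        self.decoder = nn.Linear(64 + style_dim, pose_dim)  # per-frame pose output

    def forward(self, audio, speaker_id):
        hidden, _ = self.audio_encoder(audio)               # (B, T, 64)
        style = self.style(speaker_id)                      # (B, style_dim)
        style = style.unsqueeze(1).expand(-1, audio.size(1), -1)
        return self.decoder(torch.cat([hidden, style], dim=-1))

model = SpeechToGesture()
audio = torch.randn(2, 100, 26)          # 2 clips, 100 frames of audio features
poses = model(audio, torch.tensor([0, 1]))
print(poses.shape)                        # torch.Size([2, 100, 42])
```

Loosely speaking, swapping or interpolating the style embeddings is what would let one model render the same speech in different speakers' gesture styles, which is the kind of style control the CMU paper targets.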

The Universe, and any phenomenon or entity contained therein, is not objectively real but subjectively real. Patterns of information emerging from the ultimate code are more fundamental than particles of matter or the space-time continuum itself, all of which sit levels below the Code. Nature behaves quantum code-theoretically at all levels: it's hierarchies of quantum networks all the way down and all the way up. Being part of hierarchical quantum neural networks, a conscious observer system possesses a strange quality: collapsing quantum states of entangled conscious entities and holding a privileged interpretation of that collapse. From this perspective, entangled conscious agents would form a mirroring conscious environment, whereas the quantum observer would be a central node of the entangled network.


“If we accept that the material universe as we know it is not a mechanical system but a virtual reality created by Absolute Consciousness through an infinitely complex orchestration of experiences, what are the practical consequences of this insight?” –Stanislav Grof

Just like absolute idealism, solipsism certainly defies our common sense, but the deeper layer of truth is not what first meets the eye. Here's what Richard Conn Henry and Stephen Palmquist write in their paper “An Experimental Test of Non-local Realism” (2007): “Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the illusion of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism.” One can extend their line of reasoning by arriving at pantheistic solipsism as a likely revelation to ponder.

Our minds operate in the domains of subjectivity, intersubjectivity and supersubjectivity. In the domain of intersubjectivity, minds create a reality by sharing “mindspace,” i.e. shared belief systems and ways of communication; minds then inhabit the reality which they have created. At the level of your individual mind, i.e. local consciousness, you play a multi-level virtual reality game of life, but we all invariably converge at the Omega Singularity by forging our own discrete pathways to the ultimate divine. As you read this, you are in your own subjective reality tunnel leading to the Source and back to where you are now, all of which is definable as a parallel evolutionary feedback process within non-local holistic consciousness patterning this virtual multiverse.

Color is a foundational aspect of visual experience that aids in segmenting objects, identifying food sources, and signaling emotions. Intuitively, it feels as though we are immersed in a colorful world that extends to the farthest limits of our periphery. How accurate is this intuition? Here, we used gaze-contingent rendering in immersive VR to reveal the limits of color awareness during naturalistic viewing. Observers explored 360° real-world environments, which we altered so that only the regions where observers looked were in color, while their periphery was black-and-white. Overall, we found that observers routinely failed to notice when color vanished from the majority of their visual world. These results show that our intuitive sense of a rich, colorful world is largely incorrect.

Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color.
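The study's central manipulation, gaze-contingent desaturation, is simple to sketch: keep color inside a window around the tracked gaze point and render everything else in grayscale. The sketch below illustrates the idea with a synthetic frame; the gaze coordinates, window radius, and luminance weights are illustrative stand-ins for the real in-headset pipeline.

```python
import numpy as np

# Gaze-contingent desaturation sketch: color survives only within a circular
# window around the gaze point; the periphery is reduced to luminance.
# Frame contents, gaze location, and radius are invented example values.

def desaturate_periphery(frame, gaze_xy, radius_px):
    """frame: (H, W, 3) float RGB in [0, 1]; gaze_xy: (x, y) in pixels."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    gray = frame @ np.array([0.299, 0.587, 0.114])   # per-pixel luminance
    out = np.repeat(gray[..., None], 3, axis=-1)     # grayscale everywhere
    mask = dist <= radius_px
    out[mask] = frame[mask]                          # restore color at the gaze
    return out

frame = np.random.rand(480, 640, 3)   # stand-in for one rendered VR frame
result = desaturate_periphery(frame, gaze_xy=(320, 240), radius_px=80)
print(result.shape)                    # (480, 640, 3)
```

Because the colored window travels with the eyes, every fixation lands on a colorful region, which is presumably why observers so rarely noticed that most of the display was gray.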

UC Santa Barbara researchers continue to push the boundaries of LED design with a new method that could pave the way toward more efficient and versatile LED display and lighting technology.

In a paper published in Nature Photonics, UCSB electrical and computer engineering professor Jonathan Schuller and collaborators describe this new approach, which could allow a wide variety of LED devices, from displays to automotive lighting, to become more sophisticated and sleeker at the same time.

“What we showed is a new kind of photonic architecture that not only allows you to extract more photons, but also to direct them where you want,” said Schuller. This improved performance, he explained, is achieved without the external packaging components that are often used to manipulate the light emitted by LEDs.
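For context on why “extracting more photons” is hard in the first place, the standard escape-cone estimate below shows how total internal reflection traps most light inside a high-index semiconductor. This is textbook LED physics offered as background, not the photonic architecture described in the Nature Photonics paper.

```python
import math

# Escape-cone estimate: light hitting the semiconductor-air interface beyond
# the critical angle is totally internally reflected, so only a small cone of
# an isotropic emitter's photons escapes through the top surface.

n_gan = 2.4                                    # typical refractive index of GaN
theta_c = math.asin(1.0 / n_gan)               # critical angle into air
escape_fraction = (1 - math.cos(theta_c)) / 2  # isotropic emitter, one surface

print(f"Critical angle: {math.degrees(theta_c):.1f} deg")
print(f"Escape-cone fraction: {escape_fraction:.1%}")
```

With a typical nitride index, only a few percent of photons exit through a bare flat surface, which is why architectures that redirect photons toward the escape cone can pay off so handsomely.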