Can virtual reality become indistinguishable from actual reality? Swave Photonics, a spinoff of Imec and Vrije Universiteit Brussel, has designed holographic chips built on proprietary diffractive optics technology to “bring the metaverse to life.” The Leuven, Belgium–based startup has raised €7 million in seed funding to accelerate development of its multi-patented Holographic eXtended Reality (HXR) technology.
“Our vision is to empower people to visualize the impossible, collaborate, and accomplish more,” Théodore Marescaux, CEO and founder of Swave Photonics, told EE Times Europe. “With our HXR technology, we want to make that extended reality practically indistinguishable from the real world.”
What does it mean to project images that are indistinguishable from reality? “It means a very wide field of view, colors, high dynamic range, the ability to move your head around an object and see it from different angles, and the ability to focus,” he said.
Wearable displacement sensors, which attach to the human body, detect movements in real time, and convert them into electrical signals, are being actively studied. However, stretchable displacement sensors still face significant limitations, such as low stretchability and complex manufacturing processes.
A displacement sensor that combined easy manufacturing with high sensitivity and high stretchability could be attached to the human body, allowing large movements of joints or fingers to be used in applications such as AR and VR. A research team led by Sung-Hoon Ahn, mechanical engineering professor at Seoul National University, has developed a piezoelectric strain sensor with high sensitivity and high stretchability based on a kirigami cutting design.
In this research, the team fabricated a stretchable piezoelectric displacement sensor by applying a kirigami structure to a film-type piezoelectric material and evaluated its performance. The sensor exhibited different sensing characteristics depending on the kirigami pattern and achieved higher sensitivity and stretchability than existing technologies. Using the sensor, the team built wireless VR haptic gloves and successfully played a piano with them.
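The paper itself publishes no code, but the basic signal chain behind such a sensor can be sketched: a piezoelectric film outputs a voltage roughly proportional to strain rate, so integrating that voltage over time and applying a calibration constant yields an estimate of displacement. The following Python sketch illustrates the idea only; the sample rate and gain are made-up assumptions, not values from the study:

```python
# Hedged sketch: estimating displacement from a piezoelectric strain
# sensor's voltage output. All constants are illustrative assumptions.

SAMPLE_RATE_HZ = 1000.0   # assumed ADC sample rate
GAIN_MM_PER_VS = 2.5      # assumed calibration: mm of stretch per volt-second

def displacement_from_voltage(samples_v):
    """Integrate the piezo voltage over time (trapezoidal rule) and
    scale by a calibration constant to estimate displacement in mm."""
    dt = 1.0 / SAMPLE_RATE_HZ
    integral = 0.0
    trace = []
    prev = samples_v[0]
    for v in samples_v[1:]:
        integral += 0.5 * (prev + v) * dt   # volt-seconds so far
        trace.append(GAIN_MM_PER_VS * integral)
        prev = v
    return trace

# Example: a steady 1 V output for 100 ms reads as continuous stretching.
readings = [1.0] * 101
print(round(displacement_from_voltage(readings)[-1], 3))  # 0.25 (mm)
```

In a real glove, one such channel per finger joint would feed the estimated displacements into the VR runtime as joint angles.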
Here is a brief guide to the new virtual world known as the Metaverse, announced by Mark Zuckerberg to transform big tech companies with 3D modeling and VR environments.
The metaverse embraces digital assets such as NFTs and integrates VR into its platform to provide an immersive experience, and it is set to transform industries across the global tech market.
Buoyed by research that says inhabiting someone else’s body can change how you perceive your own, researchers have started to investigate the vast, erotic potential of virtual sex.
Despite the fact that sex is a basic instinct and a near-universal experience, we know remarkably little about it. And so, this week, we’re teaming up with our friends at Futurism, oracles of all things science, technology and medicine, to look at the past, present and future of pleasure from a completely scientific perspective.
Sexual psychologist Cathline Smoos spent most of last year having freaky, virtual sex with her two then-boyfriends. Usually, they fooled around together in the trippy, immersive game VRChat, but one day, they decided to explore what it would be like to have sex as objects instead. In their respective bedrooms on different corners of the globe, they strapped on their VR headsets, and embodied two non-human avatars; Smoos chose a chest of drawers, her partner a TV.
Meta’s AI translation work could provide a killer app for AR.
Social media conglomerate Meta has created a single AI model capable of translating across 200 different languages, including many not supported by current commercial tools. The company is open-sourcing the project in the hopes that others will build on its work.
The AI model is part of an ambitious R&D project by Meta to create a so-called “universal speech translator,” which the company sees as important for growth across its many platforms — from Facebook and Instagram, to developing domains like VR and AR. Machine translation not only allows Meta to better understand its users (and so improve the advertising systems that generate 97 percent of its revenue) but could also be the foundation of a killer app for future projects like its augmented reality glasses.
Meet Daniela de Paulis, the newest member of the SETI Institute’s Artist in Residence program. Daniela is a media artist who is also a licensed radio operator and radio telescope operator at the Dwingeloo radio telescope in the Netherlands. Fusing radio technologies, neuroscience, and space research, Daniela creates pioneering art-science projects that include live performances, virtual reality (VR), electroencephalograms (EEG), and audience participation. During this SETI Live chat with SETI AIR Director Bettina Forget, Daniela will discuss her recent works COGITO in Space and OPTICKS, and give us a sneak peek of the project she has planned to complete during her time at the AIR program.
GridRaster describes its platform as “a unified and shared software infrastructure to empower enterprise customers to build and run scalable, high-quality eXtended Reality (XR) – Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) – applications in public, private, and hybrid clouds.”
What does that all mean?
Simply put, GridRaster creates spatial, high-fidelity maps of three-dimensional physical objects. So if you are building an automobile or aircraft, you can use the software to capture an image and create a detailed mesh-model overlay that can be viewed through a VR headset. The mesh model can also be shared with robots and other devices.
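GridRaster's product is proprietary, but the kind of mesh model the description refers to is a standard structure: a shared list of 3D vertices plus faces that index into it, so geometry can be stored once and referenced many times. A minimal, hypothetical sketch in Python (the names and layout are illustrative, not GridRaster's API):

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    """A minimal triangle mesh: shared vertex list, faces as index triples."""
    vertices: list = field(default_factory=list)  # [(x, y, z), ...]
    faces: list = field(default_factory=list)     # [(i, j, k), ...]

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1  # index usable when defining faces

    def add_face(self, i, j, k):
        self.faces.append((i, j, k))

# One quad of an overlay surface, split into two triangles
# that share two vertices instead of duplicating them.
m = Mesh()
a = m.add_vertex(0, 0, 0)
b = m.add_vertex(1, 0, 0)
c = m.add_vertex(1, 1, 0)
d = m.add_vertex(0, 1, 0)
m.add_face(a, b, c)
m.add_face(a, c, d)
print(len(m.vertices), len(m.faces))  # 4 2
```

Index-based sharing is what makes such models cheap to transmit to headsets, robots, and other devices: each face is three integers rather than nine coordinates.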
Today we dive into a video that has been on my mind and in the works for a LONG time. For the first time, I put together all of the world's craziest VR hardware that anyone can purchase and recreated the setup found in the Ready Player One book and movie.
- Super-high-resolution VR headset: Varjo XR-3
- Omnidirectional treadmill: KAT Walk C
- bHaptics full-body haptics
- VRFree gloves (stopped working really early on)
- Celia scent module
- Full-body tracking: Tundra Trackers
- And more
All I have to say is… I don’t think we’re quite ready for this.
Unfortunately, my internet link went down during the second Q&A session at the end, and the recording cut off. A shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++, and a whole load of other riveting and most interesting techie stuff. But thankfully, the main part of the talk was recorded.
TALK OVERVIEW
This talk is about the realization, in the form of useful technology, of the ideas behind the Fractal Brain theory and the unifying theory of life and intelligence discussed in the last Zoom talk. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.
We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML that overcomes most of the central limitations of, and problems associated with, existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work, unlike existing artificial neural networks, which, though making sensational headlines, are impeded by severe limitations and rest on an out-of-date understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.
In the last Zoom talk, we discussed a grand unification of scientific ideas relating to life and brain/mind science through the application of the mathematical idea of symmetry. In turn, the same symmetry approach leads to a unification of a mass of ideas relating to computer and information science. There has been talk in recent years of a “master algorithm” of machine learning and AI. We will explain that it goes far deeper than that, and show how the most important fundamental algorithms in use in the world today, relating to data compression, databases, search engines, and existing AI/ML, can be unified into a single algorithm. Furthermore, and importantly, this algorithm is completely fractal, or scale-invariant: the same algorithm that performs all these functions can run on anything from a microcontroller unit (MCU), mobile phone, laptop, or workstation, right up to a supercomputer.
The applications and utility of this new technology are endless. We will discuss the road map by which the theoretical ideas I have been presenting in the Zoom, academic, and public talks over the past few years, and which I have written about in the Fractal Brain Theory book, will become practical technology, and how the Java/C/C++ code running on my workstation and mobile phones will become products and services.