
Neal Stephenson Named the Metaverse. Now, He’s Building It

Since I won’t be posting on Facebook much in the future, I will leave you with this post, and hope to see you there, as with Twitter.

Neal Stephenson invented the metaverse. At least from an imagination standpoint. Though other science fiction writers had similar ideas—and the pioneers of VR were already building artificial worlds—Stephenson’s 1992 novel Snow Crash not only fleshed out the vision of escaping to a place where digital displaced the physical, it also gave it a name. That book cemented him as a major writer, and since then he’s had huge success.


Plus: Depicting the nerd mindset; the best lettuce; and the future is flooding.

WeTac: A small, soft and ultrathin wireless electrotactile system

Virtual reality (VR) and augmented reality (AR) headsets are becoming increasingly advanced, enabling increasingly engaging and immersive digital experiences. To make VR and AR experiences even more realistic, engineers have been trying to create better systems that produce tactile and haptic feedback matching virtual content.

Researchers at the University of Hong Kong, City University of Hong Kong, the University of Electronic Science and Technology of China (UESTC) and other institutes in China have recently created WeTac, a miniaturized, soft and ultrathin wireless electrotactile system that produces tactile feedback on a user’s skin. This system, introduced in Nature Machine Intelligence, works by delivering electrical stimulation through a user’s skin.

“As the tactile sensitivity among individuals and different parts of the hand within a person varies widely, a universal method to encode tactile information into faithful feedback in hands according to sensitivity features is urgently needed,” Kuanming Yao and his colleagues wrote in their paper. “In addition, existing haptic interfaces worn on the hand are usually bulky, rigid and tethered by cables, which is a hurdle for accurately and naturally providing haptic feedback.”
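The encoding idea the authors describe, scaling stimulation to each region's sensitivity, can be sketched in a few lines. The region names and threshold values below are illustrative assumptions, not measurements from the paper; WeTac calibrates real per-user thresholds.

```python
# Hypothetical sensation thresholds (mA) for different hand regions.
# Illustrative values only; the real system measures these per user.
THRESHOLDS_MA = {
    "fingertip": 1.2,    # most sensitive
    "palm_center": 2.1,
    "palm_edge": 2.9,    # least sensitive
}

def encode_current(region: str, intensity: float) -> float:
    """Scale a normalized intensity (0..1) to a stimulation current
    relative to the region's sensation threshold, so the same virtual
    touch feels equally strong on more and less sensitive skin."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    threshold = THRESHOLDS_MA[region]
    # Deliver between 1x (just perceptible) and 2x threshold.
    return threshold * (1.0 + intensity)
```

The point of the per-region table is that a uniform current would feel painful on a fingertip and imperceptible on the palm edge; normalizing by threshold keeps the perceived intensity consistent.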

LIVE — BECOMING: AN INTERACTIVE JOURNEY IN VR

Gallery QI — Becoming: An Interactive Music Journey in VR — Opening Night.
November 3rd, 2022 — Atkinson Hall auditorium.
UC San Diego — La Jolla, CA

By Shahrokh Yadegari, John Burnett, Eito Murakami and Louis Pisha.

“Becoming” is the result of a collaborative work that was initiated at the Opera Hack organized by San Diego Opera. It is an operatic VR experience based on a Persian poem by Mowlana Rumi depicting the evolution of human spirit. The audience experiences visual, auditory and tactile impressions which are partly curated and partly generated interactively in response to the player’s actions.

“Becoming” incorporates fluid and reactive graphical material which embodies the process of transformation depicted in the Rumi poem. Worlds seamlessly morph between organic and synthetic environments such as oceans, mountains and cities and are populated by continuously evolving life forms. The music is a union of classical Persian music fused with electronic music where the human voice becomes the beacon of spirit across the different stages of the evolution. The various worlds are constructed by the real-time manipulation of particle systems, flocking algorithms and terrain generation methods—all of which can be touched and influenced by the viewer. Audience members can be connected through the network and haptic feedback technology provides human interaction cues as well as an experiential stimulus.
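The flocking algorithms mentioned above are typically boids-style rules (cohesion, alignment, separation). A minimal 2D sketch of the first two rules, as a rough illustration of the technique rather than the piece's actual GPU implementation (separation is omitted for brevity):

```python
import math
import random

class Boid:
    """A single agent with a position and a unit-speed heading."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        angle = random.uniform(0, 2 * math.pi)
        self.vx, self.vy = math.cos(angle), math.sin(angle)

def step(boids, radius=5.0, weight=0.05):
    """One update: each boid steers toward the average position
    (cohesion) and the average heading (alignment) of its neighbours."""
    for b in boids:
        near = [o for o in boids
                if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not near:
            continue
        cx = sum(o.x for o in near) / len(near)   # flock centre
        cy = sum(o.y for o in near) / len(near)
        avx = sum(o.vx for o in near) / len(near)  # average heading
        avy = sum(o.vy for o in near) / len(near)
        b.vx += weight * ((cx - b.x) + (avx - b.vx))
        b.vy += weight * ((cy - b.y) + (avy - b.vy))
    for b in boids:
        b.x += b.vx
        b.y += b.vy
```

Run over many frames, even these two rules produce the coherent, continuously shifting group motion that reads as "evolving life forms" in a scene.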

While the piece is a major artistic endeavor, it also showcases a number of key technologies and streaming techniques for the development of musical content in XR. The spatialization system Space3D, developed at Sonic Arts and implemented to run on advanced GPUs, is capable of creating highly realistic spatial impressions, and recreating the acoustics of the environment based on the virtual models in real-time using advanced multi-processing ray-tracing techniques.

“Becoming” premiered at SIGGRAPH 2022, Immersive Pavilion in Vancouver, Canada in August 2022. Opera America and San Diego Opera showcased an early preview of this work as part of the first Opera Hack presentations in 2020.

‘Humans Will Live In Metaverse Soon’, Claims Mark Zuckerberg. What About Reality?

The article notes that Mark Zuckerberg believes people will migrate to the Metaverse and leave reality behind.


Here’s my conversation with Mark Zuckerberg, including a few opening words from me on Ukraine, Putin, and war. We talk about censorship, freedom, mental health, Social Dilemma, Instagram whistleblower, mortality, meaning & the future of the Metaverse & AI. https://www.youtube.com/watch?v=5zOHSysMmH0 — Lex Fridman (@lexfridman) February 26, 2022

SEE ALSO: Meta Shuts Down Its Students-only College Social Network ‘Campus’

Reports suggest that Meta intends to spend the next five to ten years creating an immersive virtual environment that includes fragrance, touch, and sound, allowing users to lose themselves in virtual reality. In a recent update on future plans, the company’s founder unveiled an AI system that can construct fully personalized worlds to your own design, and he is working on higher-fidelity avatars and spatial audio to make communication simpler.

Latest Computer Vision Research From Cornell and Adobe Proposes An Artificial Intelligence (AI) Method To Transfer The Artistic Features Of An Arbitrary Style Image To A 3D Scene

Art is a fascinating yet extremely complex discipline. Indeed, the creation of artistic images is often not only a time-consuming problem but also requires a significant amount of expertise. If this problem holds for 2D artworks, imagine extending it to dimensions beyond the image plane, such as time (in animated content) or 3D space (with sculptures or virtual environments). This introduces new constraints and challenges, which are addressed by this paper.

Previous work on 2D stylization treats video content frame by frame. The individual frames achieve high-quality stylization, but the generated video often shows flickering artifacts, because the produced frames lack temporal coherence. Moreover, these methods do not consider the 3D environment, which would increase the complexity of the task. Other works that do focus on 3D stylization suffer from geometrically inaccurate reconstructions of point clouds or triangle meshes and from a lack of style detail. The reason lies in the different geometrical properties of the starting mesh and the produced mesh, as the style is applied after a linear transformation.
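The flickering problem above can be made concrete with a toy metric: if consecutive stylized frames change where the input frames did not, that change is perceived as flicker. The sketch below treats frames as nested lists of grayscale values; real methods instead use optical-flow warping to compare moving pixels, so this is only an illustrative simplification.

```python
def flicker_score(stylized_prev, stylized_next,
                  input_prev, input_next, tol=1e-3):
    """Mean change between consecutive stylized frames, measured only
    over pixels where the input frames are (nearly) static.
    A high score means visible temporal flicker."""
    total, count = 0.0, 0
    for sp_row, sn_row, ip_row, in_row in zip(stylized_prev, stylized_next,
                                              input_prev, input_next):
        for sp, sn, ip, inp in zip(sp_row, sn_row, ip_row, in_row):
            if abs(ip - inp) < tol:      # input pixel did not move...
                total += abs(sp - sn)    # ...so any stylized change is flicker
                count += 1
    return total / count if count else 0.0
```

Temporally consistent stylization methods effectively minimize a loss of this shape during training, which is what frame-by-frame approaches omit.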

The proposed method, termed Artistic Radiance Fields (ARF), can transfer the artistic features from a single 2D image to a real-world 3D scene, leading to artistic novel view renderings that are faithful to the input style image (Fig. 1).

Meta what?

When Eileen Brown looked at the ETER9 project in 2015 (crazy to many, visionary to a few) and wrote an interesting article for ZDNET titled “New social network ETER9 brings AI to your interactions”, it gave worldwide exposure to something the world was not expecting.

Someone in a lost corner of the world (outside the United States) was risking everything he had in his possession (very little, or less than nothing) on a vision worthy of the American dream. At that time, Facebook was already beginning to annoy the clearer minds who were looking for something different and a more innovative world.

Today, after that testing ground, we see that Facebook (Meta, or whatever) is nothing but an illusion or, I dare say, a big disappointment. No, no, no! I am not bad-mouthing Facebook just because I have a project in hand that is seen as a potential competitor.

I was even a big fan of the “original” Facebook; but then I realized, though it took me a few years, that Mark Zuckerberg is nothing more than a simple kid, now a man, who against everything and everyone gave in to whims: his own at first, and now, perforce, those of his big investors, whom he deluded about what his “metaverse” would be.

A high-resolution, wearable electrotactile rendering device that virtualizes the sense of touch

A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate.

The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.

“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr. Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study.

Watch Google’s Ping-Pong robot pull off a 340-hit rally

As if it weren’t enough to have AI tanning humanity’s hide (figuratively for now) at every board game in existence, Google AI has got one working to destroy us all at Ping-Pong as well. For now they emphasize it is “cooperative,” but at the rate these things improve, it will be taking on pros in no time.

The project, called i-Sim2Real, isn’t just about Ping-Pong but rather about building a robotic system that can work with and around fast-paced and relatively unpredictable human behavior. Ping-Pong, AKA table tennis, has the advantage of being pretty tightly constrained (as opposed to playing basketball or cricket) and a balance of complexity and simplicity.

“Sim2Real” is a way of describing an AI creation process in which a machine learning model is taught what to do in a virtual environment or simulation, then applies that knowledge in the real world. It’s necessary when it could take years of trial and error to arrive at a working model — doing it in a sim allows years of real-time training to happen in a few minutes or hours.
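The alternating structure i-Sim2Real adds to this, training in simulation, then collecting real human-interaction data to refine the simulator, can be sketched abstractly. Everything below is a schematic assumption about the loop's shape, not Google's actual training code:

```python
def i_sim2real_loop(simulator, policy, collect_real, n_iterations=3):
    """Alternate: (1) train the policy in cheap simulated rollouts,
    (2) deploy it to gather real human-interaction episodes,
    (3) refit the simulator's human-behavior model on that data so the
    next round of simulated training is more realistic."""
    real_data = []
    for _ in range(n_iterations):
        policy = simulator.train(policy)         # fast, years of sim time
        episodes = collect_real(policy)          # slow, real-world play
        real_data.extend(episodes)
        simulator.update_human_model(real_data)  # shrink the sim/real gap
    return policy
```

The design choice this captures is that neither side alone suffices: pure simulation never sees real human unpredictability, while pure real-world training is far too slow, so each round of real data makes the next round of cheap simulated training more faithful.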

Apple’s mixed-reality headset: Here’s what you need to know

Sleek, light, high-performance, and, like any other Apple device, not easy on the pocket.

Earlier this week, Meta rolled out its Quest Pro Virtual Reality (VR) headset, priced at $1,499. Many questioned the need for a high-end VR headset when the company’s Quest 2 headset appears to be doing rather well; Mark Zuckerberg addressed that question in his conversation with The Verge.


The official launch of Apple’s mixed reality headset was expected in 2022. Apple has in the past announced products well before their actual availability, so a 2022 announcement could still be possible. To prepare you for such an event, here’s what you need to know about the Apple headset.

What is mixed reality?

Before we delve into the details of the device, here is a short explainer of why the Apple device is not a regular VR headset. The purpose of VR is to deliver a completely immersive experience. To do so, headsets cut the user off from their surroundings by blocking peripheral vision and providing audio, visual, and often tactile feedback to make the experience more convincing.

Stable Diffusion VR is a startling vision of the future of gaming

A while ago I spotted someone working on real-time AI image generation in VR, and I had to bring it to your attention, because frankly I cannot express how majestic it is to watch AI-modulated AR shift the world before us into glorious, emergent dreamscapes.

Applying AI to augmented or virtual reality isn’t a novel concept, but there have been certain limitations in applying it, computing power being one of the major barriers to its practical usage. Stable Diffusion, however, is a pared-down image generation model that runs on consumer-level hardware, and it has been released under a Creative ML OpenRAIL-M licence. That means not only can developers use the tech to create and launch programs without renting huge amounts of server silicon, but they’re also free to profit from their creations.
