The conversation touches on Mark Zuckerberg's belief that people will migrate to the Metaverse and leave reality behind.
Here’s my conversation with Mark Zuckerberg, including a few opening words from me on Ukraine, Putin, and war. We talk about censorship, freedom, mental health, Social Dilemma, Instagram whistleblower, mortality, meaning & the future of the Metaverse & AI. https://www.youtube.com/watch?v=5zOHSysMmH0 pic.twitter.com/BLARIpXgL0 — Lex Fridman (@lexfridman) February 26, 2022
Art is a fascinating yet extremely complex discipline. Indeed, creating artistic images is often not only time-consuming but also requires significant expertise. If this holds for 2D artworks, imagine extending the problem to dimensions beyond the image plane, such as time (in animated content) or 3D space (with sculptures or virtual environments). This introduces new constraints and challenges, which this paper addresses.
Previous work on 2D stylization treats video content frame by frame. The individual generated frames achieve high-quality stylization, but the resulting video often exhibits flickering artifacts because the frames lack temporal coherence. Moreover, these methods do not consider the 3D environment, which adds further complexity to the task. Other works focusing on 3D stylization suffer from geometrically inaccurate reconstructions of point clouds or triangle meshes and from a lack of style detail, because the original and produced meshes have different geometric properties when the style is applied through a linear transformation.
The proposed method, termed Artistic Radiance Fields (ARF), can transfer the artistic features of a single 2D image to a real-world 3D scene, leading to artistic novel-view renderings that are faithful to the input style image (Fig. 1).
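For intuition, here is a minimal sketch of one common way such stylization can be driven: a classic Gram-matrix style loss on VGG features, computed between a rendered view and the style image. This is an illustrative stand-in rather than ARF's exact objective, and the renderer that would produce the rendered view is assumed, not shown.

```python
# Illustrative only: a classic Gram-matrix style loss on VGG features,
# not ARF's exact objective. The rendered view is assumed to come from
# a NeRF-style renderer whose parameters are being optimized.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def gram(feats):
    # The Gram matrix captures feature correlations, i.e. "style".
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(rendered, style_img):
    # Compare Gram matrices of intermediate VGG activations; gradients
    # flow back into the rendered view (and hence the radiance field).
    loss, x, y = 0.0, rendered, style_img
    for layer in vgg[:16]:  # the first few conv blocks carry most "style"
        x, y = layer(x), layer(y)
        if isinstance(layer, torch.nn.Conv2d):
            loss = loss + F.mse_loss(gram(x), gram(y))
    return loss
```

Minimizing a loss like this over many sampled viewpoints of the shared 3D representation, rather than per video frame, is what keeps the stylized scene consistent across views.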
When, in 2015, Eileen Brown looked at the ETER9 project (crazy to many, visionary to a few) and wrote an interesting article for ZDNET titled “New social network ETER9 brings AI to your interactions”, she gave worldwide exposure to something the world was not expecting.
Someone in a forgotten corner of the world (outside the United States) was risking everything he had (very little, or less than nothing) on a vision worthy of the American dream. At that time, Facebook was already beginning to annoy the clearer minds that were looking for something different and a more innovative world.
Today, after that proving ground, we see that Facebook (Meta, or whatever) is nothing but an illusion or, I dare say, a big disappointment. No, no, no! I am not bad-mouthing Facebook just because I have a project in hand that is seen as a potential competitor.
A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate.
The team demonstrated its application potential in a braille display, adding the sense of touch to the metaverse for functions such as virtual-reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.
“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr. Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study.
As if it weren’t enough to have AI tanning humanity’s hide (figuratively for now) at every board game in existence, Google AI has got one working to destroy us all at Ping-Pong as well. For now they emphasize it is “cooperative,” but at the rate these things improve, it will be taking on pros in no time.
The project, called i-Sim2Real, isn’t just about Ping-Pong but about building a robotic system that can work with, and around, fast-paced and relatively unpredictable human behavior. Ping-Pong, AKA table tennis, has the advantage of being pretty tightly constrained (as opposed to basketball or cricket) while striking a balance between complexity and simplicity.
“Sim2Real” is a way of describing an AI creation process in which a machine learning model is taught what to do in a virtual environment or simulation, then applies that knowledge in the real world. It’s necessary when it could take years of trial and error to arrive at a working model — doing it in a sim allows years of real-time training to happen in a few minutes or hours.
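As a toy illustration of that pattern (not Google's i-Sim2Real code), the sketch below learns a single action parameter in a cheap, idealized simulator and then applies it in a "real" environment whose dynamics are slightly different. The environments and the learning rule are made up for the example; the leftover error at the end is the sim-to-real gap the approach has to manage.

```python
# Toy illustration of the Sim2Real idea: train where experience is
# cheap (the simulator), deploy where it is expensive (reality).
import random

def simulator(action):           # idealized physics: best action is 0.5
    return -abs(action - 0.5)

def real_world(action):          # reality is shifted and noisy: best is 0.55
    return -abs(action - 0.55) + random.gauss(0, 0.01)

def train(env, steps=5000, lr=0.01, eps=0.05):
    theta = 0.0
    for _ in range(steps):
        # Finite-difference gradient estimate: nudge theta toward
        # whichever side of it yields the higher reward.
        grad = (env(theta + eps) - env(theta - eps)) / (2 * eps)
        theta += lr * grad
    return theta

theta = train(simulator)                  # fast, safe, parallelizable
print("sim-trained action:", theta)
print("reward in the real world:", real_world(theta))  # a small gap remains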
Sleek, light, high-performance, and, like any other Apple device, not easy on the pocket.
Earlier this week, Meta rolled out its Quest Pro Virtual Reality (VR) headset, priced at $1,499. Many questioned the need for a high-end VR headset when the company’s Quest 2 appears to be doing rather well. Mark Zuckerberg, however, addressed that question in his conversation with The Verge.
The official launch of Apple’s mixed-reality headset was expected in 2022. Apple has, in the recent past, announced products well before their actual availability, so a 2022 launch could still be possible. To prepare you for such an event, here’s what you need to know about the Apple headset.
A while ago I spotted someone working on real-time AI image generation in VR, and I had to bring it to your attention because, frankly, I cannot express how majestic it is to watch AI-modulated AR shift the world before us into glorious, emergent dreamscapes.
Applying AI to augmented or virtual reality isn’t a novel concept, but its practical use has faced limits, with computing power being one of the major barriers. Stable Diffusion image generation software, however, is a boiled-down algorithm that runs on consumer-level hardware and has been released under a Creative ML OpenRAIL-M licence. That means not only can developers use the tech to create and launch programs without renting huge amounts of server silicon, but they’re also free to profit from their creations.
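As a concrete example, here is a minimal sketch of running Stable Diffusion locally with Hugging Face's diffusers library. The model ID, precision, and memory options are assumptions about a typical consumer-GPU setup, not the configuration used in the VR demo.

```python
# A minimal local Stable Diffusion sketch using Hugging Face diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # weights released under OpenRAIL-M
    torch_dtype=torch.float16,          # half precision for consumer VRAM
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()         # trades some speed for lower memory use

image = pipe("a glorious, emergent dreamscape",
             num_inference_steps=25).images[0]
image.save("dreamscape.png")
```

With half precision and attention slicing, this fits on GPUs with only a few gigabytes of VRAM, which is exactly what makes consumer-level, near-real-time experiments like the VR demo feasible.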
In an interview published Tuesday with The Verge, Zuckerberg said VR, the technology he bet his entire $340 billion company on a year ago, is entering “the trough of disillusionment.” That’s a term from the Gartner hype cycle that folks in the tech industry like to use when excitement around a new technology drastically wanes.
His comments effectively place expectations for the success of the new Meta Quest Pro, which goes on sale Oct. 25, at next to zero. At the same time, Zuckerberg reiterated his belief that the metaverse will be the next iteration of computing after the smartphone — it’s just going to take a long time. Specifically, he told The Verge “it’s not going to be until later this decade” when metaverse gadgets like the Quest Pro will be “fully mature.”
But Meta isn’t selling headsets later this decade. It’s selling them now, and expecting technologists and software developers to invent compelling reasons to buy one.
In the past few years, a growing number of computer scientists have been exploring the idea of the “metaverse,” an internet-based space where people would be able to virtually perform various everyday activities. The general idea is that, using virtual reality (VR) headsets or other technologies, people might be able to attend work meetings, meet friends, shop, attend events, or visit places, all within a 3D virtual environment.
While the metaverse has recently been the topic of much debate, accessing its 3D “virtual environments” often requires expensive gear and devices that only a relatively small number of people can afford. This unavoidably limits who might be able to access this virtual space.
Researchers at Beijing Institute of Technology and JD Explore Academy have recently created WOC, a 3D online chatroom that could be accessible to a broader range of people worldwide. To gain access to this chatroom, which was introduced in a paper pre-published on arXiv, users merely need a simple computer webcam or smartphone camera.
In this episode we explore a User Interface theory of reality. Since the invention of the computer, virtual reality theories have been gaining in popularity, often to explain some difficulties around the hard problem of consciousness (see Episode #1 with Sue Blackmore for a full analysis of the problem of how subjective experiences might emerge out of our brain neurology), but also to explain other non-local anomalies coming out of physics and psychology, like ‘quantum entanglement’ or ‘out-of-body experiences’. Do check the dedicated episodes #4 and #28, respectively, on those two phenomena for a full breakdown. As you will hear today, the vast majority of cognitive scientists believe consciousness is an emergent phenomenon of matter, and that virtual reality theories are science fiction, ‘Woowoo’, or new age. One of this podcast’s jobs is to look at some of these Woowoo claims and separate the wheat from the chaff, so the open-minded among us can find the threshold beyond which evidence-based thinking, no matter how contrary to the consensus, can be considered and separated from wishful thinking. So you can imagine my joy when a hugely respected cognitive scientist and User Interface theorist, who can cut through the polemic and orthodoxy with calm, respectful, evidence-based argumentation, agreed to come on the show: the one and only Donald D. Hoffman.
Hoffman is a full professor of cognitive science at the University of California, Irvine, where he studies consciousness, visual perception and evolutionary psychology using mathematical models and psychophysical experiments. His research subjects include facial attractiveness, the recognition of shape, the perception of motion and colour, the evolution of perception, and the mind-body problem. So he is perfectly placed to comment on how we interpret reality.
Hoffman has received a Distinguished Scientific Award of the American Psychological Association for early career research into visual perception, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. So his recognition in the field is clear.
He is also the author of ‘The Case Against Reality’, the content of which we’ll be focusing on today, and ‘Visual Intelligence’, and the co-author, with Bruce Bennett and Chetan Prakash, of ‘Observer Mechanics’.