
Alright, Apple. Pack it up, show’s over. Mark Zuckerberg just dropped a surprise announcement, unveiling the Meta Quest 3 virtual reality headset. The reveal comes just *checks calendar* four days before Apple is set to officially announce its high-end mixed reality headset. So much for Apple’s headset hype, I guess.

All kidding aside, Meta bought its way into virtual reality years ago through Oculus when Zuck’s company was still trading under the name Facebook. Meta certainly has more experience selling VR headsets and throwing VR parties than Apple – for now, at least.

Like the Meta Quest 2, the newly unveiled Meta Quest 3 is in another universe when it comes to pricing. Apple’s “Reality Pro” headset is expected to carry a price tag of around $3,000. Meta’s higher-end Quest Pro goes for $999, and the new Meta Quest 3 is priced from $499.


3DFY.ai announced the launch of 3DFY Prompt, a generative AI that lets developers and creators build 3D models based on text prompts.

The Tel Aviv, Israel-based company said the tech democratizes professional-quality 3D model creation, enabling anyone to use text prompts to create high-quality models that can be used in gaming, design or virtual environments.

But they’re not the only ones. Multiple companies are working on haptic devices, like gloves or vests, to add a sense of touch to virtual experiences. And now, researchers are aiming to integrate a fourth sense: smell.

How much more real might that peaceful meadow feel if you could smell the wildflowers and the damp earth around you? How might the scent of an ocean breeze amplify a VR experience that takes place on a boat or a beach?

Scents have a powerful effect on the brain, eliciting emotions, memories, and sometimes even fight-or-flight responses. You may feel nostalgic at the scent of the cologne or perfume a favorite grandparent wore, comforted by a whiff of a favorite food, or extra alert to your surroundings if it smells like something’s burning.

Meta’s new AI models could help lead to speech apps for many more languages than exist now.

Meta has built AI models that can recognize and produce speech for more than 1,000 languages—a tenfold increase on what’s currently available. It’s a significant step toward preserving languages that are at risk of disappearing, the company says.

Meta is releasing its models to the public via the code hosting service GitHub. It claims that making them open source will help developers working in different languages to build new speech applications—like messaging services that understand everyone, or virtual-reality systems that can be used in any language.

Apple Inc.’s annual WWDC developers’ conference is fast approaching — and this one promises to be a bit more eventful than most, as the consumer-electronics giant is expected to debut its long-awaited mixed-reality headset.

The company hasn’t made a major product introduction since it rolled out the Apple Watch in 2015, but with an expensive headset supporting augmented and virtual reality, Apple will be entering a market that has so far failed to catch on in a mainstream way. Meta Platforms Inc. has bet big on virtual reality with its Oculus headsets, with only limited traction.

Artificial intelligence (AI) and the metaverse are some of the most captivating technologies of the 21st century so far. Both are believed to have the potential to change many aspects of our lives, disrupt different industries, and enhance the efficiency of traditional workflows. While these two technologies are often looked at separately, they’re more connected than we may think. Before we explore the relationship between AI and the metaverse, let’s start by defining both terms.

The metaverse is a concept describing a hypothetical future design of the internet. It features an immersive, 3D online world where users are represented by custom avatars and access information with the help of virtual reality (VR), augmented reality (AR), and similar technologies. Instead of accessing the internet via their screens, users access the metaverse via a combination of the physical and digital. The metaverse will enable people to socialize, play, and work alongside others in different 3D virtual spaces.

A similar arrangement was described in Neal Stephenson’s 1992 science-fiction novel Snow Crash. While it was perceived as fantasy a mere three decades ago, it seems like it could become a reality sooner rather than later. Although the metaverse doesn’t fully exist yet, some online platforms incorporate elements of it. For example, video games like Fortnite and Horizon Worlds port multiple elements of our day-to-day lives into the online world.

The demo is clever, questionably real, and prompts a lot of questions about how this device will actually work.

Buzz has been building around the secretive tech startup Humane for over a year, and now the company is finally offering a look at what it’s been building. At TED last month, Humane co-founder Imran Chaudhri gave a demonstration of the AI-powered wearable the company is building as a replacement for smartphones. Bits of the video leaked online after the event, but the full video is now available to watch.

The device appears to be a small black puck that slips into your breast pocket, with a camera, projector, and speaker sticking out the top. Throughout the 13-minute presentation, Chaudhri walks through a handful of use cases for Humane’s gadget:

* The device rings when Chaudhri receives a phone call. He holds his hand up, and the device projects the caller’s name along with icons to answer or ignore the call. He then has a brief conversation. (Around 1:48 in the video)
* He presses and holds one finger on the device, then asks a question about where he can buy a gift. The device responds with the name of a shopping district. (Around 6:20)
* He taps two fingers on the device, says a sentence, and the device translates the sentence into another language, stating it back using an AI-generated clone of his voice. (Around 6:55)
* He presses and holds one finger on the device, says, “Catch me up,” and it reads out a summary of recent emails, calendar events, and messages. (At 9:45)
* He holds a chocolate bar in front of the device, then presses and holds one finger on the device while asking, “Can I eat this?” The device recommends he does not because of a food allergy he has. He presses down one finger again and tells the device he’s ignoring its advice. (Around 10:55)

Chaudhri, who previously worked on design at Apple for more than two decades, pitched the device as a salve for a world covered in screens. “Some believe AR / VR glasses like these are the answer,” he said, an image of VR headsets behind him. He argued those devices — like smartphones — put “a further barrier between you and the world.”

Humane’s device, whatever it’s called, is designed to be more natural by eschewing the screen. The gadget operates on its own. “You don’t need a smartphone or any other device to pair with it,” he said.

From 2021

A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

Liang Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, said people once believed that real-time 3D holography was impossible on existing consumer-grade computer hardware, and that it would take decades to become viable. He figured out a way to cut that time.

Holographic images aren’t just made up in the movies; they’re being created as we speak. Shi’s team took a different approach from the original hologram-recording technique, in which a laser beam is split in two: half the beam illuminates the subject, and the other half serves as a reference for the phase of the light waves.
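The classical split-beam recording works because a film can store only intensity, so the phase of the light scattered off the subject is captured by interfering it with a known reference wave; the phase shows up as a fringe pattern. Here is a minimal 1D NumPy sketch of that idea; the wavelength, geometry, and unit amplitudes are illustrative assumptions, not details from the article:

```python
import numpy as np

# Classical hologram recording, sketched in 1D. The film records only
# intensity, so the object wave's phase is encoded by interference with
# a reference wave, appearing as a fringe pattern.
wavelength = 633e-9                  # assumed HeNe laser line, metres
k = 2 * np.pi / wavelength           # wavenumber
x = np.linspace(-1e-3, 1e-3, 2000)   # sample points across the film, metres

# Object wave: unit-amplitude spherical wavefront from a point source
# 10 cm behind the film (half of the split beam, scattered by the subject).
z = 0.10
r = np.sqrt(x**2 + z**2)
object_wave = np.exp(1j * k * r)

# Reference wave: the other half of the beam, hitting the film head-on.
reference_wave = np.ones_like(x, dtype=complex)

# The recorded hologram is the interference intensity of the two waves.
intensity = np.abs(object_wave + reference_wave) ** 2
```

Re-illuminating such a fringe pattern with the reference wave reconstructs the object wavefront. Tensor holography, by contrast, sidesteps this physics-simulation route with a neural network that predicts holograms directly, which is what makes real-time operation on consumer hardware plausible.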

Cornell University researchers have developed a robot, called ReMotion, that occupies physical space on a remote user’s behalf, automatically mirroring the user’s movements in real time and conveying key body language that is lost in standard virtual environments.

“Pointing gestures, the perception of another’s gaze, intuitively knowing where someone’s attention is—in remote settings, we lose these nonverbal, implicit cues that are very important for carrying out design activities,” said Mose Sakashita, a doctoral student of information science.

Sakashita is the lead author of “ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment,” which he presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hamburg, Germany. “With ReMotion, we show that we can enable rapid, dynamic interactions through the help of a mobile, automated robot.”