
Believe it or not, one of the most important technology announcements of the past few months had nothing to do with artificial intelligence. While critics and boosters continue to stir and fret over the latest capabilities of ChatGPT, a largely unknown 60-person start-up based in Tel Aviv quietly began demoing a product that might foretell an equally impactful economic disruption.

The company is named Sightful, and its new offering is Spacetop: “the world’s first augmented reality laptop.” Spacetop consists of a standard computer keyboard tethered to a pair of goggles, styled like an unusually chunky pair of sport sunglasses. When you put on the goggles, the Spacetop technology inserts multiple large virtual computer screens into your visual field, floating above the keyboard as if you were using a computer connected to large external monitors.

Unlike virtual reality technology, which places you in an entirely artificial setting, Spacetop is an example of augmented reality (AR), which places virtual elements into the real world. The goggles are transparent: when you put them on at your table in Starbucks, you still see the coffee shop all around you. The difference is that now there are also virtual computer screens floating above your macchiato.

Self-described mixed-reality nerd Brad Lynch has tweeted out several interesting details about Apple’s yet-to-be-announced VR/AR headset. He has managed to compile information from several sources — mostly reports produced by hardware analysts based in China. His summation of the leaked info states: “The Apple HMD’s Bill of Materials (BoM) cost to be about $1500–1600 (USD). This is about double the reported BoM for the (Meta) Quest Pro (which was 800 dollars including the controllers and charging pad).”

Artificial intelligence (AI) and the metaverse are some of the most captivating technologies of the 21st century so far. Both are believed to have the potential to change many aspects of our lives, disrupt different industries, and enhance the efficiency of traditional workflows. While these two technologies are often looked at separately, they’re more connected than we may think. Before we explore the relationship between AI and the metaverse, let’s start by defining both terms.

The metaverse is a concept describing a hypothetical future design of the internet. It features an immersive, 3D online world where users are represented by custom avatars and access information with the help of virtual reality (VR), augmented reality (AR), and similar technologies. Instead of accessing the internet via their screens, users access the metaverse via a combination of the physical and digital. The metaverse will enable people to socialize, play, and work alongside others in different 3D virtual spaces.

A similar arrangement was described in Neal Stephenson’s 1992 science-fiction novel Snow Crash. While it was perceived as fantasy a mere three decades ago, it seems like it could become a reality sooner rather than later. Although the metaverse doesn’t fully exist yet, some online platforms incorporate elements of it. For example, video games like Fortnite and Horizon Worlds port multiple elements of our day-to-day lives into the online world.

The demo is clever, questionably real, and prompts a lot of questions about how this device will actually work.

Buzz has been building around the secretive tech startup Humane for over a year, and now the company is finally offering a look at what it’s been building. At TED last month, Humane co-founder Imran Chaudhri gave a demonstration of the AI-powered wearable the company is building as a replacement for smartphones. Bits of the video leaked online after the event, but the full video is now available to watch.

The device appears to be a small black puck that slips into your breast pocket, with a camera, projector, and speaker sticking out the top. Throughout the 13-minute presentation, Chaudhri walks through a handful of use cases for Humane’s gadget:

* The device rings when Chaudhri receives a phone call. He holds his hand up, and the device projects the caller’s name along with icons to answer or ignore the call. He then has a brief conversation. (Around 1:48 in the video)
* He presses and holds one finger on the device, then asks a question about where he can buy a gift. The device responds with the name of a shopping district. (Around 6:20)
* He taps two fingers on the device, says a sentence, and the device translates the sentence into another language, stating it back using an AI-generated clone of his voice. (Around 6:55)
* He presses and holds one finger on the device, says, “Catch me up,” and it reads out a summary of recent emails, calendar events, and messages. (At 9:45)
* He holds a chocolate bar in front of the device, then presses and holds one finger on the device while asking, “Can I eat this?” The device recommends he does not because of a food allergy he has. He presses down one finger again and tells the device he’s ignoring its advice. (Around 10:55)

Chaudhri, who previously worked on design at Apple for more than two decades, pitched the device as a salve for a world covered in screens. “Some believe AR / VR glasses like these are the answer,” he said, an image of VR headsets behind him. He argued those devices — like smartphones — put “a further barrier between you and the world.”

Humane’s device, whatever it’s called, is designed to be more natural by eschewing the screen. The gadget operates on its own. “You don’t need a smartphone or any other device to pair with it,” he said.

The world’s first flexible, transparent augmented reality (AR) display screen using 3D printing and low-cost materials has been created by researchers at the University of Melbourne, KDH Design Corporation and the Melbourne Centre for Nanofabrication (MCN). The development of the new display screen is set to advance how AR is used across a wide range of industries and applications.

AR overlays digital content onto the real world, enhancing the user’s real-time perception of and interaction with their environment. Until now, creating flexible AR technology that can adjust to different angles of light sources has been a challenge, as current mainstream AR manufacturing uses glass substrates, which must undergo photomasking, lamination, cutting, or etching of microstructure patterns. These time-consuming processes are expensive, have poor yield rates, and are difficult to integrate seamlessly with product appearance designs.

Led by University of Melbourne researchers Associate Professor Ranjith Unnithan, Professor Christina Lim and Professor Thas Nirmalathas, in collaboration with Taiwanese KDH Design Corporation, the team has successfully developed a transparent AR display screen using low-cost, optical-quality polymer and plastic—a first-of-its-kind achievement in the field of AR displays.

Perhaps your real life is so rich you don’t have time for another.

Even so, the US Department of Defense (DOD) may already be creating a copy of you in an alternate reality to see how long you can go without food or water, or how you will respond to televised propaganda.

The DOD is developing a parallel to Planet Earth, with billions of individual “nodes” to reflect every man, woman, and child this side of the dividing line between reality and AR.

In a new study in Nature Machine Intelligence, researchers Bojian Yin and Sander Bohté from the HBP partner Dutch National Research Institute for Mathematics and Computer Science (CWI) demonstrate a significant step towards artificial intelligence that can be used in local devices like smartphones and in VR-like applications, while protecting privacy.

They show how brain-like neurons combined with novel learning methods enable the training of fast and energy-efficient spiking neural networks on a large scale. Potential applications range from wearable AI to virtual and augmented reality.

While modern artificial neural networks are the backbone of the current AI revolution, they are only loosely inspired by networks of real, biological neurons such as those in our brain. The brain, however, is a much larger network, is much more energy-efficient, and can respond ultra-fast when triggered by external events. Spiking neural networks are special types of neural networks that more closely mimic the workings of biological neurons: the neurons of our nervous system communicate by exchanging electrical pulses, and they do so only sparingly.
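To make the sparing, event-driven behavior of spiking neurons concrete, here is a minimal leaky integrate-and-fire (LIF) neuron sketch in plain Python. It is an illustrative toy, not the method from the Nature Machine Intelligence study; the constants and function name are arbitrary choices for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# The membrane potential leaks toward rest, integrates incoming current,
# and emits a discrete spike only when it crosses a threshold -- most
# timesteps produce no output at all, which is what makes spiking
# networks so sparing (and energy-efficient in neuromorphic hardware).
# Constants here are illustrative, not taken from the cited study.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the 0/1 spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:              # threshold crossing:
            spikes.append(1)                    # emit a spike...
            potential = 0.0                     # ...and reset
        else:
            spikes.append(0)                    # otherwise stay silent
    return spikes

# A weak constant input drives only an occasional spike; most steps are silent.
train = simulate_lif([0.3] * 10)
```

Note how communication is reduced to rare binary events rather than the dense floating-point activations of conventional artificial neurons.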

Neural networks are distributed computing structures, inspired by the structure of a biological brain, that aim to achieve cognitive performance comparable to that of humans but in a much shorter time.

These technologies now form the basis of machine learning and of systems that can perceive the environment and adapt their own behavior by analyzing the effects of previous actions while working autonomously. They are used in many areas of application, such as speech and image recognition and synthesis, autonomous driving and augmented reality systems, bioinformatics, genetic and molecular sequencing, and high-performance computing technologies.

Unlike conventional computing approaches, neural networks must initially be “trained” with a large amount of known information in order to perform complex functions; the network then uses that information to adapt by learning from experience. Training is an extremely energy-intensive process, and as computing power increases, the networks’ energy consumption grows very rapidly, doubling every six months or so.
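A doubling every six months compounds quickly. This back-of-the-envelope sketch (the function and its numbers are illustrative, not from the article) shows the implied growth factor over a few years:

```python
# Back-of-the-envelope compounding: if training energy consumption
# doubles every six months, it grows by 2 ** (12 * years / 6) over a
# given horizon. Only the growth factor matters; the starting value
# of consumption is arbitrary.

def growth_factor(years, doubling_months=6):
    """Multiplicative growth after `years`, doubling every `doubling_months`."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# Over 3 years the factor is 2**6 = 64x; over 5 years, 2**10 = 1024x.
```

At that rate, even a modest training budget today becomes untenable within a handful of years, which is the motivation for energy-efficient alternatives like spiking networks.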


In January, new reports on Apple’s long-awaited augmented reality/virtual reality headset were released. And if what’s in these reports is even partially true, Apple is poised to give the world one of the most jaw-dropping, powerful pieces of technology in history (again) — which is why it was a bit surprising that this news didn’t make more of a splash.

This is the same company that has fans enter lotteries for tickets to corporate keynote addresses! Yet, outside of the usual tech blogs and a few newspaper columns, the future of Apple’s AR/VR device went largely unnoticed.