
Imagine a place where you could always stay young, name a city after yourself, or even become the president — sounds like a dream? Well, if not in the real world, such dreams can definitely be fulfilled in the virtual world of a metaverse. The metaverse is believed by some to be the future of the internet: a place where people would not only surf the web but also step inside its digital worlds in the form of avatars.

The advent of AR, blockchain, and VR devices in the last few years has sparked the development of the metaverse. Moreover, the unprecedented growth of highly advanced technologies in the gaming industry, which offer immersive gameplay experiences, not only gives us a glimpse of what the metaverse might look like but also indicates that we are closer than ever to experiencing a virtual world of our own.

But first, Facebook is going to have to navigate the territory of privacy — not just for those who might have photos taken of them, but for the wearers of these microphone- and camera-equipped glasses. VR headsets are one thing (and they come off your face after a session). Glasses you wear around every day are the start of Facebook’s much larger ambition to be an always-connected maker of wearables, and that’s a lot harder for most people to get comfortable with.

Walking down my quiet suburban street, I’m looking up at the sky. Recording the sky. Around my ears, I hear ABBA’s new song, I Still Have Faith In You. It’s a melancholic end to the summer. I’m taking my new Ray-Ban smart glasses for a walk.

The Ray-Ban Stories feel like a conservative start. They lack some features that have appeared in similar products already. The glasses, which act as earbud-free headphones, don’t have 3D spatial audio like the Bose Frames and Apple’s AirPods Pro do. The stereo cameras, on either side of the lenses, don’t work with AR effects, either. Facebook has a few sort-of-AR tricks in a brand-new companion app on your phone, called View, that pairs with the glasses, but they’re mostly ways of using depth data for a few quick social effects.

Traditional networks are unable to keep up with the demands of modern computing, such as cutting-edge computation and bandwidth-demanding services like video analytics and cybersecurity. In recent years, there has been a major shift in the focus of network research towards software-defined networks (SDN) and network function virtualization (NFV), two concepts that could overcome the limitations of traditional networking. SDN is an approach to network architecture that allows the network to be controlled using software applications, whereas NFV seeks to move functions like firewalls and encryption to virtual servers. SDN and NFV can help enterprises perform more efficiently and reduce costs. Needless to say, a combination of the two would be far more powerful than either one alone.
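The core SDN idea above — the network is controlled by a software application rather than by each switch individually — can be sketched in a few lines. This is a hypothetical toy model, not a real SDN controller API; all class and method names here are invented for illustration.

```python
# Toy sketch of SDN: a central software controller installs forwarding
# rules into switches, instead of each switch deciding on its own.
# Names (Switch, Controller, install_rule) are illustrative only.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # (src, dst) -> output port

    def forward(self, src, dst):
        # On a table miss, a real switch would punt the packet to the
        # controller (the "packet-in" pattern); we just return a marker.
        return self.flow_table.get((src, dst), "controller")

class Controller:
    """Central software application that programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, switch_name, src, dst, port):
        self.switches[switch_name].flow_table[(src, dst)] = port

switches = {"s1": Switch("s1")}
ctrl = Controller(switches)

# Before any rule is installed, traffic goes to the controller.
print(switches["s1"].forward("10.0.0.1", "10.0.0.2"))  # controller

# The controller programs the path in software — the essence of SDN.
ctrl.install_rule("s1", "10.0.0.1", "10.0.0.2", "port-2")
print(switches["s1"].forward("10.0.0.1", "10.0.0.2"))  # port-2
```

In production this role is played by controllers such as those speaking OpenFlow, but the division of labor — dumb forwarding hardware, smart central software — is the same.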

In a recent study published in IEEE Transactions on Cloud Computing, researchers from Korea now propose such a combined SDN/NFV network architecture that seeks to introduce additional computational functions to existing network functions. “We expect our SDN/NFV-based infrastructure to be considered for the future 6G network. Once 6G is commercialized, the resource management technique of the network and computing core can be applied to AR/VR or holographic services,” says Prof. Jeongho Kwak of Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, who was an integral part of the study.

The new network architecture aims to create a holistic framework that can fine-tune processing resources that use different (heterogeneous) processors for different tasks and optimize networking. The unified framework will support dynamic service chaining, which allows a single network connection to be used for many connected services like firewalls and intrusion protection; and code offloading, which involves shifting intensive computational tasks to a resource-rich remote server.
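The two mechanisms named above — dynamic service chaining and code offloading — can each be sketched briefly. The following is a minimal illustration under invented assumptions (the packet format, function names, and cost model are all hypothetical), not the architecture proposed in the paper.

```python
# Dynamic service chaining: virtual network functions (firewall,
# intrusion detection) applied in order over a single connection.
# The chain is just an ordered list, so it can be rebuilt at runtime.

def firewall(packet):
    if packet.get("port") == 23:      # e.g. block telnet
        packet["dropped"] = True
    return packet

def intrusion_detection(packet):
    if "attack" in packet.get("payload", ""):
        packet["flagged"] = True
    return packet

def run_chain(packet, chain):
    """Apply each virtual network function in sequence."""
    for vnf in chain:
        packet = vnf(packet)
        if packet.get("dropped"):
            break                     # a dropped packet skips later VNFs
    return packet

chain = [firewall, intrusion_detection]   # reorder or extend dynamically
result = run_chain({"port": 80, "payload": "hello"}, chain)
print(result)  # {'port': 80, 'payload': 'hello'}

# Code offloading: ship heavy tasks to a resource-rich remote server
# when they exceed what the local device can handle (toy cost model).
def offload_decision(task_cost, local_capacity):
    return "remote" if task_cost > local_capacity else "local"

print(offload_decision(10, 5))  # remote
```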

It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual information like photos, videos, and other visual inputs.
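What "seeing" means at the pixel level can be shown with a tiny example: a convolution kernel that responds where brightness changes, i.e. at an edge. Real computer-vision pipelines use libraries like OpenCV and learned models; this pure-Python sketch just illustrates the core operation, with a made-up 3×5 grayscale image.

```python
# A dark-to-bright image: three rows, edge between columns 2 and 3.
image = [
    [0, 0, 0, 255, 255],
    [0, 0, 0, 255, 255],
    [0, 0, 0, 255, 255],
]

# Sobel-style horizontal-gradient kernel: responds where brightness
# changes left-to-right, i.e. at vertical edges.
kernel = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]

def convolve(img, k):
    """Slide the 3x3 kernel over the image (no padding)."""
    rows, cols = len(img), len(img[0])
    out = []
    for i in range(rows - 2):
        row = []
        for j in range(cols - 2):
            acc = sum(img[i + di][j + dj] * k[di][dj]
                      for di in range(3) for dj in range(3))
            row.append(acc)
        out.append(row)
    return out

edges = convolve(image, kernel)
print(edges)  # [[0, 1020, 1020]] — zero in the flat region, large at the edge
```

Stacks of exactly this kind of filtering, with the kernels learned from data rather than hand-written, are what convolutional neural networks use to recognize objects in photos and video.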

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2022 the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.

We speak to AR experts about the future of the Oculus Quest.


Oculus is getting into AR, a move with big repercussions for the future direction of the company and its popular line of VR headsets – especially the eventual Oculus Quest 3.

The Facebook-owned company recently announced its intention to open up its Oculus platform to augmented reality developers, allowing them to use the Oculus Quest 2 headset to host AR games and apps rather than VR titles alone. That sets the scene for an explosion of both consumer and business applications on the popular standalone headset.