
Traditional networks are unable to keep up with the demands of modern computing, such as cutting-edge computation and bandwidth-demanding services like video analytics and cybersecurity. In recent years, there has been a major shift in the focus of network research towards software-defined networks (SDN) and network function virtualization (NFV), two concepts that could overcome the limitations of traditional networking. SDN is an approach to network architecture that allows the network to be controlled using software applications, whereas NFV seeks to move functions like firewalls and encryption to virtual servers. SDN and NFV can help enterprises perform more efficiently and reduce costs. Needless to say, a combination of the two would be far more powerful than either one alone.

In a recent study published in IEEE Transactions on Cloud Computing, researchers from Korea now propose such a combined SDN/NFV network architecture that seeks to introduce additional computational functions to existing network functions. “We expect our SDN/NFV-based infrastructure to be considered for the future 6G network. Once 6G is commercialized, the resource management technique of the network and computing core can be applied to AR/VR or holographic services,” says Prof. Jeongho Kwak of Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, who was an integral part of the study.

The new network architecture aims to create a holistic framework that can fine-tune processing resources that use different (heterogeneous) processors for different tasks and optimize networking. The unified framework will support dynamic service chaining, which allows a single network connection to be used for many connected services like firewalls and intrusion protection; and code offloading, which involves shifting intensive computational tasks to a resource-rich remote server.
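To make the two ideas concrete, here is a minimal, hypothetical Python sketch (illustrative only, not code from the study): a service chain is modeled as an ordered pipeline of network functions applied to a packet, and code offloading reduces to comparing estimated local compute time against remote compute time plus transfer time. All function and field names are assumptions for illustration.

```python
# Illustrative sketch of dynamic service chaining and code offloading.
# Names and numbers are hypothetical, not taken from the paper.

def firewall(packet):
    # Drop packets from a blocked source address.
    if packet.get("src") in {"10.0.0.99"}:
        return None
    return packet

def intrusion_detection(packet):
    # Flag (but still forward) suspicious payloads.
    if packet is not None and "attack" in packet.get("payload", ""):
        packet["flagged"] = True
    return packet

def apply_chain(packet, chain):
    """Run a packet through an ordered chain of network functions."""
    for fn in chain:
        if packet is None:  # dropped earlier in the chain
            break
        packet = fn(packet)
    return packet

def should_offload(task_cycles, local_speed, remote_speed, transfer_time):
    """Offload when remote compute plus transfer beats local compute."""
    local_time = task_cycles / local_speed
    remote_time = task_cycles / remote_speed + transfer_time
    return remote_time < local_time

pkt = apply_chain({"src": "10.0.0.5", "payload": "hello"},
                  [firewall, intrusion_detection])
print(pkt)                                  # benign packet passes the chain
print(should_offload(1e9, 1e8, 1e10, 2.0))  # True: remote is faster here
```

The offloading test captures the basic trade-off the architecture must optimize: a resource-rich remote server only helps when its speed advantage outweighs the cost of moving the task over the network.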

It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual information like photos, videos, and other visual inputs.
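As a toy illustration of what "seeing" means at the lowest level (a hypothetical sketch, not any production system): a machine receives an image as nothing more than a grid of pixel intensities, and vision algorithms extract structure from those numbers. Here, the simplest possible version finds bright regions by thresholding.

```python
# Toy sketch: an "image" is just a grid of pixel intensities (0-255).
# Real computer vision systems (e.g., convolutional neural networks)
# build far richer representations, but they start from the same raw numbers.

image = [
    [ 10,  12,  11, 200],
    [  9, 210, 215,  13],
    [  8, 205,  12,  11],
]

def bright_pixels(img, threshold=128):
    """Return (row, col) coordinates of pixels above a brightness threshold."""
    return [(r, c)
            for r, row in enumerate(img)
            for c, value in enumerate(row)
            if value > threshold]

print(bright_pixels(image))  # [(0, 3), (1, 1), (1, 2), (2, 1)]
```

Recognizing a face or a stop sign is this same idea scaled up: many layers of learned transformations over pixel grids instead of one hand-written threshold.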

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2022, the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.

We speak to AR experts about the future of the Oculus Quest.


Oculus is getting into AR, and it has big repercussions for the future direction of the company and its popular line of VR headsets – especially the eventual Oculus Quest 3.

The Facebook-owned company recently announced its intention to open up its Oculus platform to augmented reality developers, allowing them to use the Oculus Quest 2 headset to host AR games and apps rather than simply VR titles – setting the scene for an explosion of both consumer and business applications on the popular standalone headset.

Lambda, an AI infrastructure company, this week announced it raised $15 million in a venture funding round from 1517, Gradient Ventures, Razer, Bloomberg Beta, Georges Harik, and others, plus a $9.5 million debt facility. The $24.5 million investment brings the company’s total raised to $28.5 million, following an earlier $4 million seed tranche.

In 2013, San Francisco, California-based Lambda controversially launched a facial recognition API for developers working on apps for Google Glass, Google’s ill-fated heads-up augmented reality display. The API — which soon expanded to other platforms — enabled apps to do things like “remember this face” and “find your friends in a crowd,” Lambda CEO Stephen Balaban told TechCrunch at the time. The API has been used by thousands of developers and was, at least at one point, seeing over 5 million API calls per month.

Since then, however, Lambda has pivoted to selling hardware systems designed for AI, machine learning, and deep learning applications. Among these are the TensorBook, a laptop with a dedicated GPU, and a workstation product with up to four desktop-class GPUs for AI training. Lambda also offers servers, including one designed to be shared between teams and a server cluster, called Echelon, that Balaban describes as “datacenter-scale.”


The story of humanity is one of progress: from humanity's origins, with slow, disjointed progress; to the agricultural revolution, with linear progress; and on to the industrial revolution, with exponential, almost unfathomable progress.

This accelerating rate of progress is due to the compounding effect of technology, in which each advance enables countless more: 3D printing, autonomous vehicles, blockchain, batteries, remote surgery, virtual and augmented reality, robotics – the list goes on and on. These technologies in turn will drive sweeping changes in society, from energy generation and monetary systems to space colonization, automation, and much more!