
From attending a meeting to enjoying a live performance or, perhaps, taking a class at the University of Tokyo’s Metaverse School of Engineering, the application of virtual reality is expanding in our daily lives. Earlier this year, virtual reality technologies garnered attention as tech giants, including Meta and Apple, unveiled new VR/AR (virtual reality/augmented reality) headsets. We spoke with VR and AR specialist Takuji Narumi, an associate professor at the Graduate School of Information Science and Technology, to learn about his latest research and what VR’s future has to offer.

At the Avatar Robot Café DAWN ver. β, employees serve customers via a digital screen and engage in conversation using avatars of their choice, such as an alpaca and a man with blue hair.

People will laugh and dismiss it and make comparisons to Google's clown glasses. But around 2030, augmented reality glasses will come out. Basically, it will be a pair of normal-looking sunglasses with smartphone-type features, AI, and… VR stuff.


Meta chief Mark Zuckerberg on Wednesday said the tech giant is putting artificial intelligence into digital assistants and smart glasses as it seeks to gain lost ground in the AI race.

Zuckerberg made his announcements at the Connect developers conference at Meta’s headquarters in Silicon Valley, the company’s main annual product event.

“Advances in AI allow us to create different (applications) and personas that help us accomplish different things,” Zuckerberg said as he kicked off the gathering.

Augmented reality (AR) technology has long fascinated both the scientific community and the general public, remaining a staple of modern science fiction for decades.

In the pursuit of advanced AR assistants—ones that can guide people through intricate surgeries or everyday food preparation, for example—a research team from NYU Tandon School of Engineering has introduced Augmented Reality Guidance and User-Modeling System, or ARGUS.

An interactive visual analytics tool, ARGUS is engineered to support the development of intelligent AR assistants that can run on devices like Microsoft HoloLens 2 or MagicLeap. It enables developers to collect and analyze data, model how people perform tasks, and find and fix problems in the AR assistants they are building.
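The excerpt does not detail ARGUS's internal data model, but the kind of session analysis it describes can be illustrated with a small sketch. In the Python snippet below, every class, field, and function name (StepEvent, slow_steps, and so on) is a hypothetical stand-in rather than ARGUS's actual API; it simply shows how logged task-step timings from an AR session might be mined for steps where the assistant's guidance likely needs work.

```python
from dataclasses import dataclass, field

# Hypothetical record of one guidance step during an AR-assisted task;
# the field names are illustrative, not ARGUS's actual schema.
@dataclass
class StepEvent:
    step_name: str          # e.g. "crack eggs" in a food-prep task
    started_at: float       # seconds since session start
    finished_at: float
    detected_objects: list[str] = field(default_factory=list)
    assistant_prompted: bool = False

def slow_steps(events: list[StepEvent], threshold_s: float = 30.0) -> list[StepEvent]:
    """Flag steps where the user took unusually long, a likely sign
    the AR assistant's guidance was unclear at that point."""
    return [e for e in events if e.finished_at - e.started_at > threshold_s]

# Example: spot problem steps in a recorded session.
session = [
    StepEvent("locate scalpel", 0.0, 4.2, ["scalpel", "tray"]),
    StepEvent("make incision", 4.2, 61.0, ["scalpel"], assistant_prompted=True),
]
for e in slow_steps(session):
    print(f"Review guidance for step '{e.step_name}' "
          f"({e.finished_at - e.started_at:.0f}s)")
```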

Pediatric specialists at Lucile Packard Children’s Hospital Stanford are implementing innovative uses for immersive virtual reality (VR) and augmented reality (AR) technologies to advance patient care and improve the patient experience.

Through the hospital’s CHARIOT program, Packard Children’s is one of the only hospitals in the world to have VR available on every unit to help engage and distract patients undergoing a range of hospital procedures. Within the Betty Irene Moore Children’s Heart Center, three unique VR projects are influencing medical education for congenital heart defects, preparing patients for procedures and aiding surgeons in the operating room. And for patients and providers looking to learn more about some of the therapies offered within our Fetal and Pregnancy Health Program, a new VR simulation helps them understand the treatments at a much closer level.

Fascinating… when can we expect this to be invented?


A short film set in the near future, where augmented reality has become so ubiquitous that the line between the real and virtual worlds has become blurred. When a new, dangerous technology is created that can manipulate the perception of this brave new world, who will exploit it? Who will monetize it? Who will become twisted by it?

“Augmented” by Ross Peacock.

The Department of Defense has teamed up with Google to build an AI-powered microscope that can help doctors identify cancer.

The tool is called an Augmented Reality Microscope, and it will typically cost health systems between $90,000 and $100,000.

Experts believe the ARM will help support doctors in smaller labs as they battle with workforce shortages and mounting caseloads.


The pair ran the case through the special microscope, and Zafar was right. In seconds, the AI flagged the exact part of the tumor that Zafar believed was more aggressive. After the machine backed him up, Zafar said his colleague was convinced.
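The articles do not describe the ARM's pipeline in code, but its core behavior of scoring regions of the microscope's field of view and flagging suspicious ones for the pathologist can be roughly sketched. In the Python sketch below, the tiling scheme, the model call, and the threshold are all placeholder assumptions, not Google's actual implementation.

```python
import numpy as np

def flag_suspicious_tiles(frame: np.ndarray, model, tile: int = 128,
                          threshold: float = 0.9):
    """Return (row, col) corners of tiles the model scores above threshold.
    Hypothetical sketch of the flagging step an AI-augmented microscope
    might perform before drawing an overlay in the eyepiece."""
    flagged = []
    h, w = frame.shape[:2]
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            score = model(frame[r:r + tile, c:c + tile])  # probability of tumor
            if score > threshold:
                flagged.append((r, c))
    return flagged

# Toy stand-in model: mean intensity as a fake "tumor score".
fake_model = lambda patch: patch.mean() / 255.0
frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
print(flag_suspicious_tiles(frame, fake_model, threshold=0.55))
```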

In recent years, there has been a growing trend in higher education to incorporate modern technologies and practices to improve the overall educational experience. Learning management systems, gamification, video-assisted learning, and virtual and augmented reality are some examples of how technology has improved student engagement and education planning. AI in education follows the same pattern: classroom response systems, for instance, let students answer multiple-choice questions and engage in real-time discussions instantly.
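As a concrete illustration of the classroom response systems mentioned above, here is a minimal Python sketch of the tallying step such a tool performs; the function names and data layout are illustrative assumptions, not any particular product's API.

```python
from collections import Counter

# Minimal sketch: students submit an answer choice,
# the instructor sees live counts of each option.
responses: dict[str, str] = {}   # student_id -> chosen option ("A".."D")

def submit(student_id: str, choice: str) -> None:
    responses[student_id] = choice   # last submission wins

def live_results() -> Counter:
    return Counter(responses.values())

submit("s001", "B")
submit("s002", "B")
submit("s003", "D")
print(live_results())   # Counter({'B': 2, 'D': 1})
```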

Despite the many benefits that technology has brought to education, there are also concerns about its impact on higher education institutions. With the rise of online education and the growing availability of educational resources on the internet, many traditional universities and colleges are worried about the future of their institutions. As a result, many higher education institutions are struggling to keep pace with rapid technological change and are looking for ways to adapt and stay relevant in the digital age.

By now, you’ve probably heard about ChatGPT, the AI chatbot developed by OpenAI that has been taking social media by storm. But what exactly is ChatGPT, and why is everyone talking about it? We asked it directly, and here is an answer comprehensible to non-tech readers:

This talk is about how you can use wireless signals and fuse them with vision and other sensing modalities through AI algorithms to give humans and robots X-ray vision to see objects hidden inside boxes or behind other objects.

Tara Boroushaki is a Ph.D student at MIT. Her research focuses on fusing radio frequency (RF) sensing with vision through artificial intelligence. She designs algorithms and builds systems that leverage such fusion to enable capabilities that were not feasible before in applications spanning augmented reality, virtual reality, robotics, smart homes, and smart manufacturing. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.
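The talk itself is the best source on the method, but the underlying idea of fusing a coarse RF-based location with a precise but occlusion-limited camera estimate can be sketched as a simple confidence-weighted combination. The Python example below is a toy illustration; the numbers, weights, and function names are assumptions, not Boroushaki's actual algorithms.

```python
import numpy as np

# Toy sketch of RF + vision fusion: an RF reading (e.g. from an RFID tag)
# coarsely localizes an object even when it is occluded, while a camera
# localizes visible objects precisely. A confidence-weighted average
# combines the two estimates. All values here are hypothetical.
def fuse_estimates(rf_xyz, rf_conf, cam_xyz, cam_conf):
    """Confidence-weighted average of two 3-D position estimates."""
    rf_xyz, cam_xyz = np.asarray(rf_xyz), np.asarray(cam_xyz)
    return (rf_conf * rf_xyz + cam_conf * cam_xyz) / (rf_conf + cam_conf)

# The target is inside a box: the camera only sees the box, so its
# confidence is low and the RF estimate dominates.
fused = fuse_estimates(rf_xyz=[1.02, 0.40, 0.75], rf_conf=0.8,
                       cam_xyz=[1.10, 0.35, 0.80], cam_conf=0.2)
print(fused)   # ≈ [1.036, 0.39, 0.76]
```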

Apple reportedly has several teams working and spending millions on generative AI.

Apple is spending millions of dollars a day to build artificial intelligence tools, according to The Information.

Although Apple sees itself as its own closest competitor, judging by how it chose to launch Vision Pro only after the clamor around AR/VR tech had died down, the scale of Apple's investment shows how the tech industry's pivot to generative AI has affected the company's outlook, especially with OpenAI's chatbot ChatGPT taking center stage.


Apple is apparently going hard on developing AI, according to a new report that says it’s investing millions of dollars every day in multiple AI projects to rival the likes of ChatGPT.