The metaverse was a huge, company-destroying blunder. The smart move this decade is to drop everything else and chase AGI.
VR pioneer John Carmack is leaving Meta for good. With his departure, the industry loses a visionary and an important voice.
Carmack published his farewell letter on Facebook after parts of the email were leaked to the press.
In the message to employees, Carmack, as usual, doesn’t mince words. He cites a lack of efficiency and his powerlessness to change anything about this circumstance as reasons.
A first look at the unnamed device, which will feature color passthrough mixed reality.
HTC plans to introduce a new flagship AR/VR headset next month that will reestablish its presence in the consumer virtual reality space. The company isn’t planning to release full details until CES on January 5th.
Honda is pulling away from a design practice that’s (literally) shaped automaking since the ’30s.
The $43 billion company still depends on life-size clay models to evaluate its designs, a tried-and-true method pioneered by GM designer Harley Earl. But Honda has gradually relied less on the practice ever since the coronavirus tore across the globe and the resulting lockdowns divided its teams in Los Angeles, Ohio, and Japan. The way Honda tells it, those early-2020 travel rules “threatened” its designers’ ability to work with engineers on the ’24 Prologue, creating a window for a deeper dive into virtual reality.
HEXATRACK Space Express Concept: Connecting Lunar & Martian Cities (Lunar Glass & Mars Glass) and Beyond — SHORT VERSION. The HEXATRACK Space Express Concept was designed and created by Yosuke A. Yamashiki, Kyoto University. Lunar Glass & Mars Glass were designed and created by Takuya Ono, Kajima Co. Ltd. Visual effects and detailed design by Junya Okamura. Concept advisor: astronaut Naoko Yamazaki, SIC Human Spaceology Center, GSAIS, Kyoto University. VR of Lunar & Mars Glass created by Natsumi Iwato and Mamiko Hikita, Kyoto University; VR contents of Lunar & Mars Glass by Shinji Asano, Natsumi Iwato, Mamiko Hikita, and Junya Okamura. Daidaros concept by Takuya Ono. Terraformed Mars designed by Fuka Takagi and Yosuke A. Yamashiki. Exoplanet images created by Ryusuke Kuroki, Fuka Takagi, Hiroaki Sato, Ayu Shiragashi, and Y. A. Yamashiki. All music (“Lunar City,” “Martian,” “Neptune”) composed and played by Yosuke Alexandre Yamashiki.
Tumors are three-dimensional phenomena, but so far we have been using 2D imagery to scan and study them. With the advancement of virtual reality in recent years, Greg Hannon, professor and director of the Cancer Research UK Cambridge Institute, saw an opportunity to advance cancer research by incorporating 3D imaging and VR technology.
In 2017, his IMAXT team (Imaging and Molecular Annotation of Xenografts and Tumors) received a £20 million grant from Cancer Grand Challenges to develop VR software that could map tumours at an unprecedented level of detail. In the last few years, the project welcomed interdisciplinary and international collaborations between scientists and artists who created and tested the technology on breast cancers.
The software, developed by Suil, will be available for researchers to use worldwide for academic, non-commercial research.
What will your average morning look like in 2033? And who hacked us?
This sci-fi short film explores a number of futurist predictions for the 2030s.
Sleep with a brain-sensing sleep mask that determines when to wake you. Wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99-CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain; live neurofeedback helps you meditate. Your kitchen uses biodata to figure out the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter a high-tech bathroom that uses UV light and robotics to clean your body for you. Ubers come in the form of flying cars, eVTOL aircraft that move at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic materials.
One of the promising technologies being developed for next-generation augmented/virtual reality (AR/VR) systems is holographic image displays that use coherent light illumination to emulate the 3D optical waves representing, for example, the objects within a scene. These holographic image displays can potentially simplify the optical setup of a wearable display, leading to compact and lightweight form factors.
At the same time, an ideal AR/VR experience requires relatively high-resolution images formed within a large field of view to match the resolution and viewing angles of the human eye. The capabilities of holographic image projection systems, however, are restricted mainly by the limited number of independently controllable pixels in existing image projectors and spatial light modulators.
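To see why pixel count is the bottleneck, a rough back-of-envelope calculation helps (the acuity and field-of-view numbers below are illustrative assumptions, not figures from the study):

```python
def pixels_for_fov(fov_deg, acuity_arcmin=1.0):
    """Pixels per dimension needed to resolve `acuity_arcmin` of visual
    angle across a field of view of `fov_deg` degrees (1 deg = 60 arcmin)."""
    return int(fov_deg * 60 / acuity_arcmin)

# Assuming ~1-arcminute human acuity and a 120° x 80° field of view:
horizontal = pixels_for_fov(120)   # 7200 pixels across
vertical = pixels_for_fov(80)      # 4800 pixels down
total = horizontal * vertical      # ~34.6 million pixels per eye
```

Even under these modest assumptions, the required pixel count is well beyond what current spatial light modulators offer, which is the gap the diffractive-decoder approach aims to close.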
A recent study published in Science Advances reported a deep learning-designed transmissive material that can project super-resolved images using low-resolution image displays. In their paper, “Super-resolution image display using diffractive decoders,” UCLA researchers led by Professor Aydogan Ozcan used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale, creating a material-based physical image decoder that achieves super-resolution image projection as light is transmitted through its layers.
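To make the idea concrete, here is a minimal, hypothetical sketch (not the authors’ code) of the underlying physics: a wavefront passing through a stack of phase-only diffractive layers, with free-space propagation between layers simulated by the scalar angular spectrum method. In the actual work the layer phases are learned by deep learning; here they are random placeholders, and all parameter values are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z using the scalar
    angular spectrum method (FFT-based free-space propagation)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared longitudinal spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)         # transfer function; evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_decoder(field, phase_layers, wavelength, dx, gap):
    """Send the field through phase-only layers separated by free-space gaps,
    returning the intensity pattern at the output plane."""
    for phase in phase_layers:
        field = field * np.exp(1j * phase)      # phase modulation by one layer
        field = angular_spectrum_propagate(field, wavelength, dx, gap)
    return np.abs(field) ** 2                   # detected intensity

# Example with assumed parameters: 64x64 grid, green light, 1 µm pixels,
# 100 µm layer spacing, two random (untrained) phase layers.
rng = np.random.default_rng(0)
n, wavelength, dx, gap = 64, 532e-9, 1e-6, 100e-6
field_in = np.ones((n, n), dtype=complex)       # unit-amplitude plane wave input
layers = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(2)]
intensity = diffractive_decoder(field_in, layers, wavelength, dx, gap)
```

In the published approach, an optimizer would adjust the per-pixel phase values of each layer so that a low-resolution encoded input decodes into a higher-resolution image at the output plane; this sketch only shows the forward light-propagation model such training would differentiate through.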
POV: There’s a slight chill in the air, the leaves are changing, and you might just be wearing corduroy. You know what that means? Facebook-turned-Meta’s Connect conference is here. And boy, was Connect 2022’s prerecorded keynote presentation video a doozy.
The hour-and-a-half long clip was jam-packed with… well, a lot. There was a product reveal. There were finally legs, but only for executive avatars. There was — and we cannot stress this enough — more nodding blankly at a camera than we’ve probably ever seen in a single video. One particularly striking revelation, though? The promise that soon, the metaverse might be a space where we can all betray those closest to us. Finally!
“Calling all crewmates,” said Meta’s Developer Relations chief Melissa Brown, as she announced that the VR version of the beloved online game “Among Us” is officially open for pre-order through the Meta Quest Store. “Soon, you’ll be betraying friends from a first-person perspective.”
A study in a virtual reality environment found that action video game players have better implicit temporal skills than non-gamers. They are better at preparing to time their reactions in tasks that require quick reactions and they do it automatically, without consciously working on it. The paper was published in Communications Biology.
Many research studies have shown that playing video games enhances cognition, including an increased ability to learn on the fly and improved control of attention. The extent of these improvements is unclear, however, and depends on the type of gameplay.
Success in action video games depends on the player’s skill in making precise responses at just the right time. Players benefit from practice, during which they refine their time-related expectations of in-game developments, even when they are unaware of it. This largely unconscious tracking of time, in which a person prepares to react at the right moment based on expectations of how their situation will unfold, is called incidental temporal processing.
Neuralink’s invasive brain implant vs. Phantom Neuro’s minimally invasive muscle implant: a deep dive on brain-computer interfaces, Phantom Neuro, and the future of restoring lost function.
Connor Glass. Phantom is creating a human-machine interfacing system for lifelike control of technology. We are currently hiring skilled and forward-thinking electrical, mechanical, UI, AR/VR, and AI/ML engineers. Looking to get in touch with us? Send us an email at [email protected].
Phantom Neuro. Phantom is a neurotechnology company, spun out of the lab at The Johns Hopkins University School of Medicine, that is enabling lifelike control of robotic orthopedic technologies, such as prosthetic limbs and exoskeletons. Phantom’s solution, the Phantom X, consists of low-risk implantable sensors, AI, and enabling software. By providing superior control of robotic orthopedic mechanisms, the Phantom X will drastically improve the lives of individuals with limb difference who have yet to see a tangible improvement in quality of life despite significant advancements in the field of robotics.