
Carmack, known for his work in VR and on classic games like Doom and Quake, is stepping down from his consulting CTO role at Meta.

John Carmack, a titan of the technology industry known for his work on virtual reality as well as classic games like Doom, is departing Meta.

Carmack originally joined Oculus as CTO in 2013, after helping to promote the original Oculus Rift prototypes that he received from Palmer Luckey, and got pulled into Meta when the company (then Facebook) acquired Oculus in 2014.


“We built something pretty close to the right thing,” Carmack wrote about the Quest 2. He also said that he “wearied of the fight” with Meta, which is burning billions in its Reality Labs division to build things like VR headsets and software for its vision of the metaverse. Carmack would also write internal posts criticizing CEO Mark Zuckerberg and CTO Andrew Bosworth’s decision making while at Meta, The New York Times reported.

The metaverse was a huge, company-destroying blunder. The smart move this decade is to drop everything else and chase AGI.


VR pioneer John Carmack is leaving Meta for good. With his departure, the industry loses a visionary and an important voice.

Carmack published his farewell letter on Facebook after parts of the email were leaked to the press.

In the message to employees, Carmack, as usual, doesn't mince words. He cites the company's lack of efficiency, and his powerlessness to change that circumstance, as his reasons for leaving.

Honda is pulling away from a design practice that's (literally) shaped automaking since the '30s.

The $43 billion company still depends on life-size clay models to evaluate its designs, a tried-and-true method pioneered by GM designer Harley Earl. But Honda has been gradually relying less on the practice since the coronavirus tore across the globe and the resulting lockdowns divided its teams in Los Angeles, Ohio, and Japan. The way Honda tells it, those early 2020 travel rules "threatened" its designers' ability to work with engineers on the '24 Prologue, creating a window for a deeper dive into virtual reality.

HEXATRACK Space Express Concept: Connecting Lunar & Martian Cities (Lunar & Mars Glass) and Beyond (short version). HEXATRACK Space Express Concept designed and created by Yosuke A. Yamashiki, Kyoto University.
Lunar Glass & Mars Glass designed and created by Takuya Ono, Kajima Co. Ltd.
Visual effects and detailed design by Junya Okamura.
Concept advisor: Naoko Yamazaki, astronaut, SIC Human Spaceology Center, GSAIS, Kyoto University.
VR of Lunar & Mars Glass created by Natsumi Iwato and Mamiko Hikita, Kyoto University.
VR contents of Lunar & Mars Glass by Shinji Asano, Natsumi Iwato, Mamiko Hikita, and Junya Okamura.
Daidaros concept by Takuya Ono.
Terraformed Mars designed by Fuka Takagi & Yosuke A. Yamashiki.
Exoplanet images created by Ryusuke Kuroki, Fuka Takagi, Hiroaki Sato, Ayu Shiragashi, and Y. A. Yamashiki.
All music ("Lunar City," "Martian," "Neptune") composed and played by Yosuke Alexandre Yamashiki.

Tumors are three-dimensional phenomena, but so far researchers have largely used 2D imagery to scan and study them. With the advancement of virtual reality in recent years, Greg Hannon, professor and director of the Cancer Research UK Cambridge Institute, saw an opportunity to advance cancer research by incorporating 3D imaging and VR technology.

In 2017, his IMAXT team (Imaging and Molecular Annotation of Xenografts and Tumors) received a £20 million grant from Cancer Grand Challenges to develop VR software that could map tumours at an unprecedented level of detail. In the last few years, the project welcomed interdisciplinary and international collaborations between scientists and artists who created and tested the technology on breast cancers.

The software, developed by Suil, will be available for researchers to use worldwide for academic, non-commercial research.


Good Morning, 2033 — A Sci-Fi Short Film.

What will your average morning look like in 2033? And who hacked us?

This sci-fi short film explores a number of futurist predictions for the 2030s.

Sleep with a brain-sensing sleep mask that determines when to wake you. Wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99 CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain; live neurofeedback helps you meditate. Your kitchen uses biodata to figure out the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter the high-tech bathroom, which uses UV lights and robotics to clean your body for you. Ubers come in the form of flying cars, eVTOL aircraft that move at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic materials.

One of the promising technologies being developed for next-generation augmented/virtual reality (AR/VR) systems is holographic image displays that use coherent light illumination to emulate the 3D optical waves representing, for example, the objects within a scene. These holographic image displays can potentially simplify the optical setup of a wearable display, leading to compact and lightweight form factors.

On the other hand, an ideal AR/VR experience requires relatively high-resolution images to be formed within a large field-of-view to match the resolution and the viewing angles of the human eye. However, the capabilities of holographic image projection systems are restricted mainly due to the limited number of independently controllable pixels in existing image projectors and spatial light modulators.

A recent study published in Science Advances reported a deep learning-designed transmissive material that can project super-resolved images using low-resolution image displays. In their paper titled "Super-resolution image display using diffractive decoders," UCLA researchers, led by Professor Aydogan Ozcan, used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale, and created a material-based physical image decoder that achieves super-resolution image projection as the light is transmitted through its layers.
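To make the idea concrete, here is a minimal toy sketch, not the authors' code, of how such a diffractive decoder can be designed in simulation: trainable phase-only layers are optimized so that a low-resolution display pattern, after free-space propagation through the layers, forms a higher-resolution intensity image. The grid size, wavelength, layer count, propagation distances, and random training pattern below are all illustrative assumptions.

```python
# Toy sketch of a "diffractive decoder" (illustrative assumptions throughout,
# not the published implementation): optimize phase masks so that a low-res
# input pattern decodes into a higher-res image after propagation.
import torch

N = 64                # simulation grid size in pixels (assumed)
wavelength = 0.75e-6  # illumination wavelength in meters (assumed)
dx = 0.5e-6           # grid spacing in meters (assumed)
z = 100e-6            # propagation distance between planes (assumed)

def angular_spectrum(field, z):
    """Propagate a complex optical field a distance z (angular spectrum method)."""
    fx = torch.fft.fftfreq(N, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Two trainable phase-only diffractive layers play the role of the decoder.
phases = [torch.zeros(N, N, requires_grad=True) for _ in range(2)]
opt = torch.optim.Adam(phases, lr=0.05)

# Toy training pair: an 8x8 low-res display pattern upsampled onto the grid,
# and a random stand-in for the super-resolved target image.
low_res = torch.rand(8, 8)
display = low_res.repeat_interleave(8, 0).repeat_interleave(8, 1)
target = torch.rand(N, N)

for step in range(200):
    field = display.to(torch.complex64)
    for phi in phases:  # propagate to each layer, then apply its phase
        field = angular_spectrum(field, z) * torch.exp(1j * phi)
    out = angular_spectrum(field, z).abs() ** 2  # intensity at the output plane
    loss = torch.nn.functional.mse_loss(out / out.max(), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the published work the decoder is a fixed, passive fabricated material rather than a live simulation; the sketch only illustrates the design-time optimization loop, in which gradients flow through a differentiable model of light propagation to shape the layers.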

POV: There’s a slight chill in the air, the leaves are changing, and you might just be wearing corduroy. You know what that means? Facebook-turned-Meta’s Connect conference is here. And boy, was Connect 2022’s prerecorded keynote presentation video a doozy.

The hour-and-a-half long clip was jam-packed with… well, a lot. There was a product reveal. There were finally legs, but only for executive avatars. There was — and we cannot stress this enough — more nodding blankly at a camera than we’ve probably ever seen in a single video. One particularly striking revelation, though? The promise that soon, the metaverse might be a space where we can all betray those closest to us. Finally!

“Calling all crewmates,” said Meta’s Developer Relations chief Melissa Brown, as she announced that the VR version of the beloved online game “Among Us” is officially open for pre-order through the Meta Quest Store. “Soon, you’ll be betraying friends from a first-person perspective.”