
Apple’s ARKit and Google’s ARCore may have multi-million-user installed bases, but so does Niantic’s Pokémon GO, the first hit augmented reality game. Now Niantic plans to offer its custom AR software to other developers as the Real World Platform, and is teasing advanced features that go beyond the capabilities of Apple’s and Google’s development kits.

Already used in Pokémon GO, Ingress, and Harry Potter: Wizards Unite, the Real World Platform is an evolving software engine that adds digital characters and shared social experiences to real-world map data. Niantic recently bolstered the platform’s development team by acquiring computer vision specialists Escher Reality and Matrix Mill, the latter bringing machine learning expertise.

Matrix Mill’s machine learning will power one of the Real World Platform’s upcoming tricks: a real-time AR occlusion engine. As demonstrated, the feature lets 3D versions of Pikachu and Eevee disappear behind real-world objects and people, even as both the camera and the people move. Niantic is using a trained neural network to determine which parts of the scene should be treated as “foreground” or “background” relative to the AR characters.
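
Niantic hasn’t published how the occlusion engine works under the hood, but the effect can be sketched as a depth-based compositing pass: estimate per-pixel depth for the camera frame with a neural network, then draw the character only where it sits closer to the camera than the real scene. A minimal illustration in Python (the `estimate_depth` callable is a stand-in for whatever network Niantic actually uses):

```python
import numpy as np

def composite_with_occlusion(camera_frame, char_rgba, char_depth, estimate_depth):
    """Draw an AR character only where it is nearer than the real scene.

    camera_frame: (H, W, 3) uint8 camera image
    char_rgba:    (H, W, 4) rendered character with an alpha channel
    char_depth:   (H, W) character depth in meters (np.inf where absent)
    estimate_depth: callable returning an (H, W) depth map for the real
                    scene -- in practice a trained neural network.
    """
    scene_depth = estimate_depth(camera_frame)   # per-pixel real-world depth

    # Occlusion test: the character is visible only where it sits in front.
    visible = char_depth < scene_depth           # (H, W) boolean mask

    alpha = (char_rgba[..., 3:4] / 255.0) * visible[..., None]
    out = camera_frame * (1 - alpha) + char_rgba[..., :3] * alpha
    return out.astype(np.uint8)
```

A production system would also smooth the mask over time so the occlusion boundary doesn’t flicker as the camera and people move.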

Read more

Ever wish you could visit other planets in our solar system without launching on a deep-space mission? Now you can embark on an interplanetary adventure right from the palm of your hand, thanks to gorgeous, 3D-printed planet models and an augmented-reality (AR) app.

Brought to you by AstroReality, the same company that created the “Lunar” AR moon model and its new Earth counterpart, this set includes miniature models of all eight planets and one model of the dwarf planet Pluto. Each model is 1.2 inches (3 centimeters) in diameter and color-printed with a resolution of 0.1 millimeter per pixel.
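
As a quick back-of-the-envelope check (my arithmetic, not an AstroReality spec), that print resolution works out to roughly 940 pixels around each model’s equator:

```python
import math

diameter_mm = 30             # each model is 3 cm (1.2 in) across
resolution_mm_per_px = 0.1   # stated print resolution

equator_px = math.pi * diameter_mm / resolution_mm_per_px
print(round(equator_px))     # ~942 printed pixels around the equator
```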

Without the AR app, you can admire detailed features such as Pluto’s “heart” and Jupiter’s Great Red Spot. But the real extraterrestrial adventure begins when you open the AstroReality app (available for iOS and Android) on your mobile device and point the camera at any of the nine models.

Read more

The famous psychologist Timothy Leary once referred to himself as a “surfer,” envisioning a future where, “[t]o study biology, you can press a button and make yourself part of the human body. You can become a white blood cell and learn about the circulatory system by traveling through an artery. You can call up the Prado Museum in Madrid and study Goya’s paintings.”


When I think about the future, I envision mass technological disruptions across the entire landscape: artificial intelligence (AI) embedded into the very fabric of our architecture and institutions, 3D printing transforming our socio-economic system from scarcity to abundance, and virtual and augmented reality (VR/AR) unleashing infinite potential in shaping our perceptions of reality.

One could argue that we’ve already been experimenting with VR/AR via the use of psychedelic drugs like psilocybin and DMT. But for many, the perception of these drugs tends to carry an unfortunate negative connotation. When people think of someone doing shrooms, a lot of them picture a person going mad in the middle of the woods. When people think of someone doing LSD, a lot of them picture a person believing they’re Peter Pan as they hoist themselves off the top of a skyscraper.

The devil may be in the details, but for those who actually experiment with psychedelics, the devil isn’t this terrible thing which results in their immediate death or psychological disruption; the devil is the infinite potential of their mind overcoming the many obstacles of reality.

Read more

Let me propose a hypothetical future scenario: let’s say we’ve developed an advanced method of brain-to-brain (B2B) communication, which, naturally, has become quite popular among the younger generation of that time.


How might we judge futuristic societies using our present-day standards? Better yet, how might the past have judged us today, and would there be a difference?

Read more

Pasadena-based artificial intelligence tech startup Oben is about to roll out its first product, PAI (Personal AI), a consumer app designed to let users create an AI-driven avatar with their own look and voice.

Its underlying AI technology is already seeing select professional use. More broadly, Oben’s team believes AI can have a wide range of uses, including virtual and augmented reality, gaming, content creation, and retail.

With PAI, users essentially “teach” the app about themselves. “You take a selfie, and a visual avatar is ready in the app,” Oben CEO and co-founder Nikhil Jain explained, adding that users can then customize their look. Plus, simply by speaking a few sentences, users can teach their avatar to talk or sing. These features can be used on social media and the like.

Read more

DigiLens, a developer of transparent waveguide display technology, says it’s working toward a waveguide display that could bring a 150-degree field of view to AR and VR (or XR) headsets. The company expects the display to be available in 2019.

Founded in 2005, DigiLens has developed a proprietary waveguide manufacturing process that allows the company to “print” light-manipulating structures (Bragg gratings) into a thin, transparent material; light can be guided along the optic and made to project perpendicularly out of it, forming an image in the user’s eye. While DigiLens isn’t the only company making waveguide displays, it claims its process offers a steep cost advantage over competitors’. The company says it has raised $35 million across its Series A and B investment rounds.
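
The article doesn’t go into the optics, but the angular and wavelength selectivity of such gratings follows the textbook Bragg condition, which is what lets one grating couple the image into the waveguide and another couple it back out toward the eye. For reference, the general relation (not DigiLens’s proprietary design):

```latex
% Bragg condition for a volume grating of period \Lambda in a medium of
% refractive index n; \theta_B is the angle measured from the grating
% planes and m is the diffraction order:
m \lambda = 2 n \Lambda \sin\theta_B
```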

While DigiLens’ displays have primarily been used in HUD-like applications, the company is increasingly positioning its wares toward the growing wearable, AR, and VR industries. At AWE 2018 last week, DigiLens founder and CTO Jonathan Waldern told me that the company expects to offer a waveguide display suitable for AR and VR headsets that could deliver a 150-degree field of view across both eyes. He said a single display could serve both AR and VR modes in the same headset by using a liquid crystal blackout layer that switches between transparent and opaque, which DigiLens partner Panasonic has developed. A clip-on light blocker or other tinting film ought to be suitable as well.
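
“150 degrees across both eyes” suggests a binocular combination of two overlapping per-eye views rather than a single 150-degree eyepiece. The arithmetic is simple; the split below is an assumed illustration, not a published DigiLens spec:

```python
def binocular_fov(per_eye_fov_deg: float, overlap_deg: float) -> float:
    """Combined horizontal FOV of two eye displays whose views
    share a central overlap region."""
    return 2 * per_eye_fov_deg - overlap_deg

# Illustrative numbers only: two 100-degree displays overlapping
# by 50 degrees span 150 degrees in total.
print(binocular_fov(100, 50))  # -> 150
```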

Read more

Artificial intelligence has exploded, and perhaps no one knows it more than Harry Shum, the executive vice president in charge of Microsoft’s AI and Research Group, which has been at the center of a major technological shift inside the company.

Delivering the commencement address Friday at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, Shum drew on three emerging technologies — quantum computing, AI, and mixed reality — to offer life lessons and point toward the future of technology for the class of 2018.

Read more

Virtual reality is a gateway to powerful experiences. Strap on a pair of VR goggles, look around, and the scene you see will adjust, in real time, to match your gaze. But the technology is a visual one. Virtual reality doesn’t include touch, although there are controllers that provide “hand presence,” allowing you to manipulate objects in the virtual world, or shoot a simulated gun. So while VR today could simulate a Westworld-like setting, you’re not going to be actually feeling the hug of a cowboy-robot on your body while using any of the major platforms—at least not for a while.

The Force Jacket, a garment from Disney Research, aims to address that gap. Made out of a converted life jacket, the prototype uses embedded airbags that inflate, deflate, or even vibrate to literally give its wearer a feeling of being touched. When coupled with VR software, the setup can simulate something bizarre—a snake slithering on you—or more pedestrian: getting hit by a snowball. In brief, the sensation of touch you feel on your actual body can match what you see in a virtual one. (The device is the result of a research project, so these lifejacket-garments aren’t exactly on sale on Amazon. It’s also not the first research to focus on incorporating haptics into VR.)
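
Disney Research hasn’t released the jacket’s control software, but the core idea reduces to mapping VR contact events onto the airbags that cover the affected body region. A minimal sketch, in which the event names, airbag layout, and `set_airbag` interface are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    region: str      # body region hit in VR, e.g. "chest" (assumed taxonomy)
    pressure: float  # normalized 0..1 target intensity
    vibrate: bool    # True for buzzing sensations rather than steady pressure

# Assumed layout: which airbags cover which body region.
AIRBAGS_BY_REGION = {
    "chest": [0, 1, 2],
    "left_shoulder": [3],
    "right_shoulder": [4],
}

def handle_contact(event: ContactEvent, set_airbag):
    """Translate one VR contact event into per-airbag commands.

    set_airbag(index, pressure, vibrate) stands in for the real
    pneumatic controller interface, which isn't public.
    """
    for bag in AIRBAGS_BY_REGION.get(event.region, []):
        set_airbag(bag, event.pressure, event.vibrate)

# Example: a snowball hit -- a brief, firm press on the chest.
handle_contact(ContactEvent("chest", pressure=0.8, vibrate=False),
               set_airbag=lambda i, p, v: print(f"bag {i}: p={p} vib={v}"))
```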

“If you’ve experienced virtual reality or augmented reality, it’s largely based in this immersive visual world,” says Alexandra Delazio, the lead researcher on the jacket project and currently a research engineer at the University of Pittsburgh, where she works on technology for people with disabilities. “The real world is not just visual—it’s full of force and pressure-based interaction.” The goal of the jacket is to bring that sense of touch to the virtual world, or maybe even offer a way for someone far away to give you a hug.

Read more

Four virtual reality (VR) veterans from Discovery Digital, Oculus Story Studio, and Lightshed officially launched their new company out of stealth mode in San Francisco this week. Dubbed Tomorrow Never Knows, the new studio aims to use virtual and augmented reality, as well as other emerging technologies including artificial intelligence, for groundbreaking storytelling projects, co-founder and CEO Nathan Brown said in an interview with Variety.

“The thesis behind the company is to consistently violate the limits of storytelling, forcing the creation of new tools, methodologies and workflow and to do this intentionally so we create original creative and technology IP,” he said.

Before founding Tomorrow Never Knows, Brown co-founded Discovery VR, which has become one of the most ambitious network-backed VR outlets. Also hailing from Discovery VR is Tomorrow Never Knows co-founder Tom Lofthouse. They are joined by Gabo Arora, whose previous work as the founder of Lightshed included VR documentaries like “Clouds Over Sidra” and “Waves of Grace,” as well as Oculus Story Studio co-founder Saschka Unseld, the director of the Emmy Award-winning VR animation short “Henry” and the Emmy-nominated VR film “Dear Angelica.”

Read more