The Future of AI-Generated Art Is Here: An Interview With Jordan Tanner

Unlike many of the so-called “artists” strewing junk around the halls of modern art museums, Jordan Tanner is actually pushing the frontiers of his craft. His eclectic portfolio includes vaporwave-inspired VR experiences, NFTs & 3D-printed figurines for Popular Front, and animated art for this very magazine. His recent AI-generated art made using OpenAI’s DALL-E software was called “STUNNING” by Lee Unkrich, the director of Coco and Toy Story 3.

We interviewed the UK-born, Israel-based artist about the imminent AI-generated art revolution and why all is not lost when it comes to the future of art. In Tanner’s eyes, AI-generated art is similar to having the latest, flashiest Nikon camera—it doesn’t automatically make you a professional photographer. Tanner also created a series of unique, AI-generated pieces for this interview, which can be enjoyed below.

Thanks for talking to Countere, Jordan. Can you tell us a little about your background as an artist?

How image features influence reaction times

It’s an everyday scenario: you’re driving down the highway when, out of the corner of your eye, you spot a car merging into your lane without signaling. How fast can your eyes react to that visual stimulus? Would it make a difference if the offending car were blue instead of green? And if the color green shortened that split-second delay between the stimulus first appearing and the eye beginning to move toward it (known to scientists as saccadic latency), could drivers benefit from an augmented reality overlay that made every merging vehicle green?

Qi Sun, a joint professor in Tandon’s Department of Computer Science and Engineering and the Center for Urban Science and Progress (CUSP), is collaborating with neuroscientists to find out.

He and his Ph.D. student Budmonde Duinkharjav—along with colleagues from Princeton, the University of North Carolina, and NVIDIA Research—recently authored the paper “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” presenting a model that can be used to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Inspired by neuroscience, the model could ultimately have great implications for telemedicine, e-sports, and any other arena in which AR and VR are leveraged.
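The paper’s model is trained on eye-tracking data, but the general shape of the idea can be sketched in a few lines. Below is a toy illustration, not the authors’ model: simple image statistics (mean luminance, RMS contrast) are mapped through invented weights to the parameters of a log-normal latency distribution, so that more salient patches predict faster saccades.

```python
# Toy sketch: predicting a saccadic-latency distribution from image
# statistics. The features and weights below are invented for
# illustration; the actual paper learns its model from eye-tracking data.
import numpy as np

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Simple statistics of a grayscale stimulus patch with values in [0, 1]."""
    return np.array([1.0, patch.mean(), patch.std()])  # bias, luminance, contrast

# Hypothetical weights mapping features to (mu, sigma) of a log-normal
# latency model in milliseconds: salient (high-contrast) patches -> faster.
W_MU = np.array([5.6, -0.4, -0.8])
W_SIGMA = np.array([0.15, 0.0, 0.0])

def sample_latencies(patch: np.ndarray, n: int = 10_000) -> np.ndarray:
    """Sample saccadic latencies (ms) predicted for a stimulus patch."""
    f = patch_features(patch)
    return np.random.lognormal(mean=f @ W_MU, sigma=max(f @ W_SIGMA, 1e-3), size=n)

rng = np.random.default_rng(0)
low = 0.5 + 0.01 * rng.standard_normal((32, 32))   # flat, low-contrast patch
high = rng.random((32, 32))                        # high-contrast patch
print(f"low-contrast median latency:  {np.median(sample_latencies(low)):.0f} ms")
print(f"high-contrast median latency: {np.median(sample_latencies(high)):.0f} ms")
```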

New chip-based beam steering device lays groundwork for smaller, cheaper lidar

Researchers have developed a new chip-based beam steering technology that provides a promising route to small, cost-effective and high-performance lidar (or light detection and ranging) systems. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is used in a wide range of applications such as autonomous driving, free-space optical communications, 3D holography, biomedical sensing and virtual reality.
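At its core, lidar ranging is a time-of-flight measurement: a pulse’s round-trip travel time fixes the distance to the target. A minimal sketch of that calculation:

```python
# Core lidar measurement: distance from a pulse's round-trip time of flight.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(round_trip_s: float) -> float:
    """Target distance in metres; divide by 2 for the out-and-back path."""
    return C * round_trip_s / 2

print(f"{range_from_tof(667e-9):.1f} m")  # a ~667 ns echo puts the target ~100 m away
```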

“Optical beam steering is a key technology for lidar systems, but conventional mechanical beam-steering systems are bulky, expensive, sensitive to vibration and limited in speed,” said research team leader Hao Hu from the Technical University of Denmark. “Although devices known as chip-based optical phased arrays (OPAs) can quickly and precisely steer light in a non-mechanical way, so far, these devices have had poor beam quality and a field of view typically below 100 degrees.”

In Optica, Hu and co-author Yong Liu describe their new chip-based OPA that solves many of the problems that have plagued OPAs. They show that the device can eliminate a key optical artifact known as aliasing, achieving beam steering over a large field of view while maintaining high beam quality, a combination that could greatly improve lidar systems.
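To see why aliasing matters, consider the far-field pattern of a uniform emitter array: once the emitter pitch exceeds the wavelength, the steered beam repeats as grating lobes (aliases) inside the field of view. A quick numerical sketch with assumed values, not the parameters of Hu and Liu’s device:

```python
# Sketch: grating lobes (aliasing) in an optical phased array. The
# wavelength, pitch, and emitter count are assumptions chosen to make
# the effect visible, not the authors' device parameters.
import numpy as np

WAVELENGTH = 1.55e-6  # telecom-band wavelength, metres
N_EMITTERS = 64

def far_field(pitch: float, steer_deg: float, angles_deg: np.ndarray) -> np.ndarray:
    """Normalized far-field intensity of a uniform linear array."""
    k = 2 * np.pi / WAVELENGTH
    delta = np.sin(np.radians(angles_deg)) - np.sin(np.radians(steer_deg))
    n = np.arange(N_EMITTERS)
    field = np.exp(1j * k * pitch * np.outer(delta, n)).sum(axis=1)
    return (np.abs(field) / N_EMITTERS) ** 2

angles = np.linspace(-90, 90, 3601)
for pitch in (0.7 * WAVELENGTH, 2.0 * WAVELENGTH):
    power = far_field(pitch, steer_deg=20.0, angles_deg=angles)
    lobes = np.unique(np.round(angles[power > 0.5]))
    print(f"pitch = {pitch / WAVELENGTH:.1f} lambda -> strong lobes near {lobes.tolist()} deg")
```

With the sub-wavelength pitch only the intended 20-degree beam survives; at twice the wavelength, copies of it appear elsewhere in the field of view, which is exactly the artifact an alias-free OPA has to suppress.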

Metaverse Headsets and Smart Glasses are the Next-gen Data Stealers

In a paper distributed via ArXiv, titled “Exploring the Unprecedented Privacy Risks of the Metaverse,” boffins at UC Berkeley in the US and the Technical University of Munich in Germany play-tested an “escape room” virtual reality (VR) game to better understand just how much data a potential attacker could access. Through a 30-person study of VR usage, the researchers – Vivek Nair (UCB), Gonzalo Munilla Garrido (TUM), and Dawn Song (UCB) – created a framework for assessing and analyzing potential privacy threats. They identified more than 25 examples of private data attributes available to potential attackers, some of which would be difficult or impossible to obtain from traditional mobile or web applications.

The metaverse now entering the mainstream has long been an essential part of the gaming community: interaction-based games like Second Life, Pokemon Go, and Minecraft have served as virtual social platforms for years. The founder of Second Life, Philip Rosedale, and many other security experts have lately been vocal about Meta’s impact on data privacy. Since the core concept is similar, those earlier platforms offer a preview of the data privacy issues Meta’s metaverse is likely to face.
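To make the threat concrete, here is an illustrative sketch (not the researchers’ actual framework) of how mundane pose telemetry can leak personal attributes: a user’s standing height and handedness fall out of headset and controller streams almost for free. All field names and constants below are assumptions.

```python
# Illustrative sketch of attribute inference from VR telemetry, the kind
# of leak the paper warns about. Data is simulated; the anthropometric
# ratio and all constants are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)
frames = 90 * 60  # one minute of ~90 Hz tracking data

# Simulated per-frame telemetry a game server would already receive.
headset_y = 1.62 + 0.03 * rng.standard_normal(frames)      # headset height, m
right_hand_speed = np.abs(rng.normal(0.8, 0.3, frames))    # controller speeds, m/s
left_hand_speed = np.abs(rng.normal(0.4, 0.2, frames))

# Inference from nothing but pose streams:
eye_height = np.median(headset_y)          # typical headset height ~ eye height
est_stature = eye_height / 0.936           # eye height is roughly 93.6% of stature
handedness = "right" if right_hand_speed.mean() > left_hand_speed.mean() else "left"

print(f"estimated height: {est_stature:.2f} m, likely {handedness}-handed")
```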

There has been a buzz going around the tech market that by the end of 2022, the metaverse could push AR/VR device shipments as high as 14.19 million units, up from 9.86 million in 2021, a year-over-year increase of roughly 44%. The market is expected to boom despite component shortages and the difficulty of developing new technologies, with growth also driven by the increased demand for remote interactivity stemming from the pandemic. But what will happen when these VR or metaverse headsets start stealing your precious data? Not just headsets but smart glasses, too, are prime suspects when it comes to privacy concerns.

Several weeks ago, Facebook introduced a new line of smart glasses called Ray-Ban Stories, which can take photos, shoot 30-second videos, and post them on the owner’s Facebook feed. Priced at US$299 and powered by Facebook’s virtual assistant, the web-connected shades can also take phone calls and play music or podcasts.

Medicine and the metaverse: New tech allows doctors to travel inside of your body

The world of technology is rapidly shifting from flat media viewed in the third person to immersive media experienced in the first person. Recently dubbed “the metaverse,” this major transition in mainstream computing has ignited a new wave of excitement over the core technologies of virtual and augmented reality. But there is a third, often overlooked technology area, known as telepresence, that will become an important part of the metaverse.

While virtual reality brings users into simulated worlds, telepresence (also called telerobotics) uses remote robots to bring users to distant places, giving them the ability to look around and perform complex tasks. This concept goes back to science fiction of the 1940s and a seminal short story by Robert A. Heinlein entitled Waldo. If we combine that concept with another classic sci-fi tale, Fantastic Voyage (1966), we can imagine tiny robotic vessels that go inside the body and swim around under the control of doctors who diagnose patients from the inside, and even perform surgical tasks.

Top Technologies that are Taking Us to the Metaverse in 2022

The existence of ethical concerns is precisely why it’s important for business owners to understand the different technologies driving the Metaverse forward and what impact they may have on users, the environment, and our society. By understanding these technologies, businesses can find constructive uses of virtual reality connectivity that enrich our society and keep the digital economy booming.

In addition, understanding these technologies is important because as more advanced techniques are developed for use in Metaverse projects, the average cost of US$48,000 for app design in the USA will undoubtedly go up. Business owners need to understand what they need to focus on when planning their next move.

Businesses also need to understand that as the landscape of the Metaverse evolves, the nature of the content will change as well. Creating quality content marketing strategies with these immersive, virtual environments in mind is essential as the industry moves forward.

Swave Photonics Sees Holograms Getting Real

Swave Photonics has designed holographic chips on a proprietary diffractive optics technology to “bring the metaverse to life.”

Can virtual reality become indistinguishable from actual reality? Swave Photonics, a spinoff of Imec and Vrije Universiteit Brussel, believes it can. The Leuven, Belgium–based startup has raised €7 million in seed funding to accelerate the development of its multi-patented Holographic eXtended Reality (HXR) technology.

“Our vision is to empower people to visualize the impossible, collaborate, and accomplish more,” Théodore Marescaux, CEO and founder of Swave Photonics, told EE Times Europe. “With our HXR technology, we want to make that extended reality practically indistinguishable from the real world.”

What does it mean to project images that are indistinguishable from reality? “It means a very wide field of view, colors, high dynamic range, the ability to move your head around an object and see it from different angles, and the ability to focus,” he said.
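Swave’s HXR hardware is proprietary, but the computational core of any phase-only holographic display can be illustrated with the classic Gerchberg-Saxton algorithm: iterate between the hologram plane and the far field until a pure phase pattern reproduces a target image. The sketch below is textbook material, not Swave’s method:

```python
# Textbook Gerchberg-Saxton sketch: find a phase-only hologram whose
# far-field (Fourier-plane) intensity approximates a target image.
# This is a generic classroom method, not Swave's proprietary HXR design.
import numpy as np

def gerchberg_saxton(target: np.ndarray, iters: int = 50) -> np.ndarray:
    """Return a phase pattern (radians) for a phase-only modulator."""
    amp = np.sqrt(target)                                       # desired far-field amplitude
    field = np.exp(2j * np.pi * np.random.rand(*target.shape))  # random phase start
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = amp * np.exp(1j * np.angle(far))                  # impose target amplitude
        field = np.exp(1j * np.angle(np.fft.ifft2(far)))        # keep phase only
    return np.angle(field)

target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0                                      # bright square on dark field
phase = gerchberg_saxton(target)
replay = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
corr = np.corrcoef(replay.ravel(), target.ravel())[0, 1]
print(f"correlation between replayed field and target: {corr:.2f}")
```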

Development of high-performance, high-tension wearable displacement sensors

Wearable displacement sensors—which are attached to a human body, detect movements in real time and convert them into electrical signals—are currently being actively studied. However, stretchable displacement sensors still face many limitations, such as low tensile properties and complex manufacturing processes.

If a sensor that combines good tensile properties with easy manufacturing can be developed, it could be attached to the human body, allowing large movements of joints or fingers to be captured for various applications such as AR and VR. A research team led by Sung-Hoon Ahn, mechanical engineering professor at Seoul National University, has developed a piezoelectric strain sensor with high sensitivity and high stretchability based on kirigami design cutting.

In this research, a stretchable piezoelectric displacement sensor was manufactured by applying the kirigami structure to a film-type piezoelectric material, and its performance was evaluated. Different sensing characteristics were observed depending on the kirigami pattern, with higher sensitivity and tensile properties than existing technologies. Wireless haptic gloves for VR were built with the developed sensor, and a piano could be played successfully while wearing them.
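As a rough illustration of that sensing chain, the sketch below maps a piezoelectric voltage trace to strain and then to a finger-joint angle, the kind of signal a VR glove would consume. The linear calibration constants are invented; a real kirigami sensor would be characterized empirically:

```python
# Rough sketch of the sensing chain: piezoelectric voltage -> strain ->
# joint angle. The linear calibration constants are invented for
# illustration; a real kirigami sensor is calibrated empirically.
import numpy as np

VOLTS_PER_PCT_STRAIN = 0.02   # assumed sensor calibration, V per % strain
PCT_STRAIN_PER_DEGREE = 1.5   # assumed strain produced per degree of flexion

def joint_angle_deg(voltage: np.ndarray) -> np.ndarray:
    """Estimate joint flexion angle (degrees) from sensor voltage (V)."""
    strain_pct = voltage / VOLTS_PER_PCT_STRAIN
    return strain_pct / PCT_STRAIN_PER_DEGREE

# Simulated trace: a finger flexing to about 60 degrees and relaxing.
t = np.linspace(0.0, 2.0, 200)
voltage = 1.8 * np.clip(np.sin(np.pi * t), 0.0, 1.0)   # peak 1.8 V
print(f"peak flexion: {joint_angle_deg(voltage).max():.0f} degrees")
```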
