Deep Dive: Why 3D reconstruction may be the next tech disruptor

To interpret the world around us, artificial intelligence (AI) systems must understand visual scenes in three dimensions. Images therefore play an essential role in computer vision, and their quality significantly affects system performance. Unlike widely available 2D data, 3D data is rich in scale and geometry information, enabling a better machine understanding of the environment.

Data-driven 3D modeling, or 3D reconstruction, is a growing computer vision domain increasingly in demand from industries including augmented reality (AR) and virtual reality (VR). Rapid advances in implicit neural representation are also opening up exciting new possibilities for virtual reality experiences.
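The jump from 2D pixels to 3D geometry can be made concrete with the simplest reconstruction primitive: back-projecting a depth map into a point cloud through a pinhole camera model. The sketch below is a minimal illustration with invented intrinsics, not the method of any system mentioned here.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud
    using a pinhole camera model with focal lengths (fx, fy) and
    principal point (cx, cy)."""
    h, w = depth.shape
    # u varies along columns, v along rows (pixel coordinates)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat surface 2 m from the camera maps to points all at z = 2.
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

Each pixel with a known depth becomes one 3D point; fusing such point clouds from many viewpoints is the starting point for most data-driven reconstruction pipelines.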

Does AI need a body? | John Carmack and Lex Fridman

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=I845O57ZSy4

GUEST BIO:
John Carmack is a legendary programmer, co-founder of id Software, and lead programmer of many revolutionary video games including Wolfenstein 3D, Doom, Quake, and the Commander Keen series. He is also the founder of Armadillo Aerospace, and for many years the CTO of Oculus VR.



The Power of Brain-Computer Interfaces | TVS

A Brain-Computer Interface (BCI) is a promising technology that has received increasing attention in recent years. BCIs create a direct link between your brain and a computer, with applications across many industries and areas of life. They are redefining medical treatment and communication for individuals with various conditions or injuries, and they also have applications in entertainment, particularly video games and VR. From controlling a prosthetic limb with your mind to playing a video game with your thoughts, the potential of BCIs is endless.

Media Used:

Tom Oxley | TED


Researchers find way to shrink a VR headset down to normal glasses size

Ignore the ribbons; this is a very promising breakthrough for VR.


Researchers from Stanford University and Nvidia have teamed up to develop VR glasses that look a lot more like regular spectacles. Granted, they look rather silly because of the ribbons extending from either eye, but they are far flatter and more compact than today's typical goggle-like virtual reality headsets.

“A major barrier to widespread adoption of VR technology, however, is the bulky form factor of existing VR displays and the discomfort associated with that,” says the research paper published at SIGGRAPH 2022.

The Holographic Principle, Quantum Mechanics, and Simulated Reality (SR)


The Future of AI-Generated Art Is Here: An Interview With Jordan Tanner

Unlike many of the so-called “artists” strewing junk around the halls of modern art museums, Jordan Tanner is actually pushing the frontiers of his craft. His eclectic portfolio includes vaporwave-inspired VR experiences, NFTs & 3D-printed figurines for Popular Front, and animated art for this very magazine. His recent AI-generated art made using OpenAI’s DALL-E software was called “STUNNING” by Lee Unkrich, the director of Coco and Toy Story 3.

We interviewed the UK-born, Israel-based artist about the imminent AI-generated art revolution and why all is not lost when it comes to the future of art. In Tanner’s eyes, AI-generated art is similar to having the latest, flashiest Nikon camera—it doesn’t automatically make you a professional photographer. Tanner also created a series of unique, AI-generated pieces for this interview which can be enjoyed below.

Thanks for talking to Countere, Jordan. Can you tell us a little about your background as an artist?

How image features influence reaction times

It’s an everyday scenario: you’re driving down the highway when out of the corner of your eye you spot a car merging into your lane without signaling. How fast can your eyes react to that visual stimulus? Would it make a difference if the offending car were blue instead of green? And if the color green shortened that split-second interval between the initial appearance of the stimulus and the moment the eye begins moving towards it (known to scientists as saccadic latency), could drivers benefit from an augmented reality overlay that made every merging vehicle green?

Qi Sun, a joint professor in Tandon’s Department of Computer Science and Engineering and the Center for Urban Science and Progress (CUSP), is collaborating with neuroscientists to find out.

He and his Ph.D. student Budmonde Duinkharjav—along with colleagues from Princeton, the University of North Carolina, and NVIDIA Research—recently authored the paper “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” presenting a model that predicts temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Inspired by neuroscience, the model could ultimately have great implications for telemedicine, esports, and any other arena in which AR and VR are leveraged.
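To make the idea of a probabilistic latency model concrete, here is a toy sketch in which saccadic latency follows an ex-Gaussian distribution (a common shape for reaction times) whose mean depends on stimulus contrast. All parameters here are invented for illustration; this is not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_saccade_latency(contrast, n=10000):
    """Toy ex-Gaussian latency model: latency = Gaussian component
    plus an exponential tail. Lower stimulus contrast shifts the
    Gaussian mean later. All constants are made up for illustration."""
    mu = 0.15 + 0.05 / max(contrast, 1e-3)   # seconds; base + contrast penalty
    sigma, tau = 0.02, 0.03                  # Gaussian spread, tail scale
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

hi = sample_saccade_latency(contrast=1.0).mean()
lo = sample_saccade_latency(contrast=0.2).mean()
print(f"high-contrast mean latency: {hi:.3f}s, low-contrast: {lo:.3f}s")
```

A learned model like the one in the paper would fit such distribution parameters from data as a function of image features, rather than hard-coding them.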

New chip-based beam steering device lays groundwork for smaller, cheaper lidar

Researchers have developed a new chip-based beam steering technology that provides a promising route to small, cost-effective and high-performance lidar (or light detection and ranging) systems. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is used in a wide range of applications such as autonomous driving, free-space optical communications, 3D holography, biomedical sensing and virtual reality.

“Optical beam steering is a key technology for lidar systems, but conventional mechanical-based beam steering systems are bulky, expensive, sensitive to vibration and limited in speed,” said research team leader Hao Hu from the Technical University of Denmark. “Although devices known as chip-based optical phased arrays (OPAs) can quickly and precisely steer light in a non-mechanical way, so far, these devices have had poor beam quality and a field of view typically below 100 degrees.”

In Optica, Hu and co-author Yong Liu describe their new chip-based OPA that solves many of the problems that have plagued OPAs. They show that the device can eliminate a key optical artifact known as aliasing, achieving beam steering over a large field of view while maintaining high beam quality, a combination that could greatly improve lidar systems.
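The aliasing the authors eliminate is the optical analogue of grating lobes in a phased array: when the emitter pitch exceeds half a wavelength, phase-shifted copies of the steered beam appear at other angles. The following minimal NumPy sketch of an idealized uniform 1D array (invented element counts and pitches, not the device from the paper) shows the effect by comparing a half-wavelength pitch against a two-wavelength pitch.

```python
import numpy as np

def array_factor(n_elements, pitch_wavelengths, steer_deg, angles_deg):
    """Normalized far-field array factor of a uniform 1D phased array.
    Pitch is in wavelengths; grating lobes (aliasing) appear whenever
    the pitch exceeds half a wavelength."""
    k = 2 * np.pi                      # wavenumber, in units of 1/wavelength
    theta = np.radians(angles_deg)
    steer = np.radians(steer_deg)
    n = np.arange(n_elements)
    # Phase of element n at angle theta, minus the applied steering phase
    phase = k * pitch_wavelengths * n[:, None] * (np.sin(theta) - np.sin(steer))
    return np.abs(np.exp(1j * phase).sum(axis=0)) / n_elements

angles = np.linspace(-90, 90, 3601)
fine = array_factor(64, 0.5, 0.0, angles)    # λ/2 pitch: single main lobe
coarse = array_factor(64, 2.0, 0.0, angles)  # 2λ pitch: extra grating lobes
print((fine > 0.999).sum(), (coarse > 0.999).sum())
```

The half-wavelength array puts all its power into one steerable lobe, while the coarse-pitch array emits near-unit-strength grating lobes at several angles at once; suppressing those aliases over a wide field of view is what makes the reported OPA notable.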
