

Meta Reality Labs Research: Codec Avatars 2.0 Approaching Complete Realism with Custom Chip
Researchers at Meta Reality Labs report that their work on Codec Avatars 2.0 has reached the point where the avatars are approaching complete realism. The team built a prototype virtual reality headset with a custom accelerator chip designed specifically to handle the AI processing required to render Meta’s photorealistic Codec Avatars on a standalone headset.
The prototype avatars are driven by advanced machine learning techniques. Meta first showcased its work on Codec Avatars back in March 2019. The avatars are powered by multiple neural networks and are generated with a special capture rig containing 171 cameras. Once generated, an avatar is driven in real time by a prototype virtual reality headset with five cameras: two internal cameras, one viewing each eye, and three external cameras viewing the lower face. It is thought that such advanced, photorealistic avatars may one day replace video conferencing.
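To make the pipeline concrete, here is a minimal, hypothetical sketch of the kind of encode-then-decode loop described above: the five headset camera frames are compressed into a compact expression code, which is then expanded into avatar geometry for rendering. Every layer size, weight, and function name here is invented for illustration and is not taken from Meta’s actual networks.

```python
# Hypothetical sketch only: a toy encode/decode loop illustrating the kind of
# pipeline described in the article (headset cameras -> latent code -> avatar).
# None of the shapes, weights, or names come from Meta's system.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 256          # assumed size of the per-frame expression code
EYE_PIXELS = 64 * 64      # assumed resolution of each internal eye camera
FACE_PIXELS = 96 * 96     # assumed resolution of each external face camera

# Random weights stand in for networks that would be trained offline
# on data captured by the 171-camera rig.
W_encode = rng.standard_normal((2 * EYE_PIXELS + 3 * FACE_PIXELS, LATENT_DIM)) * 0.01
W_decode = rng.standard_normal((LATENT_DIM, 10_000)) * 0.01  # 10k avatar vertices (made up)

def encode(eye_frames, face_frames):
    """Compress the five headset camera frames into a compact expression code."""
    features = np.concatenate([f.ravel() for f in eye_frames + face_frames])
    return np.tanh(features @ W_encode)

def decode(latent):
    """Expand the expression code into avatar geometry for rendering."""
    return latent @ W_decode

# One simulated frame: two eye cameras, three lower-face cameras.
eyes = [rng.random((64, 64)) for _ in range(2)]
face = [rng.random((96, 96)) for _ in range(3)]
vertices = decode(encode(eyes, face))
print(vertices.shape)  # (10000,)
```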
The next frontier in robotics
After nine years at NASA’s Jet Propulsion Laboratory, Olivier Toupet is developing cutting-edge AI algorithms that enable Zoox’s self-driving vehicle to understand and make decisions based on its surroundings, and to optimize trajectories so it reaches its destination safely and comfortably.
Learn why he says the work he’s doing at Zoox is, in some ways, more challenging than his previous work.
Zoox principal software engineer Olivier Toupet on the company’s autonomous robotaxi technology.
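As a rough illustration of the trajectory-optimization trade-off mentioned above, the sketch below scores a few candidate paths on comfort (penalizing harsh acceleration), safety (penalizing proximity to obstacles), and progress, then picks the cheapest one. The weights, buffer distance, and data are invented for the example; they are not Zoox’s.

```python
# Hypothetical illustration only: scoring candidate trajectories on comfort,
# safety, and progress. The weights and numbers are invented, not Zoox's.
import numpy as np

def trajectory_cost(positions, obstacles, dt=0.1,
                    w_comfort=1.0, w_safety=10.0, w_progress=0.5):
    """positions: (N, 2) waypoints in meters; obstacles: (M, 2) points to avoid."""
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    comfort = np.mean(np.linalg.norm(acc, axis=1) ** 2)              # harsh accel hurts
    dists = np.linalg.norm(positions[:, None, :] - obstacles[None, :, :], axis=2)
    safety = np.sum(np.maximum(0.0, 2.0 - dists.min(axis=1)) ** 2)   # keep a 2 m buffer
    progress = -np.linalg.norm(positions[-1] - positions[0])         # reward distance covered
    return w_comfort * comfort + w_safety * safety + w_progress * progress

# Three straight candidate paths toward x = 10 m, offset laterally; one obstacle ahead.
candidates = [np.column_stack([np.linspace(0.0, 10.0, 20), np.full(20, lateral)])
              for lateral in (-1.0, 0.0, 1.0)]
obstacles = np.array([[5.0, 0.0]])
best = min(candidates, key=lambda p: trajectory_cost(p, obstacles))
print("chosen lateral offset:", best[0, 1])
```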

A chip that can classify nearly 2 billion images per second
Artificial intelligence (AI) plays an important role in many systems, from predictive text to medical diagnoses. Inspired by the human brain, many AI systems are implemented based on artificial neural networks, where electrical equivalents of biological neurons are interconnected, trained with a set of known data, such as images, and then used to recognize or classify new data points.
In traditional neural networks used for image recognition, an image of the target object is first formed on an image sensor, such as the digital camera in a smartphone. The image sensor converts light into electrical signals, and ultimately into binary data, which can then be processed, analyzed, stored, and classified by computer chips. Speeding up these abilities is key to improving any number of applications, such as face recognition, automatically detecting text in photos, or helping self-driving cars recognize obstacles.
While current consumer-grade image-classification technology on a digital chip can perform billions of computations per second, fast enough for most applications, more sophisticated tasks, such as identifying moving objects, recognizing 3D objects, or classifying microscopic cells in the body, are pushing the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule.
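For contrast with the photonic approach the article goes on to describe, here is a minimal sketch of the conventional digital pipeline outlined above: sensor pixels become numbers, and a small neural network scores them one layer at a time on a clocked processor. The network sizes and weights are illustrative only, not taken from any real chip.

```python
# A toy version of the conventional digital pipeline: sensor image -> binary
# data -> layer-by-layer neural-network classification on a clocked processor.
# Sizes and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def classify(image, weights_hidden, weights_out):
    """Flatten the image, apply one ReLU hidden layer, then score 10 classes."""
    x = image.reshape(-1) / 255.0                 # digitized pixel values
    hidden = np.maximum(0.0, x @ weights_hidden)  # hidden layer, computed sequentially
    scores = hidden @ weights_out                 # output layer
    return int(np.argmax(scores))

# Toy 28x28 grayscale frame, standing in for what a phone sensor hands off.
image = rng.integers(0, 256, size=(28, 28))
W1 = rng.standard_normal((28 * 28, 64)) * 0.05
W2 = rng.standard_normal((64, 10)) * 0.05
print("predicted class:", classify(image, W1, W2))
```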

Technique significantly boosts the speeds of programs that run in the Unix shell
Researchers have pioneered a technique that can dramatically accelerate certain types of computer programs automatically, while ensuring program results remain accurate.
Their system boosts the speeds of programs that run in the Unix shell, a ubiquitous programming environment created 50 years ago that is still widely used today. Their method parallelizes these programs, which means that it splits program components into pieces that can be run simultaneously on multiple computer processors.
This enables programs to execute tasks like web indexing, natural language processing, or analyzing data in a fraction of their original runtime.
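The researchers’ system works on the shell scripts themselves; as a loose analogy only, the Python sketch below shows the underlying idea: split one job into chunks, run the chunks on several processor cores at once, and merge the partial results into the same answer the sequential program would have produced. The function names and data here are made up for the example.

```python
# Loose analogy only: the article's technique parallelizes Unix shell programs;
# this sketch just shows the split-run-merge idea on multiple processor cores.
from collections import Counter
from multiprocessing import Pool

def count_words(lines):
    """Word-frequency count over one chunk of input lines."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def parallel_word_count(lines, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with Pool(processes=workers) as pool:
        partials = pool.map(count_words, chunks)
    # Merge the partial counts; the result matches the sequential version.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    text = ["the quick brown fox", "jumps over the lazy dog"] * 1000
    print(parallel_word_count(text).most_common(3))
```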


Scientists found a new way to show us how the early universe formed
Understanding the early universe has been a goal for scientists for decades, and now, with NASA’s James Webb Space Telescope and other technology, we’re finally making some decent strides. A new simulation of early galaxy formation could be another key stepping stone.
Researchers created the simulation using machine learning; producing the one-of-a-kind simulation took over 100,000 hours of computation. The researchers named the algorithm behind the project Hydro-BAM and published a paper on the simulation’s findings earlier this year.
Creating a simulation of early galaxy formation has allowed researchers to chart the earliest moments of our universe. These important moments began just after the Big Bang set everything into motion. Understanding these key moments of the formation of the early universe could help us better understand how galaxies form in the universe today.
Restoring a lost sense of touch | Podcast | Overheard at National Geographic
This is a fantastic podcast exploration of a rapidly maturing, wildly varied set of fields: science, the military, medicine, and the industrialization, exploration, and colonization of our solar system, along with the hope for, path to, and purpose of a successful posthuman, post-scarcity future. That destination for humanity will require the seamless integration of our biology with artificial intelligence and with advanced non-biological, and artificially biological, mechanical systems, and in one way or another it passes through a very few necessary technological achievements. In this case, the achievement is seamless two-way communication with the biological, specifically the human sense of touch.
When Brandon Prestwood’s left hand was caught in an industrial conveyor belt six years ago, he lost his arm. Scientists are slowly unraveling the science of touch by trying to tap into the human nervous system and recreate the sensations of pressure for people like Prestwood. After an experimental surgery, Brandon’s prosthetic arm was upgraded with a rudimentary sense of touch—a major development in technology that could bring us all a little closer together.