
NVIDIA Brings The Power Of Generative AI To The Edge

NVIDIA wants to turn the Jetson family of devices into powerful edge computing devices capable of running state-of-the-art foundation models. It’s also investing in frameworks that combine the power of robotics with generative AI.

Here are three significant investments from NVIDIA that transform the Jetson family of devices:

The Jetson Generative AI Lab is a collection of tutorials and walkthroughs for running popular generative AI models such as Llama 2, Stable Diffusion, and the Segment Anything Model on Jetson devices. Developers can clone the GitHub repository to download the scripts needed to run the models and the associated applications on devices such as the Jetson AGX Orin and Jetson Orin Nano.
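
The lab's own tutorials package these steps into containers, but as a rough illustration of what running a model like Llama 2 on a Jetson involves, here is a minimal Python sketch using the Hugging Face transformers API. The model ID, precision settings, and setup are assumptions for illustration, not the lab's actual scripts:

    # Minimal sketch: loading a Llama 2 chat model with Hugging Face
    # transformers on a CUDA-capable device such as a Jetson AGX Orin.
    # Assumes transformers, accelerate, and a CUDA build of PyTorch are
    # installed, and that you have access to the gated meta-llama weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed model ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit edge-device memory
        device_map="auto",          # let accelerate place layers on the GPU
    )

    prompt = "What can I run on a Jetson Orin?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))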

Giant £2.2m Transformers-style robot could replace humans on building sites

A NEW generation of giant shape-shifting robots designed to work on building sites and in disaster zones has been unveiled in Japan.

The Transformer-style bots, dubbed the Archax, can grow to nearly three times the height of a man on their four-wheeled legs.

Designed by Tsubame Industries in Tokyo, the machines can also change into different shapes to suit any situation.

California DMV Suspends Cruise’s Driverless Robotaxis, Effective Immediately

The abrupt reversal comes in the wake of several high-profile incidents involving Cruise’s autonomous vehicles, including one earlier this month in which a hit-and-run driver launched a pedestrian into the path of a Cruise taxi, which then dragged them 20 feet.

The hit-and-run driver remains at large.

Following that crash and a handful of others, Cruise agreed to reduce its autonomous fleet in San Francisco by 50%, capping operations at no more than 50 taxis during the day and no more than 150 at night.

Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network

The number of publications in artificial intelligence (AI) has been increasing exponentially and staying on top of progress in the field is a challenging task. Krenn and colleagues model the evolution of the growing AI literature as a semantic network and use it to benchmark several machine learning methods that can predict promising research directions in AI.
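
The paper benchmarks learned predictors, but the underlying task is easy to state in code: given today’s network of AI concepts, score currently unconnected pairs by how likely they are to be linked in the future. The toy graph and the simple Jaccard-coefficient heuristic below are illustrative assumptions, standing in for the trained models Krenn and colleagues evaluate:

    # Toy link prediction on a semantic network: nodes are AI concepts,
    # edges mean two concepts have appeared together in a paper.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("transformers", "attention"),
        ("transformers", "language models"),
        ("attention", "language models"),
        ("reinforcement learning", "robotics"),
        ("language models", "robotics"),
    ])

    # Score every currently unconnected pair with the Jaccard coefficient;
    # higher scores suggest the pair is more likely to be linked later.
    scored = sorted(nx.jaccard_coefficient(G, nx.non_edges(G)),
                    key=lambda triple: triple[2], reverse=True)

    for u, v, score in scored[:3]:
        print(f"{u} -- {v}: {score:.2f}")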

Robots learn faster with AI boost from Eureka

Intelligent robots are reshaping our world. In New Jersey’s Robert Wood Johnson University Hospital, AI-assisted robots are bringing a new level of safety to doctors and patients by scanning every inch of the premises for harmful bacteria and viruses and disinfecting contaminated spots with precise doses of germicidal ultraviolet light.

In agriculture, robotic arms guided by drones scan many types of fruits and vegetables and determine when they are perfectly ripe for picking.

The Airspace Intelligence System AI Flyways takes over the challenging and often stressful work of flight dispatchers, who must make last-minute flight-path changes due to sudden extreme weather, depleted fuel supplies, mechanical problems, or other emergencies. It produces optimized solutions that are safer, faster, and more cost-efficient.

Eureka: With GPT-4 overseeing training, robots can learn much faster

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model to design training goals (called “reward functions”) that enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly by using massively parallel simulations to run many trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
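
Nvidia’s implementation runs at data-center scale, but the outer loop the paper describes can be sketched compactly: the LLM proposes candidate reward functions as code, each candidate is scored by training a policy against it in simulation, and the best candidate seeds the next round of proposals. Both helper functions below (propose_rewards, evaluate_in_sim) are hypothetical placeholders, not Nvidia’s actual API:

    # Hedged sketch of a Eureka-style propose-evaluate-refine loop.
    from typing import List, Optional

    def propose_rewards(task: str, best: Optional[str], n: int) -> List[str]:
        """Ask an LLM (e.g., GPT-4) for n reward-function source strings,
        conditioned on the task description and the best candidate so far."""
        raise NotImplementedError  # placeholder for an LLM API call

    def evaluate_in_sim(reward_src: str) -> float:
        """Train a policy with this reward in parallel simulation and
        return its score on the task's ground-truth metric."""
        raise NotImplementedError  # placeholder for an Isaac Gym-style rollout

    def eureka_loop(task: str, rounds: int = 5, batch: int = 16) -> Optional[str]:
        best_src, best_score = None, float("-inf")
        for _ in range(rounds):
            for src in propose_rewards(task, best_src, batch):
                score = evaluate_in_sim(src)  # candidates can run in parallel
                if score > best_score:
                    best_src, best_score = src, score
        return best_src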

IBM has made a new, highly efficient AI processor

As the utility of AI systems has grown dramatically, so has their energy demand. Training new systems is extremely energy intensive, since it generally requires massive data sets and lots of processor time. Executing a trained system tends to be much less demanding; in some cases a smartphone can manage it easily. But because a trained model is executed so many times, that energy use also tends to add up.

Fortunately, there are lots of ideas on how to bring the latter energy use back down. IBM and Intel have experimented with processors designed to mimic the behavior of actual neurons. IBM has also tested executing neural network calculations in phase change memory to avoid making repeated trips to RAM.

Now, IBM is back with yet another approach, one that’s a bit of “none of the above.” The company’s new NorthPole processor takes some of the ideas behind all of these approaches and merges them with a very stripped-down approach to running calculations, creating a chip that executes neural-network inference with high power efficiency. For tasks like image classification or audio transcription, the chip can be up to 35 times more efficient than a GPU.
