
Eureka has also taught quadrupeds, dexterous hands, cobot arms and other robots to open drawers, use scissors, catch balls and perform nearly 30 other tasks. According to NVIDIA Research, the AI agent’s trial-and-error-based reward programs are 80 percent more effective than those written by human experts, and the robots’ performance improved by more than 50 percent as a result. Eureka also self-evaluates based on training results, revising its reward functions as it sees fit.
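The loop described above can be sketched in miniature. The toy below is only an illustration of the search pattern, not the real system: in Eureka, an LLM writes candidate reward code and policies are trained in NVIDIA Isaac Gym, whereas here the hypothetical `propose_reward` just perturbs a list of weights and `train_and_evaluate` is a stand-in scoring stub.

```python
import random

random.seed(0)  # fixed seed so the toy search is reproducible

def propose_reward(best_weights):
    # Stand-in for the LLM: mutate the current best reward weights.
    return [w + random.uniform(-0.5, 0.5) for w in best_weights]

def train_and_evaluate(weights):
    # Stand-in for RL training: pretend task success peaks when the
    # weights are near a made-up optimum [1.0, 2.0].
    target = [1.0, 2.0]
    error = sum((w - t) ** 2 for w, t in zip(weights, target))
    return 1.0 / (1.0 + error)  # higher is better, max 1.0

best = [0.0, 0.0]
best_score = train_and_evaluate(best)
for generation in range(50):
    candidate = propose_reward(best)
    score = train_and_evaluate(candidate)
    if score > best_score:  # self-evaluation: keep only improvements
        best, best_score = candidate, score

print(f"best score after search: {best_score:.3f}")
```

The essential shape matches the article's description: propose a reward, train against it, evaluate the outcome, and feed that evaluation back into the next proposal.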

NVIDIA Research has published a library of its Eureka algorithms, encouraging others to try them out on NVIDIA Isaac Gym, the organization’s “physics simulation reference application for reinforcement learning research.”

The idea of robots teaching robots is seeing increased interest and success. A May 2023 paper published in the Transactions on Machine Learning Research journal presented a new system called SKILL (Shared Knowledge Lifelong Learning), which allowed AI systems to learn 102 different skills, including diagnosing diseases from chest X-rays and identifying species of flowers. The AIs shared their knowledge — acting as teachers in a way — with each other over a communication network and were able to master each of the 102 skills. Researchers at schools like MIT and the University of Bristol have also had success, specifically in using AI to teach robots how to manipulate objects.

The lightless deep sea is swirling with life.

New footage captured in the “twilight zone” — areas of the ocean starting at around 100 meters (330 feet) deep, where only the faintest sunlight penetrates — reveals a world teeming with often otherworldly organisms: long chains of creatures, tentacled life, defensive ink blasted into the water, and beyond.

These creatures were filmed around the Geologist Seamounts just south of the Hawaiian islands. The expedition, funded by the National Oceanic and Atmospheric Administration’s ocean exploration division, allowed scientists to drop a sleek deep-sea exploration vehicle, called Mesobot, into these dark waters. Remotely operated vehicles, or ROVs, can be intrusive to deep ocean life, but Mesobot, with its slim design and slow-moving propellers, is built to avoid frightening wildlife away.

A new proposal spells out the very specific ways companies should evaluate AI security and enforce censorship in AI models.

Ever since the Chinese government passed a law on generative AI back in July, I’ve been wondering how exactly China’s censorship machine would adapt for the AI era.

Last week we got some clarity about what all this may look like in practice.

Hollywood actors are on strike partly over concerns about the use of AI, yet Meta and a company called Realeyes hired some of them, for as little as $300, to help make avatars appear more human.

One evening in early September, T, a 28-year-old actor who asked to be identified by his first initial, took his seat in a rented Hollywood studio space in front of three cameras, a director, and a producer for a somewhat unusual gig.

The two-hour shoot produced footage that was not meant to be viewed by the public—at least, not a human public.

Dubbed NorthPole, the chip excels in performance, energy efficiency, and area efficiency.

Artificial intelligence is an energy vampire that runs on substantial computational power. Running AI applications like behavior monitoring, facial recognition, or live object tracking in real time requires a computing system that can make fast, accurate inferences. For that to happen, a large AI model must work closely with the source of its data.

This problem of moving large amounts of data between compute and memory dates back to one of the earliest electronic computers, the Electronic Discrete Variable Automatic Computer (EDVAC). The system’s compute and memory were built from differing technologies and so, by necessity, had to operate separately.
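A back-of-the-envelope cost model shows why this separation hurts. Every number below is invented for illustration; the point is only that when each operand must cross a bus between separate memory and compute units, data movement, not arithmetic, dominates the energy bill — which is why near-memory designs like NorthPole help.

```python
# Toy cost model of the compute-memory bottleneck (illustrative numbers,
# not measurements of any real chip).
COMPUTE_COST = 1      # energy per multiply-accumulate (arbitrary units)
TRANSFER_COST = 100   # energy per operand moved over the memory bus

def energy(num_ops, operands_moved, transfer_cost):
    # Total energy = arithmetic cost + cost of shuttling operands.
    return num_ops * COMPUTE_COST + operands_moved * transfer_cost

ops = 1_000_000          # e.g. one layer of a neural network
operands = 2_000_000     # two operands fetched per operation

von_neumann = energy(ops, operands, TRANSFER_COST)
# Near-memory design: weights stay on-chip, so assume each transfer
# costs ~1/100th as much (a made-up ratio for the sketch).
near_memory = energy(ops, operands, TRANSFER_COST / 100)

print(f"separate memory: {von_neumann:.0f}  near memory: {near_memory:.0f}")
```

In this toy accounting, the separate-memory design spends over 99 percent of its energy just moving data, so shrinking the transfer cost shrinks the total almost proportionally.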

This also means faster robotics and self-driving cars.

Foxconn, the largest producer of iPhones, is joining hands with NVIDIA, the biggest chipmaker in the world, to develop artificial intelligence factories that will power a range of applications such as self-driving cars, generative AI tools, and robotic systems, according to a press release.

Dubbed AI factories, they are data centers that will power a wide range of applications, including the digitalization of manufacturing and inspection workflows, the development of AI-powered electric vehicle and robotics platforms, and language-based generative AI services.

The team estimates that their hardware can outperform the best electronic processors by a factor of 100 in terms of energy efficiency and compute density.

A team of scientists from Oxford University and their partners from Germany and the UK have developed a new kind of AI hardware that uses light to process three-dimensional (3D) data. Based on integrated photonic-electronic chips, the hardware can perform complex calculations in parallel using different wavelengths and radio frequencies of light. The team claims their hardware can boost the data processing speed and efficiency for AI tasks by several orders of magnitude.
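The wavelength trick can be pictured with a toy model. The sketch below is not the team's hardware or software interface; it only illustrates the idea that several "wavelength channels" can share one set of weights, each channel carrying an independent input, so a single pass yields all the outputs in parallel.

```python
# Toy sketch of wavelength-division parallelism (illustration only).
def matvec(matrix, vector):
    # Ordinary matrix-vector product: one multiply-accumulate per entry.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[1, 0], [0, 2]]  # one weight matrix shared by all channels

# Three hypothetical wavelength channels, each with its own input vector.
channels = {
    "1550nm": [1.0, 1.0],
    "1551nm": [2.0, 0.5],
    "1552nm": [0.0, 3.0],
}

# In photonic hardware these products would happen simultaneously on the
# same device; here we loop only to show the channels are independent.
outputs = {wl: matvec(weights, v) for wl, v in channels.items()}

for wl, out in outputs.items():
    print(wl, out)
```

Because the channels never interact, adding another wavelength adds another parallel computation without adding another pass through the hardware — the source of the claimed gains in compute density.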


AI computing and processing power

The research, published today in the journal Nature Photonics, addresses the challenge of meeting modern AI applications’ increasing demand for computing power. Conventional computer chips, which rely on electronics, struggle to keep up with the pace of AI innovation, which requires processing power to double every 3.5 months. The team says that using light instead of electronics offers a new way of computing that can overcome this bottleneck.