
On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model to design training goals (called “reward functions”) that enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly by running many trials simultaneously in massively parallel simulations. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
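The loop described above, generating many candidate reward functions and scoring each one with simulated rollouts, can be sketched roughly as follows. All names here are illustrative and this is not the actual Eureka code: in the real system GPT-4 writes the candidate reward functions and Isaac Gym runs the rollouts on GPU, whereas this toy uses two hand-written candidates and a trivial one-dimensional task.

```python
import random

# Toy stand-ins for LLM-proposed reward functions. In Eureka, GPT-4
# would emit these as code; here two hand-written candidates suffice.
def candidate_a(state):
    # Penalize distance to the goal only.
    return -abs(state["pos"] - state["goal"])

def candidate_b(state):
    # Penalize distance, plus a small velocity penalty near the goal.
    d = abs(state["pos"] - state["goal"])
    return -d - 0.1 * abs(state["vel"]) * (d < 0.5)

def rollout_score(reward_fn, trials=100, seed=0):
    """Score a candidate reward by running cheap simulated trials and
    measuring task success (a stand-in for parallel GPU rollouts)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        state = {"pos": rng.uniform(-1, 1), "vel": rng.uniform(-1, 1), "goal": 0.0}
        for _ in range(20):  # greedy policy acting on the candidate reward
            dx = max((-0.1, 0.0, 0.1),
                     key=lambda d: reward_fn({**state, "pos": state["pos"] + d}))
            state["pos"] += dx
            state["vel"] *= 0.9
        successes += abs(state["pos"] - state["goal"]) < 0.05
    return successes / trials

candidates = [candidate_a, candidate_b]
best = max(candidates, key=rollout_score)
```

The key point is that the scoring of each candidate is independent, so all rollouts can run in parallel; the search then keeps the best-scoring reward (and, in Eureka, feeds results back to the LLM for further refinement).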

As the utility of AI systems has grown dramatically, so has their energy demand. Training new systems is extremely energy intensive, as it generally requires massive data sets and lots of processor time. Executing a trained system tends to be much less involved—smartphones can easily manage it in some cases. But, because you execute them so many times, that energy use also tends to add up.

Fortunately, there are lots of ideas on how to bring the latter energy use back down. IBM and Intel have experimented with processors designed to mimic the behavior of actual neurons. IBM has also tested executing neural network calculations in phase change memory to avoid making repeated trips to RAM.

Now, IBM is back with yet another approach, one that’s a bit of “none of the above.” The company’s new NorthPole processor merges some of the ideas behind these earlier efforts with a very stripped-down execution model, creating a highly power-efficient chip for neural network inference. For tasks like image classification or audio transcription, the chip can be up to 35 times more efficient than a GPU.

Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.
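The procedure behind these generated inputs, often called activation matching, can be illustrated with a toy example. A random linear feature map stands in for a deep network here, and none of this is the MIT study's actual setup: starting from noise, gradient descent adjusts an input until its features match those of a reference input, yet because many inputs map to the same features, the result can remain far from the reference.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64)) / 8.0  # toy feature map: 64-dim input -> 16 features

def features(x):
    return W @ x                     # stand-in for a network's internal activations

reference = rng.normal(size=64)      # the "natural" input (e.g. a picture of a bear)
target = features(reference)

x = rng.normal(size=64)              # start from random noise
lr = 0.1
for _ in range(2000):
    # gradient of 0.5 * ||features(x) - target||^2 with respect to x
    x -= lr * (W.T @ (features(x) - target))

feature_gap = float(np.linalg.norm(features(x) - target))  # small: activations match
input_gap = float(np.linalg.norm(x - reference))           # large: inputs still differ
```

Because the 16 feature constraints leave most of the 64 input dimensions unconstrained, the optimized input matches the reference's activations while remaining a very different input, which is the "idiosyncratic invariance" the study describes.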

Oct 23 (Reuters) — Nvidia (NVDA.O) dominates the market for artificial intelligence computing chips. Now it is coming after Intel’s longtime stronghold of personal computers.

Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft’s (MSFT.O) Windows operating system and use technology from Arm Holdings (O9Ty.F), two people familiar with the matter told Reuters.

The AI chip giant’s new pursuit is part of Microsoft’s effort to help chip companies build Arm-based processors for Windows PCs. Microsoft’s plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers, according to preliminary third-quarter data from research firm IDC.

‘Open source communication is a fundamental human right,’ Automattic CEO Matt Mullenweg says, and he’s buying a platform to help pull it off.

Automattic, the company that runs WordPress.com, Tumblr, Pocket Casts, and a number of other popular web properties, just made a different kind of acquisition: it’s buying Texts, a universal messaging app, for $50 million.

Texts is an app for all your messaging apps. You can use it to log in to WhatsApp, Instagram, LinkedIn, Signal, iMessage, and more and see and respond to all your messages in one place. (Beeper is another app doing similar things.) The app also offers some additional features like AI-generated responses and summaries, but its primary…


A less chaotic chat app is coming to a device near you.

The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs…
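To make the idea of “invisible changes to the pixels” concrete, here is a minimal, generic illustration. This is not Nightshade's actual algorithm, which computes targeted, model-aware poisoning perturbations; this sketch only shows the simpler prerequisite of modifying pixel values while keeping the change below the threshold of visibility.

```python
import numpy as np

def perturb(image, epsilon=2):
    """Add a pseudo-random perturbation bounded by `epsilon` intensity
    levels per channel, keeping values in the valid 0-255 range.
    (Illustrative only; real poisoning perturbations are optimized,
    not random.)"""
    rng = np.random.default_rng(42)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    return np.clip(image.astype(np.int16) + noise, 0, 255).astype(np.uint8)

image = np.full((4, 4, 3), 128, dtype=np.uint8)  # stand-in for an artwork
poisoned = perturb(image)
max_change = int(np.abs(poisoned.astype(np.int16) - image.astype(np.int16)).max())
```

A change of one or two intensity levels per channel is imperceptible to a human viewer, but a model trained on many such crafted images can, in the targeted case, learn systematically wrong associations.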

D-ID, the Tel Aviv-based startup best known as the tech behind those viral videos of animated family photos, is bringing its AI video technology to a new mobile app, launching today. Originally available as a web platform, D-ID’s Creative Reality Studio allows users to upload a still image and script and then turn that into an AI-generated video. The technology can be used to create digital representations of themselves, historical figures, fictional characters, presenters or brand ambassadors.

Early use cases the company had been targeting involved corporate training and education, internal and external communication from companies, and product marketing and sales, TechCrunch previously reported.

Now that the app is available on mobile, users can download D-ID from the App Store or Google Play and then create an account or log in, if already registered. On the selection screen, you can either pick a premade “digital person” that D-ID provides or upload an image from your phone’s photo library. You’ll then enter the text you want the digital person to say, choose from 119 languages, and pick between male and female voice options. You can also choose the tone of the speech, such as cheerful, excited, friendly, hopeful, newscast, sad, shouting, terrified, unfriendly, whispering, and others.

Training generative AI models is challenging. It requires an infrastructure that can move and process data with performance characteristics unheard of outside of traditional supercomputing environments. Nobody better understands the demands that AI puts on infrastructure than the service providers that specialize in the space.

Lambda and VAST Data have engaged in a new strategic partnership that brings the VAST Data Platform to Lambda. This follows similar announcements from CoreWeave and G42 Cloud, both of which unveiled comparable relationships with VAST over the past few months. Together, these deals position VAST Data as a leading choice among dedicated AI service providers.


The company, which describes itself as the data infrastructure company for AI, bagged a $249 million contract in 2022 to provide a range of AI tech to the US Department of Defense.

Traditionally, the United States has been viewed as the top dog in global military applications, but over the last three decades it has faced competition from a strong opponent in the Indo-Pacific region. China has been aggressively carving out its place by modernizing its weapons and forces, denting US dominance in developing advanced technologies.



Can the US come out on top in the AI arms race?

For the first time ever, researchers at the Surgical Robotics Laboratory of the University of Twente successfully made two microrobots work together to pick up, move and assemble passive objects in 3D environments. This achievement opens new horizons for promising biomedical applications.

Imagine you need surgery somewhere inside your body. However, the part that needs surgery is very difficult for a surgeon to reach. In the future, a couple of robots smaller than a grain of salt might go into your body and perform the surgery. These microrobots could work together to perform all kinds of complex tasks. “It’s almost like magic,” says Franco Piñan Basualdo, corresponding author of the publication.

Researchers from the University of Twente used two of these 1-millimeter-sized magnetic microrobots to perform several operations. Like clockwork, the microrobots were able to pick up, move, and assemble cubes. Unique to this achievement is the 3D environment in which the robots performed their tasks.