Emerging generalist models could overcome some limitations of first-generation machine-learning tools for clinical use.
The number of publications in artificial intelligence (AI) has been increasing exponentially, and staying on top of progress in the field is a challenge. Krenn and colleagues model the evolution of this growing literature as a semantic network and use it to benchmark several machine-learning methods that predict promising research directions in AI.
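At its core, that benchmark is a link-prediction problem: AI concepts are nodes, co-occurrence in papers creates edges, and a method is judged by how well it identifies currently unconnected concept pairs that future work will connect. Below is a minimal, illustrative sketch in Python using networkx; the toy graph and the Jaccard-coefficient scoring are assumptions for demonstration, not the authors’ actual pipeline.

```python
# Illustrative link prediction on a tiny semantic network of AI concepts.
# The graph and scoring rule are toy stand-ins, not the paper's benchmark.
import networkx as nx

# Hypothetical network: an edge means two concepts have already
# co-occurred in at least one paper.
G = nx.Graph()
G.add_edges_from([
    ("transformers", "attention"),
    ("attention", "image classification"),
    ("transformers", "language modeling"),
    ("graph neural networks", "molecules"),
    ("language modeling", "reinforcement learning"),
])

# Score unconnected pairs; a high score predicts the two concepts
# will be studied together in future papers.
candidates = list(nx.non_edges(G))
scores = nx.jaccard_coefficient(G, candidates)
for u, v, score in sorted(scores, key=lambda t: -t[2])[:3]:
    print(f"{u} -- {v}: predicted-link score {score:.2f}")
```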
Intelligent robots are reshaping our world. In New Jersey’s Robert Wood Johnson University Hospital, AI-assisted robots bring a new level of safety to doctors and patients by scanning the premises for harmful bacteria and viruses and disinfecting contaminated areas with precise doses of germicidal ultraviolet light.
In agriculture, drone-mounted robotic arms scan different types of fruits and vegetables and determine when they are perfectly ripe for picking.
The Airspace Intelligence System AI Flyways takes over the challenging and often stressful work of flight dispatchers, who must make last-minute flight-plan changes due to sudden extreme weather, depleted fuel, mechanical problems or other emergencies. It finds routings that are safer, faster and more cost-efficient.
On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model for designing training goals (called “reward functions”) to enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run many trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.
“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.”
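In outline, the loop is: ask the language model for a batch of candidate reward functions, score each candidate by training a policy under it in parallel simulation, then feed the best candidate and its score back into the next prompt. The Python sketch below is a heavily simplified illustration of that loop; `ask_llm` and `simulate` are hypothetical stand-ins for the GPT-4 call and the Isaac Gym rollouts, not Eureka’s released code.

```python
# Sketch of an Eureka-style reward search loop (illustrative only).
import random

random.seed(0)

def ask_llm(task_description: str, feedback: str) -> list[str]:
    """Stand-in for a GPT-4 call that writes reward functions as Python
    source. Here it just fabricates labeled placeholders."""
    return [f"# candidate reward {i} for: {task_description}" for i in range(16)]

def simulate(reward_source: str) -> float:
    """Stand-in for training a policy under this reward in thousands of
    parallel simulated environments and measuring task success."""
    return random.random()

def reward_search(task_description: str, iterations: int = 5) -> tuple[str, float]:
    best_src, best_score, feedback = "", float("-inf"), ""
    for _ in range(iterations):
        candidates = ask_llm(task_description, feedback)
        # Evaluate the whole batch; parallel simulation makes this cheap
        # relative to a single real-robot trial.
        scored = [(src, simulate(src)) for src in candidates]
        top_src, top_score = max(scored, key=lambda pair: pair[1])
        if top_score > best_score:
            best_src, best_score = top_src, top_score
        # Reflect the best result back into the next prompt so the model
        # can refine it on the following iteration.
        feedback = f"best so far ({best_score:.3f}):\n{best_src}"
    return best_src, best_score

print(reward_search("make a five-fingered hand spin a pen")[1])
```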
As the utility of AI systems has grown dramatically, so has their energy demand. Training new systems is extremely energy intensive, as it generally requires massive data sets and lots of processor time. Executing a trained system tends to be much less involved—smartphones can easily manage it in some cases. But, because you execute them so many times, that energy use also tends to add up.
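To see how it adds up, consider a back-of-envelope comparison. All numbers in the snippet below are hypothetical, chosen only to show the shape of the trade-off: a small per-query cost multiplied by a large query volume can overtake a one-off training cost within months.

```python
# Back-of-envelope illustration; every number here is assumed, not measured.
train_energy_kwh = 1_000_000   # assumed one-off training cost
per_query_kwh = 0.001          # assumed cost of a single inference
queries_per_day = 10_000_000   # assumed traffic for a deployed system

daily_inference_kwh = per_query_kwh * queries_per_day
days_to_match_training = train_energy_kwh / daily_inference_kwh
print(f"Inference energy per day: {daily_inference_kwh:,.0f} kWh")
print(f"Days until inference matches training energy: {days_to_match_training:.0f}")
```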
Fortunately, there are lots of ideas on how to bring the latter energy use back down. IBM and Intel have experimented with processors designed to mimic the behavior of actual neurons. IBM has also tested executing neural network calculations in phase change memory to avoid making repeated trips to RAM.
Now, IBM is back with yet another approach, one that’s a bit of “none of the above.” The company’s new NorthPole processor takes some of the ideas behind all of these approaches and merges them with a very stripped-down approach to running calculations, creating a highly power-efficient chip for neural network inference. For tasks like image classification or audio transcription, it can be up to 35 times more energy-efficient than a GPU.
Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.
Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.
When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.
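The stimuli described here are often called model metamers: inputs optimized so that the network’s internal response matches its response to a natural input, even though the result may look or sound like nothing at all to a person. The PyTorch sketch below shows one minimal way to generate such a stimulus with gradient descent; the network, the layer cut-off, and the hyperparameters are illustrative assumptions, not the study’s actual settings.

```python
# Minimal sketch of generating a "model metamer" (illustrative settings).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

def features(x):
    # Response up to an intermediate stage of the network.
    x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    return model.layer2(model.layer1(x))

reference = torch.rand(1, 3, 224, 224)   # stand-in for a natural image
target = features(reference)

metamer = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=0.05)
for step in range(200):
    opt.zero_grad()
    # Push the metamer's activations toward the reference's activations.
    loss = F.mse_loss(features(metamer), target)
    loss.backward()
    opt.step()
    metamer.data.clamp_(0, 1)   # keep pixels in a valid image range

# `metamer` now evokes approximately the same internal response as
# `reference`, whether or not it resembles it to a human observer.
```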
Oct 23 (Reuters) — Nvidia (NVDA.O) dominates the market for artificial intelligence computing chips. Now it is coming after Intel’s longtime stronghold of personal computers.
Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft’s (MSFT.O) Windows operating system and use technology from Arm Holdings (O9Ty.F), two people familiar with the matter told Reuters.
The AI chip giant’s new pursuit is part of Microsoft’s effort to help chip companies build Arm-based processors for Windows PCs. Microsoft’s plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its in-house-designed Arm-based chips for its Mac computers, according to preliminary third-quarter data from research firm IDC.
‘Open source communication is a fundamental human right,’ Automattic CEO Matt Mullenweg says, and he’s buying a platform to help pull it off.
Automattic, the company that runs WordPress.com, Tumblr, Pocket Casts, and a number of other popular web properties, just made a different kind of acquisition: it’s buying Texts, a universal messaging app, for $50 million.
Texts is an app for all your messaging apps. You can use it to log in to WhatsApp, Instagram, LinkedIn, Signal, iMessage, and more and see and respond to all your messages in one place. (Beeper is another app doing similar things.) The app also offers some additional features like AI-generated responses and summaries, but its primary…
A less chaotic chat app is coming to a device near you.