
Self-trained vision transformers mimic human gaze with surprising precision

Can machines ever see the world as we see it? Researchers have uncovered compelling evidence that vision transformers (ViTs), a type of deep-learning model that specializes in image analysis, can spontaneously develop human-like visual attention patterns when trained without labeled instructions.

Visual attention is the mechanism by which organisms, and artificial intelligence (AI) systems, filter out “visual noise” to focus on the most relevant parts of an image or scene. While this ability comes naturally to humans, acquiring it spontaneously has proven difficult for AI.

However, in a recent publication in Neural Networks, researchers have revealed that, given the right training experience, AI can spontaneously acquire human-like visual attention without being explicitly taught to do so.
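For intuition about what such attention looks like, the sketch below computes scaled dot-product attention weights over a toy grid of image patches, the basic operation inside a vision transformer. The dimensions, random embeddings, and the query-averaged saliency map are illustrative assumptions, not the authors’ model or evaluation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: one self-attention step over image patches,
# showing the kind of attention map such studies examine. Not the
# authors' model or training setup.
torch.manual_seed(0)
num_patches, dim = 16, 32                 # a 4x4 patch grid, toy embedding size
patches = torch.randn(num_patches, dim)   # stand-in for patch embeddings

Wq, Wk = torch.randn(dim, dim), torch.randn(dim, dim)
q, k = patches @ Wq, patches @ Wk

# Scaled dot-product attention weights: one row per query patch, telling
# us how strongly each patch attends to every other patch.
attn = F.softmax(q @ k.T / dim**0.5, dim=-1)

# Averaging over queries gives a crude per-patch saliency map; attention
# maps of this kind are what get compared against human gaze data.
saliency = attn.mean(dim=0)
print(saliency.reshape(4, 4))
```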

Exploring the seas with self-powered jellyfish cyborgs

Unlike fish, jellyfish lack bones and possess only a rudimentary nerve net, yet they can travel considerable distances with minimal energy expenditure. A jellyfish’s seemingly effortless glide through the water is thanks to a ring of muscle within its soft belly, which creates a simple jet that propels it forward. Scientists refer to this intrinsic capability as “embodied intelligence,” which suggests that the organism’s physical structure plays a role in problem-solving.

When harnessed, this locomotion provides an efficient means to monitor, track, and observe climate trends. “Jellyfish cyborgs” require minimal power and operate without engines, limiting the environmental impact associated with current methods of studying the vast expanse of the ocean.

In a new study, a research team, led by Dai Owaki, an associate professor in the Department of Robotics at Tohoku University’s Graduate School of Engineering, successfully modulated the swimming behavior of jellyfish using gentle electric pulses. Moreover, they utilized a lightweight artificial intelligence (AI) model to predict the swimming speed of each jellyfish.
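As a rough illustration of what a lightweight speed predictor could look like, the sketch below fits a simple linear model mapping electric-pulse frequency to swimming speed. The data points and the linear form are invented for illustration; the study’s actual model and measurements are not reproduced here.

```python
import numpy as np

# Hypothetical sketch: fit a simple linear model mapping pulse frequency
# (Hz) to swimming speed (cm/s). Data are made up for illustration.
pulse_hz = np.array([0.25, 0.5, 0.75, 1.0, 1.25])
speed_cm_s = np.array([0.8, 1.4, 1.9, 2.6, 3.0])   # invented observations

slope, intercept = np.polyfit(pulse_hz, speed_cm_s, 1)

def predict(hz):
    """Predicted swimming speed (cm/s) at a given pulse frequency."""
    return slope * hz + intercept

print(f"predicted speed at 0.9 Hz: {predict(0.9):.2f} cm/s")
```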

China’s UBTech takes direct shot at Tesla with $20K humanoid robot

UBTech’s consumer shift comes as it faces financial strain. The company lost over 1.1 billion yuan ($153 million) last year. Its stock has fallen 45% over the past 12 months in Hong Kong.

Still, Tam welcomes the pressure. “White-hot competition creates a lot of pressure on a single company, but for the whole industry, it helps preserve good companies and eliminate bad ones,” he told Bloomberg.

As humanoid robots inch closer to everyday life, UBTech’s shift to the home market marks a high-stakes bet.

Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI

Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work.

In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced. But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations.

Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.

The contribution of our new work, published in Nature, is finding that a recent development in computer science – which yields significant improvements in performance on reinforcement learning problems – may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.

Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
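To make the core idea concrete, here is a minimal temporal-difference (TD) learning sketch. The toy chain environment, learning rate, and discount factor are illustrative assumptions, not the setup used in the paper; the key quantity is the reward prediction error, the signal that classic work links to dopamine neurons.

```python
# Minimal TD(0) value learning on a toy chain: states 0..4, reward 1.0
# only on reaching the terminal state. All parameters are illustrative.
N_STATES, ALPHA, GAMMA = 5, 0.1, 0.9
V = [0.0] * (N_STATES + 1)  # V[N_STATES] is the terminal state (value 0)

for episode in range(500):
    s = 0
    while s < N_STATES:
        s_next = s + 1                       # simple forward walk
        r = 1.0 if s_next == N_STATES else 0.0
        # Reward prediction error: the quantity dopamine neurons are
        # thought to signal in the brain.
        delta = r + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA * delta                # nudge prediction toward target
        s = s_next

# Learned values approach GAMMA ** (steps remaining until the reward),
# i.e. predictions of total discounted future reward from each state.
print([round(v, 2) for v in V[:N_STATES]])
```

Note how the update uses only the one-step error `delta` rather than waiting for the episode’s final outcome; this is what lets TD methods reason many steps into the future from purely local comparisons.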

Cool computing—why the future of electronics could lie in the cold

Modern computer chips generate a lot of heat—and consume large amounts of energy as a result. A promising approach to reducing this energy demand could lie in the cold, as highlighted by a new Perspective article by an international research team coordinated by Qing-Tai Zhao from Forschungszentrum Jülich. Savings could reach as high as 80%, according to the researchers.

The work was conducted in collaboration with Prof. Joachim Knoch from RWTH Aachen University and researchers from EPFL in Switzerland, TSMC and National Yang Ming Chiao Tung University (NYCU) in Taiwan, and the University of Tokyo. In the article published in Nature Reviews Electrical Engineering, the authors outline how conventional CMOS technology can be adapted for cryogenic operation using intelligent design strategies.

Data centers already consume vast amounts of electricity—and their energy consumption is expected to double by 2030 due to the rising energy demands of artificial intelligence, according to the International Energy Agency (IEA). The computer chips that run around the clock produce large amounts of heat and require considerable energy for cooling. But what if we flipped the script? What if the key to energy efficiency lay not in managing heat, but in embracing the cold?
