At OpenAI’s Developer Day, CEO Sam Altman showed off apps that run entirely inside the chat window—a new effort to turn ChatGPT into a platform.
New artificial intelligence-generated images that appear to be one thing, but become something else entirely when rotated, are helping scientists test the human mind.
The work by Johns Hopkins University perception researchers addresses a longstanding need for uniform stimuli to rigorously study how people mentally process visual information.
“These images are really important because we can use them to study all sorts of effects that scientists previously thought were nearly impossible to study in isolation—everything from size to animacy to emotion,” said first author Tal Boger, a Ph.D. student studying visual perception.
Alibaba’s CEO said the company would be pushing to develop advanced AI. Some in the U.S. have viewed China’s AI ambitions as more focused on applications of the technology.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued an advanced research concepts opportunity earlier this month (DARPA-EA-25-02-02) for the Hybridizing Biology and Robotics through Integration for Deployable Systems (HyBRIDS) program.
Bio-hybrid robotics
Bio-hybrid robotics combines living organisms with synthetic materials to create biorobots that, compared to traditional robots, can offer adaptability, self-healing, and energy efficiency.
Building a robot takes boatloads of technical skills, a whole lot of time, the right materials, of course – and maybe a little bit of organic life? Decades of science fiction have shaped our ideas of robots as non-biological entities. Think of batteries as the hearts, metal as the bones, and gears, pistons, and
A swarm of spherical rovers, blown by the wind like tumbleweeds, could enable large-scale and low-cost exploration of the Martian surface, according to results presented at the Joint Meeting of the Europlanet Science Congress and the Division for Planetary Sciences (EPSC-DPS) 2025.
Recent experiments in a state-of-the-art wind tunnel and field tests in a quarry demonstrate that the rovers could be set in motion and navigate over various terrains in conditions analogous to those found on Mars.
Tumbleweed rovers are lightweight, 5-meter-diameter spherical robots designed to harness the power of Martian winds for mobility. Swarms of the rovers could spread across the red planet, autonomously gathering environmental data and providing an unprecedented, simultaneous view of atmospheric and surface processes from different locations on Mars. A final, stationary phase would involve collapsing the rovers into permanent measurement stations dotted around the surface of Mars, providing long-term scientific measurements and potential infrastructure for future missions.
Fine-tuning large language models via reinforcement learning is computationally expensive, but researchers found a way to streamline the process.
What’s new: Qinsi Wang and colleagues at UC Berkeley and Duke University developed GAIN-RL, a method that accelerates reinforcement learning fine-tuning by selecting training examples automatically based on the model’s own internal signals, specifically the angles between vector representations of tokens. The code is available on GitHub.
Key insight: The cosine similarity between a model’s vector representations of input tokens governs the magnitude of gradient updates during training. Specifically, the sum of the pairwise similarities among the token representations that enter the model’s classification layer, called the angle concentration, determines how large the update is: Examples with higher angle concentration produce larger gradient updates. The magnitude of an update in turn determines the effectiveness of a training example, since the larger the update, the more the model learns from it. Prioritizing the most effective examples before transitioning to less effective ones enhances training efficiency while adding little preprocessing overhead.
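The angle-concentration idea above can be sketched in a few lines. This is a simplified illustration, not the paper’s exact formulation: it assumes we already have the token vectors that feed a model’s classification layer as a NumPy array, sums their pairwise cosine similarities, and uses that score to order training examples from most to least effective. The function names `angle_concentration` and `rank_examples` are hypothetical.

```python
import numpy as np

def angle_concentration(hidden_states: np.ndarray) -> float:
    """Sum of pairwise cosine similarities among token representations.

    hidden_states: (num_tokens, dim) array of the vectors that enter
    the model's classification layer for one training example.
    """
    # Normalize each token vector to unit length (guard against zeros).
    norms = np.linalg.norm(hidden_states, axis=1, keepdims=True)
    unit = hidden_states / np.clip(norms, 1e-12, None)
    # Cosine-similarity matrix: (num_tokens, num_tokens).
    cos = unit @ unit.T
    # Sum over distinct token pairs, excluding each token's
    # self-similarity on the diagonal.
    return float((cos.sum() - np.trace(cos)) / 2)

def rank_examples(examples):
    """Order examples by descending angle concentration, so the
    examples expected to yield larger gradient updates come first."""
    return sorted(examples, key=angle_concentration, reverse=True)
```

Tokens whose representations point in similar directions score high (and, per the insight above, should produce larger gradient updates), while mutually orthogonal representations score near zero; sorting by this score implements the prioritization step as cheap preprocessing.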