Brain chips get smarter. Elon Musk’s Neuralink gets competition

Recent advances suggest the technology is hitting its stride. The UC Davis team’s speech synthesis system represents a fundamental shift from previous approaches. Rather than translating brain signals into text and then synthesizing speech — a process that created significant delays — UC Davis’ system converts thoughts directly into sounds with near-instantaneous 10-millisecond latency.
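The low-latency gain comes from collapsing the pipeline: instead of waiting for a full text transcription before synthesis, the decoder emits audio frame by frame as neural data arrives. Below is a minimal Python sketch of that idea, assuming a hypothetical trained decoder; it is not the UC Davis system, and the frame size, channel count, and placeholder model are illustrative assumptions only.

```python
# Minimal sketch (not the UC Davis system): a direct neural-to-audio
# decoder that emits a short audio frame for every short window of
# neural activity, instead of decoding text first and synthesizing later.
# The decoder here is a hypothetical placeholder.

import numpy as np

FRAME_MS = 10          # decode one 10 ms window at a time
SAMPLE_RATE = 16_000   # audio samples per second
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000

def direct_decode_frame(neural_window: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained neural-to-audio decoder.

    A real decoder would map a short window of multi-channel neural
    activity straight to a waveform frame, so speech can be played back
    roughly one frame (~10 ms) after the corresponding brain activity.
    """
    # Placeholder: project the window onto a fixed random basis.
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((neural_window.size, SAMPLES_PER_FRAME))
    return np.tanh(neural_window.ravel() @ basis) * 0.1

def stream_direct(neural_frames):
    """Emit audio as soon as each neural frame arrives (low latency)."""
    for window in neural_frames:
        yield direct_decode_frame(window)

# Example: 100 frames of fake 128-channel neural data -> 1 s of audio.
fake_frames = [np.random.randn(128, 20) for _ in range(100)]
audio = np.concatenate(list(stream_direct(fake_frames)))
print(f"Decoded {audio.size / SAMPLE_RATE:.2f} s of audio, "
      f"per-frame latency ~{FRAME_MS} ms")
```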

Meanwhile, researchers at Carnegie Mellon achieved real-time control of individual robotic fingers using non-invasive EEG: participants wore a cap that reads brain signals through the skull. This suggests that some future brain-interface applications might not require surgery at all.
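A non-invasive decoder of this kind typically classifies short windows of scalp EEG into discrete movement commands. The sketch below, using synthetic data and scikit-learn, shows the general pattern of band-power features feeding a linear classifier; it is not the Carnegie Mellon system, and the channel count, window length, and finger labels are assumptions for illustration.

```python
# Minimal sketch (not the Carnegie Mellon system): classifying which
# finger to move from short windows of scalp EEG, using band-power
# features and a linear classifier. All data here is synthetic.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

N_CHANNELS, FS, WINDOW_S = 32, 250, 0.5        # channels, Hz, seconds
N_SAMPLES = int(FS * WINDOW_S)
FINGERS = ["thumb", "index", "middle"]

def band_power(window: np.ndarray) -> np.ndarray:
    """Mean log power per channel in the 8-30 Hz (mu/beta) band."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], d=1 / FS)
    band = (freqs >= 8) & (freqs <= 30)
    return np.log(spectrum[:, band].mean(axis=1) + 1e-12)

# Synthetic training data: each "finger" biases a different channel group.
rng = np.random.default_rng(1)
X, y = [], []
for label in range(len(FINGERS)):
    for _ in range(60):
        eeg = rng.standard_normal((N_CHANNELS, N_SAMPLES))
        eeg[label * 10:(label + 1) * 10] *= 2.0   # stronger rhythm here
        X.append(band_power(eeg))
        y.append(label)

clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))

# "Real-time" step: classify each new window as it arrives.
new_eeg = rng.standard_normal((N_CHANNELS, N_SAMPLES))
new_eeg[10:20] *= 2.0
pred = clf.predict(band_power(new_eeg).reshape(1, -1))[0]
print(f"Predicted finger command: {FINGERS[pred]}")
```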

A signal that never repeats—how the brain creates bookmarks to map time

The brain doesn’t merely register time—it structures it, according to new research from the Kavli Institute for Systems Neuroscience published in Science.

The research team, led by NTNU Nobel Laureates May-Britt and Edvard Moser at the Kavli Institute, is already known for its discovery of the brain’s sense of place.

Now they have shown that the brain also weaves a tapestry of time: it segments and organizes events into experiences, placing unique bookmarks on them so that our lives don’t become a blurry stream but rather a series of meaningful moments and memories we can revisit and learn from.

AI model analyzes speech to detect early neurological disorders with high accuracy

A research team led by Prof. Li Hai from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has developed a novel deep learning framework that significantly improves the accuracy and interpretability of detecting neurological disorders through speech. The findings were recently published in Neurocomputing.

“A slight change in the way we speak might be more than just a slip of the tongue—it could be a signal from the brain,” said Prof. Li Hai, who led the team. “Our new model can detect early symptoms of neurological diseases such as Parkinson’s, Huntington’s, and Wilson’s disease by analyzing voice recordings.”

Dysarthria is a common early symptom of various neurological disorders. Since speech abnormalities often reflect underlying neurodegenerative processes, voice signals have emerged as promising noninvasive biomarkers for the early screening and continuous monitoring of such conditions.
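In practice, such screening pipelines reduce a voice recording to acoustic descriptors and pass them to a classifier. The sketch below is a toy version of that idea on synthetic audio, not the published framework: the features (energy variability, zero-crossing rate, pause ratio), the random-forest model, and the fake corpus are all assumptions for illustration.

```python
# Minimal sketch (not the published framework): screening dysarthric
# speech by extracting simple acoustic features from a waveform and
# feeding them to a classifier. The audio corpus here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 16_000  # sampling rate (Hz)

def acoustic_features(waveform: np.ndarray) -> np.ndarray:
    """Crude proxies for articulation quality: energy variability,
    zero-crossing rate, and the fraction of low-energy (pause) frames."""
    frames = waveform[: len(waveform) // 400 * 400].reshape(-1, 400)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    pause_ratio = (energy < 0.1 * energy.mean()).mean()
    return np.array([energy.std() / (energy.mean() + 1e-12),
                     zcr.mean(), pause_ratio])

# Synthetic corpus: the "impaired" class gets an artificial pause.
rng = np.random.default_rng(2)
X, y = [], []
for label in (0, 1):  # 0 = control, 1 = impaired (toy labels)
    for _ in range(80):
        wav = rng.standard_normal(FS * 2) * 0.3
        if label == 1:
            wav[FS // 2 : FS] *= 0.01   # crude half-second "pause"
        X.append(acoustic_features(wav))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(np.array(X), np.array(y))
print("toy training accuracy:", clf.score(np.array(X), np.array(y)))
```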

AI That Thinks Like Us: New Model Predicts Human Decisions With Startling Accuracy

A new AI model mimics human thinking with striking accuracy, even in unfamiliar scenarios. Researchers at Helmholtz Munich created the system, named Centaur, and trained it on data from a large collection of psychological experiments capturing human decisions.

AI Maps the Mood of Your City — And It’s Surprisingly Accurate

What if a city’s mood could be mapped like weather? Researchers at the University of Missouri are using AI to do exactly that—by analyzing geotagged Instagram posts and pairing them with Google Street View images, they’re building emotional maps of urban spaces.

These “sentiment maps” reveal how people feel in specific locations, helping city planners design areas that not only function better but also feel better. With potential applications ranging from safety to disaster response, this human-centered tech could soon become part of the city’s real-time dashboard.
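Conceptually, a sentiment map is per-location aggregation: score each geotagged post, bin it into a grid cell, and average. The sketch below shows only that aggregation step, with made-up coordinates and precomputed sentiment values; it is not the Missouri team’s pipeline, which scores real posts and street-level imagery with trained models.

```python
# Minimal sketch (not the Missouri pipeline): aggregating sentiment
# scores of geotagged posts onto a coarse grid to produce a simple
# "mood map". Coordinates and scores below are made up.

from collections import defaultdict

# (latitude, longitude, sentiment in [-1, 1]) for hypothetical posts
posts = [
    (38.951, -92.334, 0.8),
    (38.952, -92.335, 0.6),
    (38.940, -92.320, -0.4),
    (38.941, -92.321, -0.7),
    (38.951, -92.333, 0.9),
]

CELL = 0.01  # grid cell size in degrees (roughly 1 km)

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to its grid-cell index."""
    return (round(lat / CELL), round(lon / CELL))

sums: dict = defaultdict(float)
counts: dict = defaultdict(int)
for lat, lon, score in posts:
    key = cell_of(lat, lon)
    sums[key] += score
    counts[key] += 1

mood_map = {key: sums[key] / counts[key] for key in sums}
for key, mood in sorted(mood_map.items()):
    label = "positive" if mood > 0 else "negative"
    print(f"cell {key}: mean sentiment {mood:+.2f} ({label})")
```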

Human-Centric City Vision