Researchers map connections between the brain’s structure and function

Using an algorithm they call the Krakencoder, researchers at Weill Cornell Medicine are a step closer to unraveling how the brain’s wiring supports the way we think and act. The study, published June 5 in Nature Methods, used imaging data from the Human Connectome Project to align neural activity with its underlying circuitry.

Mapping how the brain’s anatomical connections and activity patterns relate to behavior is crucial not only for understanding how the brain works generally but also for identifying biomarkers of disease, predicting outcomes in neurological disorders and designing personalized interventions.

The brain consists of a complex network of interconnected neurons whose collective activity drives our behavior. The structural connectome represents the physical wiring of the brain, the map of how different regions are anatomically connected.
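In practice, a structural connectome is often represented as a symmetric adjacency matrix, where entry (i, j) holds the connection strength between brain regions i and j. A minimal sketch of that representation (illustrative only; this is not the Krakencoder algorithm itself, and the region names and values are made up):

```python
import numpy as np

# Toy structural connectome: symmetric adjacency matrix over 4 brain regions,
# where entry (i, j) is the anatomical connection strength between i and j.
regions = ["A", "B", "C", "D"]
structural = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.5, 0.2],
    [0.1, 0.5, 0.0, 0.9],
    [0.0, 0.2, 0.9, 0.0],
])

# Node "strength" (total connectivity of each region) is a common graph
# metric computed from such matrices.
strength = structural.sum(axis=1)
for name, s in zip(regions, strength):
    print(f"region {name}: strength {s:.1f}")
```

Functional connectomes (correlations of activity between regions) take the same matrix form, which is what makes aligning the two representations a well-posed problem.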

IBM’s Starling quantum computer: 20,000X faster than today’s quantum computers

IBM has just unveiled its boldest quantum computing roadmap yet: Starling, the first large-scale, fault-tolerant quantum computer—coming in 2029. Capable of running 20,000X more operations than today’s quantum machines, Starling could unlock breakthroughs in chemistry, materials science, and optimization.

According to IBM, this is not just a pie-in-the-sky roadmap: they actually have the ability to make Starling happen.

In this exclusive conversation, I speak with Jerry Chow, IBM Fellow and Director of Quantum Systems, about the engineering breakthroughs that are making this possible… especially a radically more efficient error correction code and new multi-layered qubit architectures.

We cover:
- The shift from millions of physical qubits to manageable logical qubits.
- Why IBM is using quantum low-density parity check (qLDPC) codes.
- How modular quantum systems (like Kookaburra and Cockatoo) will scale the technology.
- Real-world quantum-classical hybrid applications already happening today.
- Why now is the time for developers to start building quantum-native algorithms.
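The "low-density parity check" idea has a classical analogue that is easy to sketch: each check is a sparse parity constraint over the codeword bits, and a nonzero syndrome flags an error. The matrix below is illustrative only; IBM's qLDPC codes use quantum stabilizer checks, not this exact construction:

```python
import numpy as np

# Classical parity-check sketch (illustrative analogue of LDPC; quantum
# LDPC codes replace these rows with sparse stabilizer measurements).
# Each row of H is one sparse parity check over the 6 codeword bits.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

def syndrome(word):
    """Parity of each check; an all-zero syndrome means no detected error."""
    return H @ np.asarray(word) % 2

codeword = [1, 0, 1, 1, 1, 0]
print(syndrome(codeword))       # all checks satisfied: [0 0 0]
corrupted = [1, 1, 1, 1, 1, 0]  # bit 1 flipped
print(syndrome(corrupted))      # violated checks flagged: [1 1 0]
```

The "low-density" property (few ones per row) is what keeps each check local and cheap, which is the efficiency Chow highlights relative to surface-code-style error correction.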

00:00 Introduction to the Future of Computing.
01:04 IBM’s Jerry Chow.
01:49 Quantum Supremacy.
02:47 IBM’s Quantum Roadmap.
04:03 Technological Innovations in Quantum Computing.
05:59 Challenges and Solutions in Quantum Computing.
09:40 Quantum Processor Development.
14:04 Quantum Computing Applications and Future Prospects.
20:41 Personal Journey in Quantum Computing.
24:03 Conclusion and Final Thoughts.

Out of the string theory swampland: New models may resolve problem that conflicts with dark energy

String theory has long been touted as physicists’ best candidate for describing the fundamental nature of the universe, with elementary particles and forces described as vibrations of tiny threads of energy. But in the early 21st century, it was realized that most of the versions of reality described by string theory’s equations cannot match up with observations of our own universe.

In particular, conventional string theory’s predictions are incompatible with the observation of dark energy, which appears to be causing our universe’s expansion to speed up, and with viable theories of quantum gravity, instead predicting a vast ‘swampland’ of impossible universes.

Now, a new analysis by FQxI physicist Eduardo Guendelman, of Ben-Gurion University of the Negev, in Israel, shows that an exotic subset of string models—in which the tension of strings is generated dynamically—could provide an escape route out of the string theory swampland.
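The swampland criterion at issue is often phrased via the de Sitter conjecture; the formulation below is one commonly used version from the swampland literature, not necessarily the exact criterion in Guendelman's analysis:

```latex
% De Sitter swampland conjecture (one common formulation): a scalar
% potential V descending from string theory should satisfy
\[
  \left|\nabla V\right| \;\geq\; \frac{c}{M_{\mathrm{Pl}}}\, V
  \qquad \text{or} \qquad
  \min\!\left(\nabla^{2} V\right) \;\leq\; -\frac{c'}{M_{\mathrm{Pl}}^{2}}\, V ,
\]
% with c, c' positive order-one constants. A metastable de Sitter vacuum
% (V > 0 with \nabla V = 0), the simplest model of dark energy, violates
% both conditions; that conflict is the "swampland" tension.
```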

Quantum machine learning: Small-scale photonic quantum processor can already outperform classical counterparts

One of the current hot research topics is the combination of two of the most recent technological breakthroughs: machine learning and quantum computing.

An experimental study shows that even small-scale quantum computers can boost the performance of machine learning algorithms.

This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, published in Nature Photonics, shows promising results for optical quantum computers.
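A common pattern in quantum machine learning experiments is a kernel method in which pairwise similarities between data points are estimated on a quantum device. The sketch below uses a classical RBF kernel as a stand-in for a quantum-evaluated one; this is an assumption for illustration, not the Vienna team's actual protocol:

```python
import numpy as np

# Kernel-based classification where the similarity function could, in
# principle, be replaced by state-overlap estimates from a photonic
# processor. Classical RBF kernel used here as a stand-in (assumption).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1, 0.3, (10, 2)),   # class 0 cluster
                     rng.normal(1, 0.3, (10, 2))])   # class 1 cluster
y_train = np.array([0] * 10 + [1] * 10)

def kernel(a, b, gamma=1.0):
    """RBF similarity; a quantum device would return overlap estimates."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def predict(x):
    # Kernel-weighted class vote over the training set.
    sims = np.array([kernel(x, xt) for xt in X_train])
    return int(sims[y_train == 1].sum() > sims[y_train == 0].sum())

print(predict(np.array([-1.0, -1.0])))  # near class-0 cluster -> 0
print(predict(np.array([1.0, 1.0])))    # near class-1 cluster -> 1
```

The claimed quantum advantage in such studies comes from kernels that are hard to evaluate classically, not from the surrounding classical machinery shown here.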

Novel analytics framework measures empathy of people captured in video recordings

Empathy, the ability to understand what others are feeling and emotionally connect with their experiences, can be highly advantageous for humans, as it allows them to strengthen relationships and thrive in some professional settings. The development of tools for reliably measuring people’s empathy has thus been a key objective of many past psychology studies.

Most existing methods for measuring empathy rely on self-reports and questionnaires, such as the interpersonal reactivity index (IRI), the Empathy Quotient (EQ) test and the Toronto Empathy Questionnaire (TEQ). Over the past few years, however, some scientists have been trying to develop alternative techniques for measuring empathy, some of which rely on machine learning algorithms or other computational models.

Researchers at Hong Kong Polytechnic University have recently introduced a new machine learning-based video analytics framework that could be used to predict the empathy of people captured in video recordings. Their framework, introduced in a preprint paper published in SSRN, could prove to be a valuable tool for conducting organizational psychology research, as well as other empathy-related studies.

Goldman-Hodgkin-Katz equation, reverse electrodialysis, and everything in between

The Goldman-Hodgkin-Katz model has long guided transport analysis in nanopores and ion channels. This paper (with a companion paper in Physical Review Letters) revisits the model, showing that its constant electric field assumption leads to inconsistencies. A new self-consistent theory, inspired by reverse electrodialysis, offers a unified framework for ion transport.
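The GHK voltage equation referenced here has a standard closed form: the membrane potential is set by permeability-weighted ion concentrations, with anions (Cl⁻) swapping their inside and outside terms. A minimal numerical sketch using textbook-style mammalian values (illustrative numbers, not from the paper):

```python
import math

# GHK voltage equation for a membrane permeable to K+, Na+, and Cl-:
#   Vm = (RT/F) * ln( (P_K[K]o + P_Na[Na]o + P_Cl[Cl]i)
#                   / (P_K[K]i + P_Na[Na]i + P_Cl[Cl]o) )
# Cl- is an anion, so its inside/outside concentrations are swapped.
R, T, F = 8.314, 310.0, 96485.0  # J/(mol K), body temperature in K, C/mol

def ghk_voltage(P, out, inn):
    """P: relative permeabilities; out/inn: concentrations in mM,
    each a dict keyed by 'K', 'Na', 'Cl'. Returns Vm in volts."""
    num = P['K'] * out['K'] + P['Na'] * out['Na'] + P['Cl'] * inn['Cl']
    den = P['K'] * inn['K'] + P['Na'] * inn['Na'] + P['Cl'] * out['Cl']
    return (R * T / F) * math.log(num / den)

# Textbook-style values for a mammalian neuron (illustrative).
P = {'K': 1.0, 'Na': 0.05, 'Cl': 0.45}
out = {'K': 5.0, 'Na': 145.0, 'Cl': 110.0}
inn = {'K': 140.0, 'Na': 10.0, 'Cl': 10.0}
print(f"Vm = {1000 * ghk_voltage(P, out, inn):.1f} mV")  # approx. -65 mV
```

The constant-field assumption the paper challenges is baked into this derivation: the equation assumes the electric field is uniform across the membrane, which is exactly what the new self-consistent theory relaxes.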

Probing hyperon potential to resolve a longstanding puzzle in neutron stars

A research team led by Prof. Yong Gaochan from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences has proposed a novel experimental method to probe the hyperon potential, offering new insights into resolving the longstanding “hyperon puzzle” in neutron stars. These findings were published in Physics Letters B and Physical Review C.

According to conventional theories, the extreme densities within neutron stars lead to the production of hyperons containing strange quarks (e.g., Λ particles). These hyperons significantly soften the equation of state (EoS) and reduce the maximum mass of neutron stars. However, astronomers have discovered neutron stars with masses approaching or even exceeding twice that of the sun, contradicting theoretical predictions.

Hyperon potential refers to the interaction potential between a hyperon and a nucleon. Aiming to resolve the “neutron star hyperon puzzle,” the study of hyperon potential has emerged as a frontier topic in the interdisciplinary field of nuclear and astrophysics. Currently, it is believed that if hyperon potentials exhibit stronger repulsion at high densities, they could counteract the softening effect of the EoS, thereby allowing massive neutron stars to exist.
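The role of the hyperon potential can be stated in one line: hyperons appear only once it becomes energetically favorable to convert neutrons into them. The threshold condition below is the standard textbook form, not a result specific to this study:

```latex
% Threshold for Lambda hyperons in beta-equilibrated dense matter:
% they appear once the neutron chemical potential reaches
\[
  \mu_n \;=\; \mu_\Lambda \;=\; m_\Lambda c^{2} + U_\Lambda(\rho),
\]
% where U_Lambda(rho) is the density-dependent hyperon potential.
% A more repulsive U_Lambda at high density raises this threshold,
% delaying hyperon onset, stiffening the EoS, and permitting larger
% maximum neutron-star masses.
```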

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: low-complexity tasks where standard models surprisingly outperform LRMs, medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.
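Tower of Hanoi is a typical example of the kind of controllable puzzle environment described: complexity is tuned by a single knob (the disk count n), and the optimal solution length grows exactly as 2^n − 1. A minimal generator of the optimal move sequence (a generic sketch, not the paper's evaluation harness):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Optimal move sequence for n disks from peg src to peg dst.
    Length is 2**n - 1, so each added disk doubles the puzzle's
    compositional complexity while the logical structure stays fixed."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)   # park n-1 disks on the spare peg
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, src, dst))  # bring the n-1 disks on top

# Complexity scales exponentially with n: a precise knob for evaluation.
for n in range(1, 6):
    print(n, len(hanoi(n)))
```

Because the ground-truth solution is computable, model outputs can be checked move by move rather than only at the final answer, which is what makes the reasoning-trace analysis possible.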

*Equal contribution. †Work done during an internship at Apple.

Physicist Brian Miller: The Non-Algorithmic Nature of Life

For decades, we’ve thought the control center of life lies in DNA. But a new scientific framework is emerging that challenges that idea, and suggests that vast portions of the genome are immaterial and lie outside the physical world. Today, physicist Dr. Brian Miller shares his perspective on the cutting-edge, potentially revolutionary research of mathematical biologist Dr. Richard Sternberg on the immaterial aspects of the genome. In this exchange, Dr. Miller shares several examples of the immaterial nature of life. These ideas point towards the earliest stages of the next great scientific revolution and have significant implications for the intelligent design debate.