
String theory has long been touted as physicists’ best candidate for describing the fundamental nature of the universe, with elementary particles and forces described as vibrations of tiny threads of energy. But in the early 21st century, it was realized that most of the versions of reality described by string theory’s equations cannot match up with observations of our own universe.

In particular, conventional string theory’s predictions are incompatible with the observation of dark energy, which appears to be causing our universe’s expansion to speed up, and with viable theories of quantum gravity, instead yielding a vast ‘swampland’ of impossible universes.

Now, a new analysis by FQxI physicist Eduardo Guendelman, of Ben-Gurion University of the Negev, in Israel, shows that an exotic subset of string models—in which the tension of strings is generated dynamically—could provide an escape route out of the string theory swampland.

One of the current hot research topics is the combination of two of the most recent technological breakthroughs: machine learning and quantum computing.

An experimental study shows that even small-scale quantum computers can already boost the performance of machine learning algorithms.

This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, published in Nature Photonics, shows promise for optical quantum computers.
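To give a flavor of what "quantum-boosted" machine learning can mean in practice, here is a minimal, classically simulated sketch of a quantum-kernel classifier: data points are encoded into small quantum states, and the overlaps between those states serve as the kernel of an ordinary support vector machine. The two-qubit feature map, the toy dataset and the use of scikit-learn below are assumptions made purely for illustration, not the photonic setup used by the Vienna team.

```python
# Illustrative sketch only: a classical simulation of a small "quantum kernel"
# classifier, in the general spirit of quantum-enhanced machine learning.
# The feature map, qubit count and dataset are assumptions for this example.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    """Encode a 2D data point into a 2-qubit product state via Y-rotations."""
    q0 = ry(x[0]) @ np.array([1.0, 0.0])
    q1 = ry(x[1]) @ np.array([1.0, 0.0])
    return np.kron(q0, q1)                    # 4-dimensional state vector

def quantum_kernel(A, B):
    """Kernel matrix K[i, j] = |<phi(a_i)|phi(b_j)>|^2."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

# Toy dataset: two noisy clusters.
X = np.vstack([rng.normal(0.5, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

K = quantum_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```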

Empathy, the ability to understand what others are feeling and emotionally connect with their experiences, can be highly advantageous for humans, as it allows them to strengthen relationships and thrive in some professional settings. The development of tools for reliably measuring people’s empathy has thus been a key objective of many past psychology studies.

Most existing methods for measuring empathy rely on self-reports and questionnaires, such as the Interpersonal Reactivity Index (IRI), the Empathy Quotient (EQ) test and the Toronto Empathy Questionnaire (TEQ). Over the past few years, however, some scientists have been trying to develop alternative techniques for measuring empathy, some of which rely on machine learning algorithms or other computational models.

Researchers at Hong Kong Polytechnic University have recently introduced a new machine learning-based video analytics framework that could be used to predict the empathy of people captured in video recordings. Their framework, introduced in a preprint paper posted on SSRN, could prove to be a valuable tool for conducting organizational psychology research, as well as other empathy-related studies.
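As a rough illustration of the general idea of predicting questionnaire-based empathy scores from video-derived behavioral cues, the sketch below trains a standard regressor on placeholder data. The feature names, the TEQ-style target and the model choice are hypothetical stand-ins, not details of the Hong Kong Polytechnic University framework.

```python
# Purely illustrative sketch: regressing self-reported empathy scores on
# hypothetical features aggregated from video clips. All data here is random.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
features = np.column_stack([
    rng.uniform(0, 1, n),    # hypothetical: fraction of time making eye contact
    rng.uniform(0, 1, n),    # hypothetical: facial-expression mimicry score
    rng.normal(0, 1, n),     # hypothetical: vocal pitch variation
])
teq_scores = rng.uniform(20, 60, n)  # placeholder questionnaire scores

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, features, teq_scores, cv=5).mean())
```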

The Goldman-Hodgkin-Katz model has long guided transport analysis in nanopores and ion channels. This paper (with a companion paper in Physical Review Letters) revisits the model, showing that its constant electric field assumption leads to inconsistencies. A new self-consistent theory, inspired by reverse electrodialysis, offers a unified framework for ion transport.
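As background, the classical GHK voltage relation that the new self-consistent theory revisits can be evaluated in a few lines; the relative permeabilities and ion concentrations below are generic textbook values for a mammalian neuron, chosen purely for illustration and not taken from the paper.

```python
# The classical Goldman-Hodgkin-Katz (GHK) voltage equation, which rests on the
# constant-field assumption that the new theory re-examines.
import math

R = 8.314        # gas constant, J / (mol K)
T = 310.0        # temperature, K (body temperature)
F = 96485.0      # Faraday constant, C / mol

def ghk_voltage(P_K, P_Na, P_Cl, K_out, K_in, Na_out, Na_in, Cl_out, Cl_in):
    """Resting membrane potential under the constant-field (GHK) assumption."""
    num = P_K * K_out + P_Na * Na_out + P_Cl * Cl_in   # anion terms are swapped
    den = P_K * K_in + P_Na * Na_in + P_Cl * Cl_out
    return (R * T / F) * math.log(num / den)

# Relative permeabilities P_K : P_Na : P_Cl ~ 1 : 0.05 : 0.45; concentrations in mM.
v = ghk_voltage(1.0, 0.05, 0.45,
                K_out=5, K_in=140, Na_out=145, Na_in=10, Cl_out=110, Cl_in=10)
print(f"GHK resting potential: {v * 1e3:.1f} mV")   # roughly -65 mV
```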

A research team led by Prof. Yong Gaochan from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences has proposed a novel experimental method to probe the hyperon potential, offering new insights into resolving the longstanding “hyperon puzzle” in neutron stars. These findings were published in Physics Letters B and Physical Review C.

According to conventional theories, the extreme densities within neutron stars lead to the production of hyperons containing strange quarks (e.g., Λ particles). These hyperons significantly soften the equation of state (EoS) and reduce the maximum mass of neutron stars. However, astronomers have discovered neutron stars with masses approaching or even exceeding twice that of the sun, contradicting theoretical predictions.

Hyperon potential refers to the interaction potential between a hyperon and a nucleon. Aiming to resolve the “neutron star hyperon puzzle,” the study of hyperon potential has emerged as a frontier topic at the intersection of nuclear physics and astrophysics. Currently, it is believed that if hyperon potentials exhibit stronger repulsion at high densities, they could counteract the softening effect on the EoS, thereby allowing massive neutron stars to exist.
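For context (standard background, not a result of the new papers): a neutron star's structure follows from integrating the Tolman-Oppenheimer-Volkoff (TOV) equations together with an equation of state P(ε). A softer equation of state supplies less pressure at a given energy density, so the star can support less mass before collapsing, which is why sufficiently repulsive hyperon potentials at high density could reconcile hyperons with two-solar-mass stars.

```latex
% Standard TOV equations for a static, spherically symmetric star,
% with P the pressure, \varepsilon the energy density and m(r) the enclosed mass.
\begin{align}
  \frac{dP}{dr} &= -\,\frac{G\,\bigl[\varepsilon(r) + P(r)\bigr]
                    \bigl[m(r)\,c^{2} + 4\pi r^{3} P(r)\bigr]}
                   {c^{4}\, r^{2}\,\bigl[1 - \tfrac{2 G m(r)}{r c^{2}}\bigr]}, \\[4pt]
  \frac{dm}{dr} &= \frac{4\pi r^{2}\,\varepsilon(r)}{c^{2}}.
\end{align}
```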

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: low-complexity tasks where standard models surprisingly outperform LRMs, medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.
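To make the idea of a "controllable puzzle environment" concrete, here is a minimal sketch in the spirit of the setup described above, using Tower of Hanoi, one of the puzzles the paper studies: the number of disks N dials the compositional complexity, the optimal solution length (2^N - 1) is known exactly, and a model's proposed move list can be replayed and verified step by step. The interface below is an illustrative assumption, not the authors' actual evaluation harness.

```python
# Minimal controllable puzzle environment: Tower of Hanoi with N disks.
def optimal_moves(n, src=0, aux=1, dst=2):
    """Return the optimal move list [(disk, from_peg, to_peg), ...] for n disks."""
    if n == 0:
        return []
    return (optimal_moves(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + optimal_moves(n - 1, aux, src, dst))

def validate(n, moves):
    """Replay a candidate move list, checking legality and the final goal state."""
    pegs = [list(range(n, 0, -1)), [], []]     # peg 0 holds disks n..1, largest at bottom
    for disk, a, b in moves:
        if not pegs[a] or pegs[a][-1] != disk:
            return False                       # moved a disk that is not on top of peg a
        if pegs[b] and pegs[b][-1] < disk:
            return False                       # placed a larger disk on a smaller one
        pegs[b].append(pegs[a].pop())
    return pegs[2] == list(range(n, 0, -1))    # all disks must end on the target peg

for n in range(1, 11):                         # sweep compositional complexity
    moves = optimal_moves(n)                   # in practice: moves proposed by the model
    print(f"N={n:2d}  optimal length={len(moves):4d}  valid={validate(n, moves)}")
```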


For decades, we’ve thought the control center of life lies in DNA. But a new scientific framework is emerging that challenges that idea, and suggests that vast portions of the genome are immaterial and lie outside the physical world. Today, physicist Dr. Brian Miller shares his perspective on the cutting-edge, potentially revolutionary research of mathematical biologist Dr. Richard Sternberg on the immaterial aspects of the genome. In this exchange, Dr. Miller shares several examples of the immaterial nature of life. These ideas point towards the earliest stages of the next great scientific revolution and have significant implications for the intelligent design debate.

Machine learning models have seeped into the fabric of our lives, from curating playlists to explaining hard concepts in a few seconds. Beyond convenience, state-of-the-art algorithms are finding their way into modern-day medicine as a powerful potential tool. In one such advance, published in Cell Systems, Stanford researchers are using machine learning to improve the efficacy and safety of targeted cell and gene therapies by potentially using our own proteins.

Most human diseases occur due to the malfunctioning of proteins in our bodies, either systemically or locally. Naturally, introducing a new therapeutic protein to compensate for the one that is malfunctioning would be ideal.

Although nearly all therapeutic antibodies are either fully human or engineered to look human, a similar approach has yet to make its way to other therapeutic proteins, especially those that operate inside cells, such as the proteins involved in CAR-T and CRISPR-based therapies. These still run the risk of triggering immune responses. To solve this problem, researchers at the Gao Lab have now turned to machine learning models.

No image is infinitely sharp. For 150 years, it has been known that no matter how ingeniously you build a microscope or a camera, there are always fundamental resolution limits that cannot be exceeded in principle. The position of a particle can never be measured with infinite precision; a certain amount of blurring is unavoidable. This limit does not result from technical weaknesses, but from the physical properties of light and the transmission of information itself.

Researchers at TU Wien (Vienna), the University of Glasgow and the University of Grenoble therefore posed the question: Where is the absolute limit of precision that is possible with optical methods? And how can this limit be approached as closely as possible?

And indeed, the international team succeeded in specifying a lowest limit for the theoretically achievable precision and in developing AI algorithms that, after appropriate training, come very close to this limit. This strategy is now set to be employed in imaging procedures, such as those used in medicine. The study is published in the journal Nature Photonics.
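For a sense of scale, a standard back-of-the-envelope bound from localization microscopy (not the new bound derived in this study) says that the position of a single emitter imaged with a roughly Gaussian point spread function can be pinned down to about the PSF width divided by the square root of the number of detected photons. The wavelength and numerical aperture below are typical values chosen only for illustration.

```python
# Shot-noise-limited localization precision: sigma_psf / sqrt(N) for N photons.
import math

wavelength_nm = 600.0
numerical_aperture = 1.4
sigma_psf_nm = 0.21 * wavelength_nm / numerical_aperture   # approximate Gaussian PSF width

for n_photons in (100, 1_000, 10_000):
    precision_nm = sigma_psf_nm / math.sqrt(n_photons)
    print(f"N = {n_photons:6d} photons -> localization precision ~ {precision_nm:5.2f} nm")
```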