
The researcher added that with better data on the horizon, including the first public release of galaxy-clustering data from DESI last week, the team will re-apply their methods, compare the new results with their current findings, and test for statistically significant differences.

“I think there are more questions than answers at this point,” Chen said. “This research certainly reinforces the idea that different cosmological datasets are beginning to be in tension when interpreted within the standard ΛCDM model of cosmology.”

Almost every galaxy hosts a supermassive black hole at its center. When galaxies merge, their two black holes spiral in toward each other and eventually coalesce through gravitational-wave emission. Within a few billion years, this process will play out close to home, as our own Milky Way collides with its nearest massive neighbor, the Andromeda galaxy.

If the two black holes have different masses, the emission of gravitational waves is asymmetric, causing the merger product to recoil. The intense burst of gravitational waves emitted in a preferred direction during the final plunge kicks the remnant black hole in the opposite direction through the rocket effect. The end result is that gravitational waves can propel the remnant to speeds of up to a few percent of the speed of light. The recoiling black hole behaves like the payload of a rocket powered by gravitational waves.
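For scale, here is a quick back-of-the-envelope conversion of "a few percent of the speed of light"; the escape-speed comparison is a standard benchmark, not a figure from the paper:

```latex
% "A few percent of the speed of light," in round numbers:
v_{\mathrm{kick}} \sim (0.01\text{--}0.03)\,c \approx 3{,}000\text{--}9{,}000\ \mathrm{km\,s^{-1}}
% For comparison, the escape speed from a large galaxy is only a few
% hundred km/s, so the strongest kicks can unbind the remnant entirely,
% while weaker ones merely displace it from the galactic center.
```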

In 2007, I published a single-authored paper in the prestigious journal Physical Review Letters, suggesting that a gravitational-wave recoil could displace a black hole from the galactic center and endow it with fast motion relative to the background stars. If the kick is modest, dynamical friction against the background gas or stars would eventually return the black hole to the center.



Mathematics is like nothing else. The truths of math seem to be unrelated to anything else, independent of human beings, independent of the universe. That 2 + 3 = 5 cannot not be true; this means that 3 + 2 = 5 would be true even if there had never been any human beings, even if there had never been a universe! What, then, deeply, is mathematics?



Mark Balaguer is Professor of Philosophy at California State University, Los Angeles. His major book is Platonism and Anti-Platonism in Mathematics.

Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work. In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced. But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations. Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.

The contribution of our new work, published in Nature, is the finding that a recent development in computer science – one that yields significant improvements in performance on reinforcement-learning problems – may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.

Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
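To make the idea concrete, here is a minimal sketch of temporal-difference (TD) learning, the classic algorithm family behind this line of work. The toy environment, state count, and parameters below are invented for illustration and are not taken from the paper:

```python
# Minimal temporal-difference (TD) learning sketch: an agent learns the
# expected future reward (value) of each state in a tiny 5-state chain.
# Reaching the final state pays a reward of 1. All names and parameters
# here are illustrative.

N_STATES = 5          # states 0..4; state 4 is terminal and rewarding
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor for future rewards

values = [0.0] * N_STATES  # learned reward predictions, one per state

for episode in range(1000):
    state = 0
    while state != N_STATES - 1:
        next_state = state + 1                      # simple forward walk
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # TD error: (observed reward + discounted prediction at the next
        # state) minus the current prediction.
        td_error = reward + GAMMA * values[next_state] - values[state]
        values[state] += ALPHA * td_error           # nudge the prediction
        state = next_state

print(values)  # predictions approach [GAMMA**3, GAMMA**2, GAMMA, 1.0, 0.0]
```

The td_error term is the quantity of interest here: in the standard theory, the phasic activity of dopamine neurons resembles exactly this kind of reward-prediction error.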

Amidst the continued struggle to treat non-small-cell lung cancer, a new study led by Stanford University scientists suggests that a patient’s response to immunotherapy may hinge on how immune cells cluster around tumors. Their results reveal that spatial arrangements of certain immune cells within tumors can serve as powerful predictors of treatment response, surpassing existing biomarker tests.

Lung cancer leads global cancer mortality, and non-small-cell variants make up more than 80% of cases. Immune checkpoint inhibitors have transformed therapy yet help only 27–45% of recipients.

Reliable predictive biomarkers for immunotherapy response have eluded clinicians, who currently rely on PD-L1 immunohistochemistry, tumor mutational burden, and microsatellite stability tests, each of which offers only modest predictive performance across trials and is prone to inconsistency.
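As a rough illustration of what a "spatial" biomarker means in practice, here is a toy sketch that scores how often tumor cells have an immune cell nearby. The coordinates, radius, and score below are all invented for illustration and are not the metric or pipeline used in the Stanford study:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy spatial-proximity score: the fraction of simulated tumor cells
# that have at least one immune cell within a fixed radius.

rng = np.random.default_rng(0)
tumor_xy = rng.uniform(0, 1000, size=(500, 2))   # tumor-cell positions (µm)
immune_xy = rng.uniform(0, 1000, size=(200, 2))  # immune-cell positions (µm)

RADIUS = 50.0  # interaction radius in micrometers (illustrative choice)

tree = cKDTree(immune_xy)
# For each tumor cell, count immune neighbors within RADIUS.
neighbor_counts = tree.query_ball_point(tumor_xy, r=RADIUS, return_length=True)
proximity_score = np.mean(neighbor_counts > 0)

print(f"Fraction of tumor cells with an immune cell within {RADIUS} µm: "
      f"{proximity_score:.2f}")
```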

From smartphones and TVs to credit cards, technologies that manipulate light, many of them based on holography, are deeply embedded in our daily lives. However, conventional holographic technologies have faced limitations, particularly in displaying multiple images on a single screen and in maintaining high-resolution image quality.

Recently, a research team led by Professor Junsuk Rho at POSTECH (Pohang University of Science and Technology) has developed a groundbreaking metasurface technology that can display up to 36 holographic images on a surface thinner than a human hair. This research has been published in Advanced Science.

This achievement is driven by a special nanostructure known as a metasurface. Hundreds of times thinner than a human hair, the metasurface is capable of precisely manipulating light as it passes through. The team fabricated nanometer-scale pillars using silicon nitride, a material known for its robustness and excellent optical transparency. These pillars, referred to as meta-atoms, allow for fine control of light on the metasurface.
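To illustrate the underlying principle, here is a minimal sketch of phase-only hologram design using the generic Gerchberg-Saxton algorithm: in a metasurface, each meta-atom would supply the computed local phase delay, and the far field forms the image. This toy example is a stand-in under those assumptions, not the POSTECH team's actual design method:

```python
import numpy as np

# Gerchberg-Saxton sketch: find a phase-only mask whose far-field
# intensity approximates a target image. The far field of a thin phase
# mask is modeled here by a 2D FFT.

N = 128
target = np.zeros((N, N))
target[48:80, 48:80] = 1.0             # toy target: a bright square
target_amp = np.sqrt(target)           # amplitude corresponding to target

phase = np.random.default_rng(0).uniform(0, 2 * np.pi, (N, N))
for _ in range(50):
    far = np.fft.fft2(np.exp(1j * phase))          # propagate to far field
    far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
    near = np.fft.ifft2(far)                       # propagate back
    phase = np.angle(near)                         # keep only the phase

replay = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
print("correlation with target:",
      np.corrcoef(replay.ravel(), target.ravel())[0, 1])
```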

The precise measurement of states in atomic and molecular systems can help to validate fundamental physics theories and their predictions. Among the platforms suited to such tests are so-called molecular hydrogen ions (MHI): diatomic ions consisting of two hydrogen nuclei (i.e., protons or their isotopes) and a single electron.

Compared to atoms, these molecular ions have a more complex internal structure, as they contain two nuclei instead of one. Even when they are in their lowest possible electronic energy level (i.e., the electronic ground state), these two nuclei can still rotate and vibrate, producing a wide range of rovibrational states.
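For orientation, the textbook rigid-rotor/harmonic-oscillator approximation below shows how these rovibrational levels are labeled; the high-precision theory used for MHI spectroscopy goes far beyond this leading-order form:

```latex
% Leading-order rovibrational energy of a diatomic molecule, with
% vibrational quantum number v and rotational quantum number J:
E(v, J) \approx \hbar\,\omega_e \left(v + \tfrac{1}{2}\right) + B_e\, J(J+1)
% Here \omega_e is the vibrational frequency and B_e the rotational
% constant; the rovibrational ground state is (v, J) = (0, 0).
```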

Researchers at the Max Planck Institute for Nuclear Physics recently introduced a new method to precisely control and non-destructively measure the rovibrational ground state of a single molecular hydrogen ion in a Penning trap (i.e., a device that confines charged particles using static electric and magnetic fields).

Modern computer chips generate a lot of heat—and consume large amounts of energy as a result. A promising approach to reducing this energy demand could lie in the cold, as highlighted by a new Perspective article by an international research team coordinated by Qing-Tai Zhao from Forschungszentrum Jülich. Savings could reach as high as 80%, according to the researchers.

The work was conducted in collaboration with Prof. Joachim Knoch from RWTH Aachen University and researchers from EPFL in Switzerland, TSMC and National Yang Ming Chiao Tung University (NYCU) in Taiwan, and the University of Tokyo. In the article, published in Nature Reviews Electrical Engineering, the authors outline how conventional CMOS technology can be adapted for cryogenic operation through intelligent design strategies.

Data centers already consume vast amounts of electricity, and their consumption is expected to double by 2030 due to the rising energy demands of artificial intelligence, according to the International Energy Agency (IEA). The computer chips that run around the clock produce large amounts of heat and require considerable energy for cooling. But what if we flipped the script? What if the key to energy efficiency lay not in managing heat, but in embracing the cold?
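The standard device-physics reason the cold helps, given here as a textbook relation rather than the Perspective's full analysis: a transistor's subthreshold swing scales with temperature, so colder chips can switch reliably at lower voltages.

```latex
% Ideal MOSFET subthreshold swing (textbook relation):
SS = \ln(10)\,\frac{k_B T}{q}
\approx 60\ \mathrm{mV/decade\ at\ }300\ \mathrm{K},
\qquad \approx 15\ \mathrm{mV/decade\ at\ }77\ \mathrm{K}
% A smaller (steeper) swing allows the supply voltage V_{DD} to be
% lowered, and dynamic power scales roughly as C\,V_{DD}^2\,f, so even
% modest voltage reductions translate into large energy savings.
```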