Black hole and Big Bang singularities break our best theory of gravity. A trilogy of theorems hints that physicists must go to the ends of space and time to find a fix.
In his stories, Han Song explores the disorientation accompanying China’s modernization, sometimes writing of unthinkable things that later came true.
A new physics paper takes a step toward creating a long-sought “theory of everything” by uniting gravity with the quantum world. However, the new theory remains far from being proven observationally.
“The new model can account for both structure formation and stability, and the key observational properties of the expansion of the universe at large, by enlisting density singularities in time that uniformly affect all space to replace conventional dark matter and dark energy,” research author Richard Lieu, a physics professor at The University of Alabama in Huntsville, said in a statement.
The dark universe poses such a huge conundrum for scientists because it suggests that only 5% of the matter and energy in the cosmos comprises what we see around us on a day-to-day basis in stars, planets, moons, our bodies — and everything else, really.
In other words, we have no idea what the other 95% of the cosmos is.
The researcher added that with better data on the horizon, including the first public galaxy-clustering data from DESI released last week, the team will re-apply their methods, compare the results with their current findings, and test for any statistically significant differences.
“I think there are more questions than answers at this point,” Chen said. “This research certainly enforces the idea that different cosmological datasets are beginning to be in tension when interpreted within the standard ΛCDM model of cosmology.”
Almost every galaxy hosts a supermassive black hole at its center. When galaxies merge, the two black holes spiral in closer to each other and eventually merge through gravitational-wave emission. Within a few billion years, this process will be featured close to home as our own Milky Way collides with its nearest massive neighbor, the Andromeda galaxy.
If the two black holes have different masses, the emission of gravitational waves is asymmetric, causing the merger product to recoil. The intense burst of gravitational waves in a preferred direction during the final plunge of the two black holes towards each other kicks the remnant black hole in the opposite direction through the rocket effect. The end result is that gravitational waves propel the black hole remnant to speeds of up to a few percent of the speed of light. The recoiling black hole behaves like the payload of a rocket powered by gravitational waves.
In 2007, I published a single-authored paper in the prestigious journal Physical Review Letters, suggesting that a gravitational-wave recoil could displace a black hole from the galactic center and endow it with fast motion relative to the background stars. If the kick is modest, dynamical friction on the background gas or stars would eventually return the black hole to the center.
Mathematics is like nothing else. The truths of math seem to be unrelated to anything else—independent of human beings, independent of the universe. The sum of 2 + 3 = 5 cannot not be true; this means that 2 + 3 = 5 would be true even if there were never any human beings, even if there were never a universe! What then, deeply, is mathematics?
Mark Balaguer is Professor of Philosophy at California State University, Los Angeles. His major book is Platonism and Anti-Platonism in Mathematics.
How can seasons on Saturn’s largest moon, Titan, influence its atmosphere? This is what a recent study published in The Planetary Science Journal hopes to address.
Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive (that is, rewarding) outcome. The study of how organisms learn from experience to correctly anticipate rewards has been a productive research field for well over a century, since Ivan Pavlov’s seminal psychological work. In his most famous experiment, dogs were trained to expect food some time after a buzzer sounded. These dogs began salivating as soon as they heard the sound, before the food had arrived, indicating they’d learned to predict the reward. In the original experiment, Pavlov estimated the dogs’ anticipation by measuring the volume of saliva they produced. But in recent decades, scientists have begun to decipher the inner workings of how the brain learns these expectations. Meanwhile, in close contact with this study of reward learning in animals, computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.
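The prediction-error logic that Pavlov's dogs illustrate can be sketched with a Rescorla-Wagner-style update, the standard textbook model of Pavlovian reward learning. This is an illustrative sketch, not code from the study; the function name and learning rate are assumptions.

```python
# Minimal sketch of a Rescorla-Wagner-style prediction-error update:
# after each trial, nudge the reward expectation toward the outcome
# in proportion to the surprise (the prediction error).

def learn_expectation(rewards, alpha=0.2):
    """Update a scalar reward expectation after each trial and
    return the expectation's trajectory across trials."""
    expectation = 0.0
    history = []
    for r in rewards:
        error = r - expectation       # prediction error ("surprise")
        expectation += alpha * error  # move prediction toward outcome
        history.append(expectation)
    return history

# With a reliably delivered reward of 1.0, the expectation climbs
# toward 1.0 — the model's analogue of anticipatory salivation.
trace = learn_expectation([1.0] * 30)
```

As the expectation approaches the true reward, the error shrinks and learning slows, which matches the intuition that well-predicted rewards teach us little.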
Our new work, published in Nature, finds that a recent development in computer science – one that yields significant improvements in performance on reinforcement learning problems – may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain. It also opens up new avenues of research into the brain's dopamine system, with potential implications for learning and motivation disorders.
Reinforcement learning is one of the oldest and most powerful ideas linking neuroscience and AI. In the late 1980s, computer science researchers were trying to develop algorithms that could learn how to perform complex behaviours on their own, using only rewards and punishments as a teaching signal. These rewards would serve to reinforce whatever behaviours led to their acquisition. To solve a given problem, it’s necessary to understand how current actions result in future rewards. For example, a student might learn by reinforcement that studying for an exam leads to better scores on tests. In order to predict the total future reward that will result from an action, it’s often necessary to reason many steps into the future.
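The multi-step reasoning described above is exactly what temporal-difference (TD) learning handles: each state's value estimate is updated toward the immediate reward plus the discounted value of the next state, so reward information propagates backward through time. The sketch below is a generic tabular TD example on a toy chain environment of my own construction, not the implementation from the paper.

```python
# Minimal sketch of tabular temporal-difference (TD) value learning.
# Toy environment: a chain of states 0..n-1, moving right one step at
# a time; a reward of 1.0 arrives only on entering the final state.

def td_learn(n_states=5, episodes=500, alpha=0.1, gamma=0.9):
    """Learn a value estimate for each state of the chain."""
    values = [0.0] * n_states
    for _ in range(episodes):
        for s in range(n_states - 1):
            s_next = s + 1
            reward = 1.0 if s_next == n_states - 1 else 0.0
            # TD error: (reward + discounted next value) - current value
            td_error = reward + gamma * values[s_next] - values[s]
            values[s] += alpha * td_error
    return values

vals = td_learn()
# States nearer the reward end up with higher values: the distant
# reward has been propagated backward, discounted by gamma each step.
```

The discount factor `gamma` is what lets the algorithm "reason many steps into the future": a state's value reflects not just its immediate reward but the whole discounted stream of rewards reachable from it.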