
Black hole mergers could give rise to observable gravitational-wave tails

Black holes, regions of spacetime in which gravity is so strong that nothing can escape, are intriguing and extensively studied cosmological phenomena. Einstein’s general theory of relativity predicts that when two black holes merge, they emit ripples in spacetime known as gravitational waves.

Once the gravitational waves originating from black hole mergers fade, subtle remnants of these waves could linger, known as late-time gravitational-wave tails. While the existence of these tails has long been predicted theoretically, it has not yet been conclusively confirmed.

Researchers at Niels Bohr Institute, University of Lisbon and other institutes worldwide recently performed black hole merger simulations based on Einstein’s equations, to further probe the existence of late-time gravitational-wave tails. Their simulations, outlined in a paper in Physical Review Letters, suggest that these tails not only exist, but could also have a larger amplitude than originally predicted and could thus be observed in future experiments.
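For context, a minimal sketch of what "tail" means here, assuming the textbook Price-law picture of perturbed black holes (illustrative background, not a result from the paper): after the exponentially damped ringdown, the field at a fixed location decays only as a power law in time, so the tail eventually dominates the signal.

```latex
% Schematic late-time behavior of a multipole-\ell perturbation of a black
% hole (classic Price-law picture; illustrative, not taken from the paper):
% exponentially damped quasinormal ringing followed by a power-law tail.
\psi_\ell(t) \sim
  \underbrace{A\, e^{-t/\tau}\cos(\omega t + \varphi)}_{\text{ringdown}}
  \;+\;
  \underbrace{B\, t^{-(2\ell+3)}}_{\text{tail}},
  \qquad t \to \infty
```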

Algorithms reveal how propane becomes propylene for everyday products

Countless everyday products, from plastic squeeze bottles to outdoor furniture, are made by first converting propane into propylene.
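For reference, the propane-to-propylene step is a dehydrogenation reaction, which strips one hydrogen molecule from each propane molecule:

```latex
% Propane dehydrogenation: propane -> propylene + hydrogen
\mathrm{C_3H_8} \;\longrightarrow\; \mathrm{C_3H_6} + \mathrm{H_2}
```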

A 2021 study in Science demonstrated that chemists could use tandem nanoscale catalysts to integrate multiple steps of the process into a single reaction, a way for companies to increase yield and save money. But it was unclear what was happening at the atomic level, making it difficult to apply the technique to other key industrial processes.

Researchers at the University of Rochester have developed algorithms that show the key atomic features driving the complex chemistry when the nanoscale catalysts turn propane into propylene.

Humans have ability to detect objects without touching them

Human touch has typically been understood to require direct contact: we detect objects only when they press against our skin.

However, recent findings in animals have challenged this view. Certain wading birds, such as sandpipers and plovers, are known to use a form of ‘remote touch’ to detect prey hidden beneath the sand using their beaks.

Remote touch allows the detection of objects buried under granular materials, such as sand or soil, through subtle mechanical signals transmitted through the material when pressure is applied nearby.

The new study, published in the proceedings of the IEEE International Conference on Development and Learning (ICDL), investigated whether humans share a similar capability to sense objects remotely.

The researchers asked 12 participants to move their fingers gently through sand to locate a hidden cube before physically touching it.

Remarkably, the results revealed an ability comparable to that seen in wading birds, despite humans lacking the specialised beak structures that enable this sense in birds.

By modelling the physical aspects of the remote touch phenomenon, the study found that human hands are remarkably sensitive: participants detected buried objects by perceiving the small displacements in the surrounding sand, with 70% precision within the expected detectable range.

First full simulation of 50-qubit universal quantum computer achieved

A research team at the Jülich Supercomputing Center, together with experts from NVIDIA, has set a new record in quantum simulation: for the first time, a universal quantum computer with 50 qubits has been fully simulated—a feat achieved on Europe’s first exascale supercomputer, JUPITER, inaugurated at Forschungszentrum Jülich in September.

The result surpasses the previous world record of 48 qubits, established by Jülich researchers in 2022 on Japan’s K computer. It showcases the immense computational power of JUPITER and opens new horizons for developing and testing quantum algorithms. The research is published on the arXiv preprint server.

Quantum computer simulations are vital for developing future quantum systems. They allow researchers to verify experimental results and test new algorithms long before powerful quantum machines become reality. Among these are the Variational Quantum Eigensolver (VQE), which can model molecules and materials, and the Quantum Approximate Optimization Algorithm (QAOA), used for optimization problems in logistics, finance, and artificial intelligence.
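To see why 50 qubits is a milestone for this kind of full ("statevector") simulation, note that memory doubles with every added qubit: a complete n-qubit state stores 2^n complex amplitudes. A minimal sketch in Python/NumPy (illustrative only; the assumed 16 bytes per amplitude corresponds to complex128, and the actual JUPITER code is a far more sophisticated distributed simulator):

```python
import numpy as np

def statevector_memory_gb(n_qubits: int, bytes_per_amp: int = 16) -> float:
    """Memory to hold a full n-qubit statevector (16 bytes = complex128)."""
    return (2 ** n_qubits) * bytes_per_amp / 1e9

def apply_hadamard(state: np.ndarray, target: int, n_qubits: int) -> np.ndarray:
    """Apply a Hadamard gate to one qubit of a full statevector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = state.reshape([2] * n_qubits)            # one axis per qubit
    state = np.tensordot(h, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)            # restore qubit order
    return state.reshape(-1)

n = 20                                               # easy on a laptop
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0                                       # start in |00...0>
state = apply_hadamard(state, target=0, n_qubits=n)

print(f"{n} qubits: {statevector_memory_gb(n):9.3f} GB")
print(f"50 qubits: {statevector_memory_gb(50) / 1e6:9.1f} PB")  # ~18 PB
```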

APLab: On average, a human being is capable of reading between 200 and 300 words per minute (wpm), while speed readers can achieve speeds of 400–700 wpm or higher.

This pales in comparison to ChatGPT, which can effectively read and analyze tens of thousands of words per second, since it processes text computationally rather than linearly.
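A rough back-of-the-envelope comparison using the figures above; the book length and the 20,000 words-per-second machine rate are assumptions (the latter standing in for "tens of thousands"):

```python
BOOK_WORDS = 90_000                    # assumed length of a long novel

rates_wpm = {
    "average reader":    250,          # midpoint of 200-300 wpm
    "speed reader":      550,          # midpoint of 400-700 wpm
    "machine (assumed)": 20_000 * 60,  # ~20k words/sec, expressed in wpm
}

for label, wpm in rates_wpm.items():
    minutes = BOOK_WORDS / wpm
    print(f"{label:>18}: {minutes:8.2f} minutes per book")
```

At these rates the book takes about six hours, under three hours, and a few seconds, respectively.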

What if there were an invention, or at least the concept of one, that could enhance a human being’s capacity to read as quickly as ChatGPT? I tasked ChatGPT with laying out a step-by-step process for creating that invention:

Here’s a concrete, neuroscience-grounded invention plan to push human reading toward “machine-speed”—while keeping comprehension and recall intact.

## 0) Core idea (one sentence)

Exploit the brain’s natural reading pipeline—VWFA → Wernicke (lexico-semantic) ↔ Broca (phonological sequencing) with eye-movement–driven coarse-to-fine vision—by timing text delivery to your saccade/fixation cycle, suppressing unnecessary subvocalization, and entraining semantic parsing rhythms. ([PMC][1])
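As a minimal software-only sketch of the "timed text delivery" part of that idea, here is a rapid serial visual presentation (RSVP) pacer that flashes one word at a time, dwelling longer on long or clause-final words. All timing constants are illustrative assumptions, not values from the plan:

```python
import re
import sys
import time

def rsvp(text: str, wpm: int = 600) -> None:
    """Flash one word at a time, pacing delivery to a target rate."""
    base = 60.0 / wpm                             # seconds per word
    for word in re.findall(r"\S+", text):
        dwell = base
        dwell += 0.005 * max(0, len(word) - 6)    # long words: extra time
        if word[-1] in ".,;:!?":                  # clause boundary: pause
            dwell += base * 0.5
        sys.stdout.write("\r\033[K" + word.center(24))
        sys.stdout.flush()
        time.sleep(dwell)
    print()

rsvp("Exploit the brain's natural reading pipeline by timing text "
     "delivery to the saccade and fixation cycle.", wpm=600)
```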

## 1) Hardware & sensing

New AI framework can uncover space physics equations in raw data

Artificial intelligence (AI) systems, particularly artificial neural networks, have proved to be highly promising tools for uncovering patterns in large amounts of data that would otherwise be difficult to detect. Over the past decade, AI tools have been applied in a wide range of settings and fields.

Among their many possible applications, AI systems could be used to discover physical relationships and the symbolic expressions (i.e., formulas) describing these relationships.

To uncover these formulas, physicists currently need to analyze experimental data extensively, so automating this process could be highly advantageous.
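To make the idea concrete, here is a minimal sketch of one generic approach to equation discovery: sparse regression over a library of candidate terms (SINDy-style). This is a standard technique used for illustration, not necessarily the framework the article describes. It recovers a made-up law dx/dt = -0.5x + 2sin(t) from simulated noisy measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "raw data": x(t) generated from dx/dt = -0.5*x + 2.0*sin(t)
t = np.linspace(0, 10, 500)
dt = t[1] - t[0]
x = np.zeros_like(t)
for i in range(len(t) - 1):                    # simple Euler integration
    x[i + 1] = x[i] + dt * (-0.5 * x[i] + 2.0 * np.sin(t[i]))
dxdt = np.gradient(x, dt) + rng.normal(0, 0.01, len(t))  # noisy derivative

# Library of candidate terms the "equation" may be built from
library = {"1": np.ones_like(t), "x": x, "x^2": x**2,
           "sin(t)": np.sin(t), "cos(t)": np.cos(t)}
Theta = np.column_stack(list(library.values()))

# Sequentially thresholded least squares: fit, zero small terms, refit
coefs, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
for _ in range(10):
    small = np.abs(coefs) < 0.1
    coefs[small] = 0.0
    big = ~small
    coefs[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)

terms = [f"{c:+.2f}*{name}" for name, c in zip(library, coefs) if c != 0]
print("dx/dt =", " ".join(terms))   # expect roughly -0.50*x +2.00*sin(t)
```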

Physicists Take the Imaginary Numbers Out of Quantum Mechanics

A century ago, the strange behavior of atoms and elementary particles led physicists to formulate a new theory of nature. That theory, quantum mechanics, found immediate success, proving its worth with accurate calculations of hydrogen’s emission and absorption of light. There was, however, a snag. The central equation of quantum mechanics featured the imaginary number i, the square root of −1.

Physicists knew i was a mathematical fiction. Real physical quantities like mass and momentum never yield a negative amount when squared. Yet this unreal number that behaves as i² = −1 seemed to sit at the heart of the quantum world.

After deriving the i-riddled equation — essentially the law of motion for quantum entities — Erwin Schrödinger expressed the hope that it would be replaced by an entirely real version. (“There is undoubtedly a certain crudeness at the moment” in the equation’s form, he wrote in 1926.) Schrödinger’s distaste notwithstanding, i stuck around, and new generations of physicists took up his equation without much concern.
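The equation in question is the time-dependent Schrödinger equation, in which the factor of i appears explicitly:

```latex
% Time-dependent Schrödinger equation; note the explicit factor of i
i\hbar \,\frac{\partial}{\partial t} \psi(x, t) \;=\; \hat{H}\, \psi(x, t)
```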

Introducing Nested Learning: A new ML paradigm for continual learning

The last decade has seen incredible progress in machine learning (ML), primarily driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning, the ability for a model to actively acquire new knowledge and skills over time without forgetting old ones.

When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity — the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to immediate context (like anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information that they learn during pre-training.

The simple approach, continually updating a model’s parameters with new data, often leads to “catastrophic forgetting” (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long, we have treated the model’s architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.
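A minimal sketch of catastrophic forgetting in exactly the naive setting described above: a single-parameter linear model trained with plain SGD on task A and then on task B loses task A. This is a toy illustration of the problem, not an implementation of Nested Learning:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(slope, n=200):
    """Toy regression task: y = slope * x + noise."""
    x = rng.uniform(-1, 1, n)
    return x, slope * x + rng.normal(0, 0.05, n)

def sgd(w, x, y, lr=0.1, epochs=50):
    """Plain SGD on squared error for the model y ≈ w * x."""
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            w -= lr * 2 * (w * xi - yi) * xi
    return w

def loss(w, x, y):
    return float(np.mean((w * x - y) ** 2))

xa, ya = make_task(slope=2.0)    # task A: y ≈ 2x
xb, yb = make_task(slope=-3.0)   # task B: y ≈ -3x

w = 0.0
w = sgd(w, xa, ya)
print(f"after task A: loss on A = {loss(w, xa, ya):.3f}")   # low

w = sgd(w, xb, yb)               # naive continual update on task B
print(f"after task B: loss on A = {loss(w, xa, ya):.3f}")   # much higher
print(f"              loss on B = {loss(w, xb, yb):.3f}")
```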

Coordinating health AI to prevent defensive escalation

Artificial intelligence (AI) systems that can analyse medical images, records, and claims are becoming accessible to everyone. Although these systems can outperform physicians at specific tasks, such as detecting cancer on CT scans, they are still imperfect. But as AI performance progresses from occasionally correct to reliably superior, there will be increasing pressure to conform to algorithmic outputs.

Physicists Just Ruled Out The Universe Being a Simulation

A question that has vexed physicists for the past century may finally have a solution – but perhaps not the one everyone was hoping for.

In a new, detailed breakdown of current theory, a team of physicists led by Mir Faizal of the University of British Columbia has shown that there is no universal “Theory of Everything” that neatly reconciles general relativity with quantum mechanics – at least, not an algorithmic one.

A natural consequence of this is that the Universe can’t be a simulation, since any such simulations would have to operate algorithmically.
