
We construct a metrology experiment in which the metrologist can sometimes amend the input state by simulating a closed timelike curve, a worldline that travels backward in time. The existence of closed timelike curves is hypothetical. Nevertheless, they can be simulated probabilistically by quantum-teleportation circuits. We leverage such simulations to pinpoint a counterintuitive nonclassical advantage achievable with entanglement. Our experiment echoes a common information-processing task: a metrologist must prepare probes to input into an unknown quantum interaction. The goal is to infer as much information per probe as possible. If the input is optimal, the information gained per probe can exceed any value achievable classically. The problem is that only after the interaction does the metrologist learn which input would have been optimal.
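The backward-in-time step in such a simulation reduces to postselected quantum teleportation: if a Bell measurement on the probe and one half of an entangled pair happens to return the |Φ+⟩ outcome, the other half carries the probe state with no correction needed, as though the state had been sent along the entanglement "into the past". The short sketch below is illustrative only, not code from the study; the qubit labelling and the 1/4 success probability are standard teleportation facts rather than experimental details.

```python
import numpy as np

# Minimal sketch of probabilistic CTC simulation via postselected teleportation.
# Qubit 0: the state |psi> to be "sent back". Qubits 1 and 2: a Bell pair.
# Postselecting the Bell measurement of qubits (0, 1) on the Phi+ outcome
# leaves qubit 2 in |psi> with no correction, with success probability 1/4.

rng = np.random.default_rng(0)

# Random single-qubit state |psi>.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Bell pair |Phi+> = (|00> + |11>)/sqrt(2) on qubits 1 and 2.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Full three-qubit state |psi>_0 (x) |Phi+>_{12}.
state = np.kron(psi, phi_plus)

# Projector onto |Phi+> for qubits (0, 1), identity on qubit 2.
proj = np.kron(np.outer(phi_plus, phi_plus.conj()), np.eye(2))

post = proj @ state
p_success = np.vdot(post, post).real          # probability of the Phi+ outcome
post /= np.sqrt(p_success)                    # renormalise the postselected state

# Reduced state of qubit 2: trace out qubits 0 and 1.
rho = np.outer(post, post.conj()).reshape(4, 2, 4, 2)
rho_q2 = np.trace(rho, axis1=0, axis2=2)

fidelity = np.vdot(psi, rho_q2 @ psi).real
print(f"success probability = {p_success:.3f}")   # -> 0.250
print(f"fidelity with |psi> = {fidelity:.3f}")    # -> 1.000
```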

We live in an era of data deluge. The data centers that store and process this flood of data consume vast amounts of electricity and have been called a major contributor to environmental pollution. To address this, multi-level computing systems with lower power consumption and higher computation speed are being researched, but they still cannot handle the huge demand for data processing because, like conventional binary computing systems, they operate with electrical signals.

Dr. Do Kyung Hwang of the Center for Opto-Electronic Materials & Devices at the Korea Institute of Science and Technology (KIST) and Professor Jong-Soo Lee of the Department of Energy Science & Engineering at Daegu Gyeongbuk Institute of Science and Technology (DGIST) have jointly developed a new two-dimensional/zero-dimensional (2D-0D) semiconductor artificial-junction material and observed a next-generation memory effect powered by light.

Transmitting data between the computing and storage parts of a multi-level computer using light rather than electrical signals can dramatically increase processing speed.

One of the biggest challenges for earthquake early warning (EEW) systems is the lack of seismic stations offshore of heavily populated coastlines, where some of the world’s most seismically active regions lie.

In a new study published in The Seismic Record, researchers show how unused telecommunications fiber can be transformed for offshore EEW.

Jiuxun Yin, a Caltech researcher now at SLB, and colleagues used 50 kilometers of a submarine telecom cable running between the United States and Chile, sampling at 8,960 channels along the cable for four days. The technique, called Distributed Acoustic Sensing or DAS, uses the tiny internal flaws in a long optical fiber as thousands of seismic sensors.
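The study's actual processing chain is more involved; purely as an illustration of how a DAS record might be scanned for an earthquake, the sketch below treats the data as a channels-by-samples strain-rate array and runs a classic short-term/long-term average (STA/LTA) trigger on each channel, declaring a candidate event only when many channels trigger together. The function names, thresholds, and window lengths here are assumptions for the example, not parameters from the paper.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Trailing short-term/long-term average ratio for one DAS channel."""
    sta_n = max(1, int(sta_win * fs))
    lta_n = max(1, int(lta_win * fs))
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    idx = np.arange(len(energy))
    sta_start = np.maximum(idx + 1 - sta_n, 0)
    lta_start = np.maximum(idx + 1 - lta_n, 0)
    sta = (csum[idx + 1] - csum[sta_start]) / (idx + 1 - sta_start)
    lta = (csum[idx + 1] - csum[lta_start]) / (idx + 1 - lta_start)
    return sta / np.maximum(lta, 1e-20)

def detect_event(das, fs, threshold=5.0, min_channels=50):
    """
    das: 2-D strain-rate array of shape (n_channels, n_samples), i.e. the
    thousands of sensing channels sampled along the fiber.
    Returns a rough event time (s) if enough channels trigger coherently,
    otherwise None.
    """
    trigger_times = []
    for trace in das:
        ratio = sta_lta(trace, fs)
        hits = np.flatnonzero(ratio > threshold)
        if hits.size:
            trigger_times.append(hits[0] / fs)   # first threshold crossing
    if len(trigger_times) >= min_channels:
        return float(np.median(trigger_times))   # robust time across the array
    return None
```

In practice one would band-pass filter the traces first and use the move-out of trigger times along the cable to estimate where the shaking originates; this sketch covers only the triggering step.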

Half a century after its foundation, the neutral theory of molecular evolution continues to attract controversy. The debate has been hampered by the coexistence of different interpretations of the core proposition of the neutral theory, the ‘neutral mutation–random drift’ hypothesis. In this review, we trace the origins of these ambiguities and suggest potential solutions. We highlight the difference between the original, the revised and the nearly neutral hypothesis, and re-emphasise that none of them equates to the null hypothesis of strict neutrality. We distinguish the neutral hypothesis of protein evolution, the main focus of the ongoing debate, from the neutral hypotheses of genomic and functional DNA evolution, which for many species are generally accepted. We advocate a further distinction between a narrow and an extended neutral hypothesis (of which the latter posits that random non-conservative amino acid substitutions can cause non-ecological phenotypic divergence), and we discuss the implications for evolutionary biology beyond the domain of molecular evolution. We furthermore point out that the debate has widened from its initial focus on point mutations, and also concerns the fitness effects of large-scale mutations, which can alter the dosage of genes and regulatory sequences. We evaluate the validity of neutralist and selectionist arguments and find that the tested predictions, apart from being sensitive to violation of underlying assumptions, are often derived from the null hypothesis of strict neutrality, or equally consistent with the opposing selectionist hypothesis, except when assuming molecular panselectionism. Our review aims to facilitate a constructive neutralist–selectionist debate, and thereby to contribute to answering a key question of evolutionary biology: what proportions of amino acid and nucleotide substitutions and polymorphisms are adaptive?

The production mechanism of repeating fast radio bursts (FRBs) is still a mystery, and correlations between burst occurrence times and energies may provide important clues to elucidate it. While time correlation studies of FRBs have mainly been performed using wait-time distributions, here we report the results of a correlation function analysis of repeating FRBs in the 2D space of time and energy. We analyse nearly 7,000 bursts reported in the literature for the three most active sources, FRBs 20121102A, 20201124A, and 20220912A, and find the following characteristics that are universal across the three sources. A clear power-law signal of the correlation function is seen, extending down to the typical burst duration (∼10 ms) at short time intervals (Δt). The correlation function indicates that every single burst has about a 10–60 per cent chance of producing an aftershock, at a rate decaying as a power law ∝ (Δt)^−p with p = 1.5–2.5, like the Omori–Utsu law of earthquakes. The correlated aftershock rate is stable regardless of changes in source activity, and there is no correlation between emitted energy and Δt. By applying the same analysis method to data on earthquakes and solar flares, we demonstrate that all these properties are quantitatively common to earthquakes but differ from solar flares in many respects. These results suggest that repeating FRBs are a phenomenon in which energy stored in rigid neutron star crusts is released by seismic activity. This may provide a new opportunity for future studies to explore the physical properties of the neutron star crust.
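As a rough illustration of the kind of time-correlation estimate described above (not the authors' code; the binning, normalisation, and toy data are assumptions), one can stack the rate of subsequent bursts as a function of lag Δt after every burst and compare it with a power law (Δt)^−p:

```python
import numpy as np

def aftershock_rate(times, lag_bins):
    """
    times:    1-D array of burst arrival times (seconds) from one source.
    lag_bins: bin edges in lag dt (seconds), e.g. logarithmically spaced.
    Returns the mean rate of later bursts per earlier burst in each lag bin
    (bursts per second), i.e. a stacked two-point rate in time.
    """
    times = np.sort(np.asarray(times, dtype=float))
    counts = np.zeros(len(lag_bins) - 1)
    for t in times:
        lags = times[times > t] - t              # lags to all subsequent bursts
        counts += np.histogram(lags, bins=lag_bins)[0]
    return counts / (len(times) * np.diff(lag_bins))

# Illustrative use with made-up burst times (one hour of Poisson arrivals).
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 3600.0, size=500))
bins = np.logspace(-3, 3, 25)                    # lags from 1 ms to 1000 s
rate = aftershock_rate(times, bins)

# Fit a power law rate ~ (dt)^-p over the populated bins; an Omori-Utsu-like
# signal would show up as p well above zero at short lags.
centres = np.sqrt(bins[:-1] * bins[1:])
mask = rate > 0
p_hat = -np.polyfit(np.log(centres[mask]), np.log(rate[mask]), 1)[0]
print(f"fitted power-law index p = {p_hat:.2f}")
```

For the unclustered Poisson toy times generated here the fitted index should come out near zero, whereas the repeaters analysed in the study show a decaying aftershock rate with p ≈ 1.5–2.5.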

The most comprehensive view of the history of the universe ever created has been produced by researchers at The Australian National University (ANU). The study also offers new ideas about how our universe may have started.

Lead author Honorary Associate Professor Charley Lineweaver from ANU said he set out wanting to understand where all the objects in the universe came from.

“When the universe began 13.8 billion years ago in a hot big bang, there were no objects like protons, atoms, people, planets, stars or galaxies. Now the universe is full of such objects,” he said.

An international team of scientists is rethinking the basic principles of radiation physics with the aim of creating super-bright light sources. In a new study published in Nature Photonics, researchers from the Instituto Superior Técnico (IST) in Portugal, the University of Rochester, the University of California, Los Angeles, and Laboratoire d’Optique Appliquée in France proposed ways to use quasiparticles to create light sources as powerful as the most advanced ones in existence today, but much smaller.

Quasiparticles are formed by many particles moving in sync. They can travel at any speed, even faster than light, and withstand intense forces, like those near a black hole.

“The most fascinating aspect of quasiparticles is their ability to move in ways that would be disallowed by the laws of physics governing individual particles,” says John Palastro, a senior scientist at the Laboratory for Laser Energetics, an assistant professor in the Department of Mechanical Engineering, and an associate professor at the Institute of Optics.