
In a paper published in Nature Photonics, researchers from the University of Oxford, along with collaborators from the Universities of Muenster, Heidelberg, and Exeter, report on their development of integrated photonic-electronic hardware capable of processing three-dimensional (3D) data, substantially boosting data processing parallelism for AI tasks.

The processing efficiency of conventional computer chips doubles roughly every 18 months, but the computing power required by modern AI tasks is currently doubling around every 3.5 months. This means that new computing paradigms are urgently needed to cope with the rising demand.
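
For a rough sense of scale, the gap between the two growth rates compounds quickly. The back-of-the-envelope comparison below uses the doubling times quoted above; the three-year horizon is arbitrary and not a figure from the paper:

```python
# Rough comparison of chip-efficiency growth vs. AI compute demand over 3 years.
# Doubling times come from the text above; the horizon is arbitrary.
months = 36
chip_doubling = 18      # months per doubling of processing efficiency
demand_doubling = 3.5   # months per doubling of AI compute demand

chip_growth = 2 ** (months / chip_doubling)       # ~4x
demand_growth = 2 ** (months / demand_doubling)   # ~1,200x

print(f"Chip efficiency after {months} months: ~{chip_growth:.0f}x")
print(f"AI compute demand after {months} months: ~{demand_growth:.0f}x")
print(f"Shortfall to be covered by new paradigms: ~{demand_growth / chip_growth:.0f}x")
```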

One approach is to use light instead of electronics: this allows multiple calculations to be carried out in parallel, using different wavelengths to represent different sets of data. Indeed, in groundbreaking work published in the journal Nature in 2021, many of the same authors demonstrated a form of integrated photonic processing chip that could carry out matrix-vector multiplication (a crucial task for AI and machine learning applications) at speeds far outpacing the fastest electronic approaches. This work led to the founding of the photonic AI company Salience Labs, a spin-out from the University of Oxford.
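
The kind of parallelism involved can be pictured with an ordinary numerical analogy: the same weight matrix is applied simultaneously to several input vectors, one per wavelength. The NumPy sketch below is purely illustrative of that idea and does not model the photonic hardware itself; the matrix sizes and wavelength count are arbitrary.

```python
import numpy as np

# One weight matrix, shared across all wavelengths (e.g., one neural-network layer).
weights = np.random.randn(64, 128)

# Each "wavelength" carries its own input vector; stacking them means a single
# pass computes many matrix-vector products in parallel.
n_wavelengths = 16
inputs = np.random.randn(n_wavelengths, 128)

outputs = inputs @ weights.T   # shape (16, 64): one result vector per wavelength

# Same answer as doing the matrix-vector products one at a time.
assert np.allclose(outputs[0], weights @ inputs[0])
```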

Scanning tunnelling microscopy images of simple glycoconjugates and glycosaminoglycans and their corresponding structures.

Scanning tunnelling microscopy has enabled researchers to directly image important sugar molecules attached to lipids and proteins. The experiments provide a picture at the single-molecule level of the sequences and locations of glycans bound to important biomolecules, offering new insight into the role they play in biology.

Summary: A new study delves into the enigmatic realm of deep neural networks, discovering that while these models can identify objects much as human sensory systems do, their recognition strategies diverge from human perception. When prompted to generate stimuli similar to a given input, the networks often produced unrecognizable or distorted images and sounds.

This indicates that neural networks develop their own distinct “invariances”, which differ starkly from human perceptual patterns. The research offers insights into how to evaluate models that aim to mimic human sensory perception.
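
The generation procedure alluded to above is commonly implemented by optimising a fresh input until the network's internal responses match those produced by a reference stimulus; whatever the network treats as equivalent then emerges, whether or not a human would recognise it. The PyTorch sketch below illustrates that general recipe only; the study's actual models, layers, and objectives are not specified here, and the toy network is a placeholder.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a trained vision or audio network.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 128),
)

def generate_matched_stimulus(model, reference, steps=500, lr=0.05):
    """Optimise a random input until the model's responses match those to `reference`.

    The result is equivalent as far as the model is concerned, but need not look
    like anything a person would recognise, which is the divergence reported.
    """
    with torch.no_grad():
        target = model(reference)
    stimulus = torch.randn_like(reference, requires_grad=True)
    optimizer = torch.optim.Adam([stimulus], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(stimulus), target)
        loss.backward()
        optimizer.step()
    return stimulus.detach()

reference = torch.rand(1, 1, 32, 32)   # stand-in for a natural image
matched = generate_matched_stimulus(model, reference)
```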

We construct a metrology experiment in which the metrologist can sometimes amend the input state by simulating a closed timelike curve, a worldline that travels backward in time. The existence of closed timelike curves is hypothetical. Nevertheless, they can be simulated probabilistically by quantum-teleportation circuits. We leverage such simulations to pinpoint a counterintuitive nonclassical advantage achievable with entanglement. Our experiment echoes a common information-processing task: A metrologist must prepare probes to input into an unknown quantum interaction. The goal is to infer as much information per probe as possible. If the input is optimal, the information gained per probe can exceed any value achievable classically. The problem is that only after the interaction does the metrologist learn which input would have been optimal.

We live in an era of data deluge. The data centers that store and process this flood of data consume a great deal of electricity and have been called a major contributor to environmental pollution. To overcome this, multi-level computing systems with lower power consumption and higher computation speed are being researched, but they cannot handle the huge demand for data processing because, like conventional binary computing systems, they operate with electrical signals.

Dr. Do Kyung Hwang of the Center for Opto-Electronic Materials & Devices at the Korea Institute of Science and Technology (KIST) and Professor Jong-Soo Lee of the Department of Energy Science & Engineering at Daegu Gyeongbuk Institute of Science and Technology (DGIST) have jointly developed a new two-dimensional/zero-dimensional (2D-0D) semiconductor artificial junction material and observed a next-generation, light-powered memory effect in it.

Transmitting data between the computing and storage parts of a multi-level computer using light rather than electrical signals can dramatically increase processing speed.
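
The attraction of multi-level operation over plain binary is that each memory or signalling element can hold more than one bit: with L distinguishable levels it carries log2(L) bits. A small illustrative calculation follows; the level counts are examples, not the specifications of the reported device.

```python
import math

# Bits per element for binary vs. multi-level storage or signalling.
# Level counts below are illustrative only.
for levels in (2, 4, 8, 16):
    print(f"{levels:>2} distinguishable levels -> {math.log2(levels):.0f} bits per element")

# A 16-level element therefore holds as much as four binary cells, which is the
# density advantage multi-level optoelectronic memories aim to exploit.
```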

One of the biggest challenges for earthquake early warning (EEW) systems is the lack of seismic stations offshore of heavily populated coastlines, where some of the world’s most seismically active regions lie.

In a new study published in The Seismic Record, researchers show how unused telecommunications fiber can be transformed for offshore EEW.

Jiuxun Yin, a Caltech researcher now at SLB, and colleagues used 50 kilometers of a submarine telecom cable running between the United States and Chile, sampling at 8,960 channels along the cable for four days. The technique, called Distributed Acoustic Sensing or DAS, uses the tiny internal flaws in a long optical fiber as thousands of seismic sensors.
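
For a sense of scale, 8,960 channels along 50 kilometers of fiber corresponds to a sensing point roughly every 5 to 6 metres, and each channel can then be fed into a conventional trigger such as a short-term-average / long-term-average (STA/LTA) detector. The sketch below is a generic illustration of that kind of single-channel processing, not the authors' pipeline; the sampling rate, window lengths, and synthetic signal are assumed values.

```python
import numpy as np

cable_km, n_channels = 50.0, 8960
print(f"Channel spacing: ~{cable_km * 1000 / n_channels:.1f} m")   # ~5.6 m

def sta_lta(trace, fs, sta_s=1.0, lta_s=10.0):
    """Classic short-term / long-term average ratio used to flag seismic arrivals."""
    power = trace.astype(float) ** 2
    sta = np.convolve(power, np.ones(int(sta_s * fs)) / int(sta_s * fs), mode="same")
    lta = np.convolve(power, np.ones(int(lta_s * fs)) / int(lta_s * fs), mode="same")
    return sta / np.maximum(lta, 1e-12)

# Synthetic strain-rate record for one DAS channel: background noise plus an arrival.
fs = 100.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
trace = np.random.randn(t.size) * 0.1
trace[3000:3200] += np.sin(2 * np.pi * 5 * t[:200])   # arrival near t = 30 s

ratio = sta_lta(trace, fs)
print(f"Trigger declared at ~{np.argmax(ratio > 5.0) / fs:.1f} s")
```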

Half a century after its foundation, the neutral theory of molecular evolution continues to attract controversy. The debate has been hampered by the coexistence of different interpretations of the core proposition of the neutral theory, the ‘neutral mutation–random drift’ hypothesis. In this review, we trace the origins of these ambiguities and suggest potential solutions. We highlight the difference between the original, the revised and the nearly neutral hypothesis, and re-emphasise that none of them equates to the null hypothesis of strict neutrality. We distinguish the neutral hypothesis of protein evolution, the main focus of the ongoing debate, from the neutral hypotheses of genomic and functional DNA evolution, which for many species are generally accepted. We advocate a further distinction between a narrow and an extended neutral hypothesis (of which the latter posits that random non-conservative amino acid substitutions can cause non-ecological phenotypic divergence), and we discuss the implications for evolutionary biology beyond the domain of molecular evolution. We furthermore point out that the debate has widened from its initial focus on point mutations, and also concerns the fitness effects of large-scale mutations, which can alter the dosage of genes and regulatory sequences. We evaluate the validity of neutralist and selectionist arguments and find that the tested predictions, apart from being sensitive to violation of underlying assumptions, are often derived from the null hypothesis of strict neutrality, or equally consistent with the opposing selectionist hypothesis, except when assuming molecular panselectionism. Our review aims to facilitate a constructive neutralist–selectionist debate, and thereby to contribute to answering a key question of evolutionary biology: what proportions of amino acid and nucleotide substitutions and polymorphisms are adaptive?
