
This approach significantly enhances performance, as observed in Atari video games and several other tasks involving multiple potential outcomes for each decision.

“They basically asked what happens if rather than just learning average rewards for certain actions, the algorithm learns the whole distribution, and they found it improved performance significantly,” explained Professor Drugowitsch.
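The distinction Drugowitsch describes, learning the whole distribution of rewards rather than just their average, can be sketched in a few lines. The two-outcome reward below is illustrative only, not taken from the study:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical action with two outcomes: reward +10 with probability 0.1, else 0.
def pull():
    return 10.0 if random.random() < 0.1 else 0.0

# Classical approach: learn only the average reward (a running mean).
mean_estimate, n = 0.0, 0
# Distributional approach: learn the whole empirical distribution of rewards.
reward_counts = Counter()

for _ in range(10_000):
    r = pull()
    n += 1
    mean_estimate += (r - mean_estimate) / n   # incremental mean update
    reward_counts[r] += 1                      # tally each observed outcome

distribution = {r: c / n for r, c in reward_counts.items()}
print(f"mean estimate: {mean_estimate:.2f}")
print(f"learned distribution: {distribution}")
```

Both learners see the same data, but only the distributional one can tell a safe, steady payoff apart from a rare jackpot with the same average, which is the extra information that improved performance in the Atari experiments.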

In the latest study, Drugowitsch collaborated with Naoshige Uchida, a professor of molecular and cellular biology at Harvard University. The goal was to gain a better understanding of how the potential risks and rewards of a decision are weighed in the brain.

Molecular dynamics (MD) simulation serves as a crucial technique across disciplines including biology, chemistry, and materials science [1–4]. MD simulations are typically based on interatomic potential functions that characterize the potential energy surface of the system, with atomic forces derived as the negative gradients of the potential energies. Newton's laws of motion are then applied to simulate the dynamic trajectories of the atoms. In ab initio MD simulations [5], the energies and forces are accurately determined by solving the equations of quantum mechanics. However, the computational demands of ab initio MD limit its practicality in many scenarios. By learning from ab initio calculations, machine learning interatomic potentials (MLIPs) have been developed to achieve much more efficient MD simulations with ab initio-level accuracy [6–8].
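The loop described above, potential energy to forces (as negative gradients) to Newton's laws, is commonly implemented with the velocity Verlet integrator. The sketch below uses a Lennard-Jones pair potential as a stand-in for the potential energy surface; the parameters and two-atom setup are illustrative, not from any particular study:

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Forces from a Lennard-Jones pair potential, F = -dU/dr."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = (sigma**2 / r2) ** 3
            # -dU/dr / r, multiplied by the separation vector rij
            fij = 24 * eps * (2 * inv6**2 - inv6) / r2 * rij
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=100, mass=1.0):
    """Integrate Newton's equations of motion with velocity Verlet."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half-step velocity update
        pos += dt * vel              # full-step position update
        f = lj_forces(pos)           # forces at the new positions
        vel += 0.5 * dt * f / mass   # second half-step velocity update
    return pos, vel

# Two atoms placed slightly outside the LJ minimum (~1.12 sigma) oscillate about it.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel)
print(np.linalg.norm(pos[1] - pos[0]))
```

In an MLIP, the hand-written `lj_forces` is replaced by a neural network trained on ab initio energies and forces; the integration loop stays the same.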

Despite their successes, a crucial challenge in deploying MLIPs is the distribution shift between training and test data. When MLIPs are used for MD simulations, the data for inference are atomic structures continuously generated during the simulation based on the predicted forces, so the training set must encompass a wide range of atomic structures to guarantee accurate predictions. However, in fields such as phase transitions [9,10], catalysis [11,12], and crystal growth [13,14], the configurational space that needs to be explored is highly complex. This complexity makes it difficult to sample sufficient training data, and easy to obtain a potential that is not smooth enough to extrapolate to every relevant configuration. Consequently, a distribution shift between training and test data often occurs, degrading test performance, producing unrealistic atomic structures, and ultimately causing the MD simulation to collapse [15].
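One widely used guard against such distribution shift is query-by-committee: run an ensemble of independently trained MLIPs and flag structures where their force predictions disagree, since disagreement tends to grow far from the training data. The toy "models" below are stand-ins (a simple force law plus a per-model error that grows away from the training region), not real MLIPs; all names are illustrative:

```python
import numpy as np

def make_model(seed):
    """Stand-in for one trained MLIP: true force law plus a model-specific
    error term that grows away from the training region (around x = 1)."""
    bias = np.random.default_rng(seed).normal(0, 0.05)
    def predict_force(x):
        return -4.0 * (x - 1.0) + bias * np.exp(abs(x - 1.0))
    return predict_force

ensemble = [make_model(s) for s in range(5)]

def force_uncertainty(x):
    """Ensemble disagreement (std. dev. of predictions) at structure x."""
    preds = np.array([m(x) for m in ensemble])
    return preds.std()

in_dist = force_uncertainty(1.05)  # structure similar to the training data
shifted = force_uncertainty(3.0)   # extrapolated, out-of-distribution structure
print(in_dist, shifted)
```

When the uncertainty exceeds a threshold, the simulation can be halted and the flagged structures labeled with ab initio calculations and added to the training set, an active-learning loop often used to mitigate exactly the collapse described above.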

What if time is not as set in stone as it seems? Imagine that time could move forward or backward due to quantum-level processes rather than in a single direction. According to a recent study published in Scientific Reports, researchers at the University of Surrey have made the intriguing discovery that some quantum systems can produce competing arrows of time.


The arrow of time—the notion that time moves irrevocably from the past to the future—has baffled scholars for ages. The fundamental principles of physics do not favor one path over another, even though this appears to be evident in the reality humans experience. The equations are the same whether time goes forward or backward.

Not everyone is willing to passively accept the future AI companies are shaping.

In an aggressive response to AI companies like OpenAI, independent developers have created “tarpits” — malicious software designed to trap and confuse AI scrapers for months on end.

The goal? To make AI companies pay a higher price for their relentless data collection and, perhaps, to slow the rapid commercialization of AI-driven content generation.

Inspired by cybersecurity tactics originally used against spam, these digital snares lure AI crawlers into endless loops of fake data, slowing their operations and potentially corrupting their training models. One such tool, Nepenthes, forces scrapers into a maze of gibberish, while another, Iocaine, aims to poison AI models outright.
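The core trick of a link-maze tarpit can be illustrated with a short sketch (this is not the actual Nepenthes or Iocaine code; all names and details here are hypothetical). Every requested path deterministically yields a page of gibberish that links to five more unique paths, so a crawler that follows links never exhausts the site:

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "flux", "quantum", "nebula", "cipher", "ember"]

def make_page(path: str) -> str:
    """Deterministic gibberish page for a URL, linking to five deeper URLs."""
    rng = random.Random(path)                  # same path -> same page
    text = " ".join(rng.choices(WORDS, k=200))
    links = " ".join(
        f'<a href="{path.rstrip("/")}/{rng.randrange(10**9)}">more</a>'
        for _ in range(5)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = make_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To run the trap (real tarpits also throttle responses to waste crawler time):
#   HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Because the pages are generated on the fly, the maze costs the site almost nothing to serve, while a scraper that ingests them accumulates ever more meaningless training text.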

While critics argue that these efforts may have limited long-term impact—since AI companies are developing countermeasures—supporters see tarpits as a symbolic act of resistance against AI’s unchecked expansion.

With growing concerns over AI scraping depleting valuable online content and replacing human-created work with algorithm-generated material, these digital weapons offer a way for website owners to fight back.


Researchers, including those from the University of Tokyo, developed Deep Nanometry, an analytical technique combining advanced optical equipment with a noise removal algorithm based on unsupervised deep learning.

Deep Nanometry can analyze nanoparticles in medical samples at high speed, making it possible to accurately detect even trace amounts of rare particles. This has proven its potential for detecting early signs of colon cancer, and it is hoped that it can be applied to other medical and industrial fields.

The body is full of particles smaller than cells. These include extracellular vesicles (EVs), which can be useful in early disease detection and also in drug delivery.

In a recent study, researchers developed a portable digital holographic camera system that can obtain full-color digital holograms of objects illuminated with spatially and temporally incoherent light in a single exposure. They employed a deep-learning-based denoising algorithm to suppress random noise in the image-reconstruction procedure, and succeeded in video-rate full-color digital holographic motion-picture imaging using a white LED.

The camera they developed is palm-sized, weighs less than 1 kg, operates on a tabletop, does not require antivibration structures, and obtains incoherent motion-picture holograms under close-up recording conditions.

The research is published in the journal Advanced Devices & Instrumentation.

“Just like tuning forks of different material will have different pure tones, remnants described by different equations of state will ring down at different frequencies,” Rezzolla said in a statement. “The detection of this signal thus has the potential to reveal what neutron stars are made of.”

Gravitational waves were first suggested by Albert Einstein in his 1915 theory of gravity, known as general relativity.

Researchers have developed a new AI algorithm, Torque Clustering, which more closely mimics natural intelligence than existing methods. This advanced approach enhances AI’s ability to learn and identify patterns in data independently, without human intervention.

Torque Clustering is designed to efficiently analyze large datasets across various fields, including biology, chemistry, astronomy, psychology, finance, and medicine. By uncovering hidden patterns, it can provide valuable insights, such as detecting disease trends, identifying fraudulent activities, and understanding human behavior.

To test this new system, the team executed what is known as Grover's search algorithm, first described by Indian-American computer scientist Lov Grover in 1996. The algorithm searches for a particular item in a large, unstructured dataset, using superposition and entanglement to examine entries in parallel. It exhibits a quadratic speedup: the number of steps a quantum computer needs grows with the square root of the input size, rather than linearly. The authors report that the system achieved a 71 percent success rate.
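That quadratic speedup can be seen in a small statevector simulation: repeating the oracle-plus-diffusion step roughly ⌊(π/4)√N⌋ times concentrates the amplitude on the marked item. The minimal NumPy sketch below is illustrative and has nothing to do with the distributed hardware used in the study:

```python
import numpy as np

def grover(n_qubits: int, marked: int) -> np.ndarray:
    """Simulate Grover's search over N = 2**n_qubits unstructured entries."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))                  # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~sqrt(N) steps, not N
    for _ in range(iterations):
        state[marked] *= -1               # oracle: flip the marked amplitude
        state = 2 * state.mean() - state  # diffusion: invert about the mean
    return state

# Search 16 entries (4 qubits) for item 5: only 3 iterations are needed.
probs = np.abs(grover(4, marked=5)) ** 2
print(f"P(find marked item) = {probs[5]:.3f}")
```

For 16 entries, three iterations already push the success probability above 95 percent, whereas a classical search would check 8 entries on average.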

While operating a successful distributed system is a big step forward for quantum computing, the team reiterates that the engineering challenges remain daunting. However, networking together quantum processors into a distributed network using quantum teleportation provides a small glimmer of light at the end of a long, dark quantum computing development tunnel.

“Scaling up quantum computers remains a formidable technical challenge that will likely require new physics insights as well as intensive engineering effort over the coming years,” David Lucas, principal investigator of the study from Oxford University, said in a press statement. “Our experiment demonstrates that network-distributed quantum information processing is feasible with current technology.”