The Hidden Material Breakthrough That Could Supercharge AI and Save Energy

Researchers have discovered that incipient ferroelectricity can revolutionize computer memory, enabling ultra-low power devices.

These unique transistors shift behavior based on temperature, making them suitable for both traditional memory and neuromorphic computing, which mimics the brain’s energy efficiency. The use of strontium titanate thin films reveals unexpected ferroelectric-like properties, hinting at new possibilities in advanced electronics.

Harnessing gravity to create a low-cost microfluidic device for rapid cell analysis

A team of researchers at the George R. Brown School of Engineering and Computing at Rice University has developed an innovative artificial intelligence (AI)-enabled, low-cost device that will make flow cytometry—a technique used to analyze cells or particles in a fluid using a laser beam—affordable and accessible.

The prototype identifies and counts cells from unpurified blood samples with accuracy comparable to that of conventional flow cytometers, which are bulkier and more expensive, and it delivers results within minutes. Because the device is significantly cheaper and more compact, it is highly attractive for point-of-care clinical applications, particularly in low-resource and rural areas.

Peter Lillehoj, the Leonard and Mary Elizabeth Shankle Associate Professor of Bioengineering, and Kevin McHugh, assistant professor of bioengineering and chemistry, led the development of this new device. The study was published in Microsystems & Nanoengineering.
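
As a rough illustration of what the underlying technique involves computationally, the sketch below counts pulse-like "cell events" in a simulated photodetector trace, the kind of signal a flow cytometer's laser and detector produce. The simulated signal, threshold, and peak-detection parameters are hypothetical and do not represent the Rice team's algorithm.

```python
# Illustrative sketch only: counting cell "events" as pulses in a photodetector
# trace, one conceptual step in flow cytometry. The signal and parameters below
# are hypothetical and are not the Rice device's processing pipeline.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# Simulated detector trace: baseline noise plus a few Gaussian-shaped pulses,
# each pulse standing in for a cell crossing the laser interrogation point.
t = np.linspace(0, 1.0, 10_000)          # 1 second sampled at 10 kHz (hypothetical)
trace = 0.02 * rng.standard_normal(t.size)
for center in (0.15, 0.42, 0.43, 0.77):  # pulse arrival times in seconds
    trace += 0.8 * np.exp(-((t - center) ** 2) / (2 * 0.001 ** 2))

# Count pulses that clear a noise threshold and are separated by a short dead time.
peaks, _ = find_peaks(trace, height=0.3, distance=20)
print(f"Detected {peaks.size} cell-like events")  # 4 events for these pulses
```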

Deep learning model boosts plasma predictions in nuclear fusion by 1,000 times

A research team, led by Professor Jimin Lee and Professor Eisung Yoon in the Department of Nuclear Engineering at UNIST, has unveiled a deep learning–based approach that significantly accelerates the computation of a nonlinear Fokker–Planck–Landau (FPL) collision operator for fusion plasma.

The findings are published in the Journal of Computational Physics.

Nuclear fusion reactors, often referred to as artificial suns, rely on maintaining a high-temperature plasma environment similar to that of the sun. In this state, matter is composed of negatively charged electrons and positively charged ions. Accurately predicting the collisions between these particles is crucial for sustaining a stable fusion reaction.
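
The general idea behind such neural-network accelerations is to replace a costly operator evaluation with a trained network's fast forward pass. The minimal sketch below shows that surrogate pattern on a toy operator; the operator, network size, and training setup are hypothetical and are not the FPL collision operator or the architecture used in the UNIST study.

```python
# Conceptual sketch of a neural-network surrogate: learn to approximate an
# expensive operator so that evaluation becomes a single fast forward pass.
# The toy operator and network below are hypothetical placeholders.
import torch
import torch.nn as nn

def expensive_operator(f: torch.Tensor) -> torch.Tensor:
    """Stand-in for a costly nonlinear operator acting on a discretized function."""
    return torch.tanh(f) * f.sum(dim=-1, keepdim=True)

# Synthetic training data: discretized inputs paired with operator outputs.
torch.manual_seed(0)
inputs = torch.rand(4096, 32)
targets = expensive_operator(inputs)

surrogate = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 32),
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(1000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(inputs), targets)
    loss.backward()
    optimizer.step()

# After training, one forward pass stands in for the costly operator call.
with torch.no_grad():
    approx = surrogate(inputs[:4])
```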

From automation to analysis, AI-driven innovations are making synchrotron science faster, smarter, more efficient

The National Synchrotron Light Source II (NSLS-II)—a U.S. Department of Energy (DOE) Office of Science user facility at DOE’s Brookhaven National Laboratory—is among the world’s most advanced synchrotron light sources, enabling and supporting science across various disciplines. Advances in automation, robotics, artificial intelligence (AI), and machine learning (ML) are transforming how research is done at NSLS-II, streamlining workflows, enhancing productivity, and alleviating workloads for both users and staff.

As synchrotron facilities rapidly advance—providing brighter beams, automation, and robotics to accelerate experiments and discovery—the quantity, quality, and speed of data generated during an experiment continues to increase. Visualizing, analyzing, and sorting these large volumes of data can require an impractical, if not impossible, amount of time and attention.

Presenting scientists with data in a form they can readily interpret is as important as preparing samples for beam time, optimizing the experiment, performing error detection, and remedying anything that may go awry during a measurement.
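
One concrete example of the automated error detection mentioned above is flagging readings that deviate sharply from the recent trend in a stream of measurements. The rolling z-score check below is a hypothetical sketch of that idea, not an NSLS-II pipeline; the data, window size, and threshold are invented for illustration.

```python
# Hypothetical sketch: flag anomalous readings in a measurement stream using a
# rolling z-score. Real beamline pipelines are far more sophisticated.
import numpy as np

def flag_anomalies(readings, window=50, z_thresh=5.0):
    """Return indices of readings that deviate strongly from the recent window."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std() + 1e-12
        if abs(readings[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(1)
signal = rng.normal(loc=100.0, scale=1.0, size=1000)
signal[600] = 130.0                      # simulated detector glitch
print(flag_anomalies(signal))            # expected to flag index 600
```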

12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training

A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, credentials that still allow successful authentication.

The findings once again highlight the severe security risk that hard-coded credentials pose to users and organizations alike, a problem compounded when LLMs trained on such data end up suggesting insecure coding practices to their users.

Truffle Security said it downloaded a December 2024 archive from Common Crawl, which maintains a free, open repository of web crawl data. The massive dataset contains over 250 billion pages spanning 18 years.
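
The general technique behind such findings is to scan raw text for strings that match known credential formats and then check whether the candidates still authenticate. The sketch below shows only the pattern-matching step; the regular expressions are simplified, hypothetical examples and are not Truffle Security's detection rules.

```python
# Hypothetical sketch of credential scanning: match text against simplified
# patterns for common secret formats. Real scanners use many more rules and
# verify candidate secrets against the corresponding service.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]", re.IGNORECASE
    ),
}

def scan_text(text):
    """Yield (pattern_name, matched_string) for every candidate secret found."""
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            yield name, match.group(0)

sample = "API_KEY = 'abcd1234efgh5678ijkl'  # hard-coded secret"
for name, hit in scan_text(sample):
    print(name, "->", hit)
```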

Vo1d malware botnet grows to 1.6 million Android TVs worldwide

A new variant of the Vo1d malware botnet has grown to 1,590,299 infected Android TV devices across 226 countries, recruiting them into anonymous proxy server networks.

This is according to an investigation by Xlab, which has been tracking the new campaign since last November and reports that the botnet peaked on January 14, 2025, and currently comprises 800,000 active bots.

In September 2024, Dr. Web antivirus researchers found 1.3 million devices across 200 countries compromised by Vo1d malware via an unknown infection vector.

Microsoft names cybercriminals behind AI deepfake network

Microsoft has named multiple threat actors belonging to a cybercrime gang accused of developing malicious tools capable of bypassing generative AI guardrails to generate celebrity deepfakes and other illicit content.

An updated complaint identifies the individuals as Arian Yadegarnia from Iran (aka ‘Fiz’), Alan Krysiak of the United Kingdom (aka ‘Drago’), Ricky Yuen from Hong Kong, China (aka ‘cg-dot’), and Phát Phùng Tấn of Vietnam (aka ‘Asakuri’).

As the company explained today, these threat actors are key members of a global cybercrime gang that it tracks as Storm-2139.

A vectorial code for semantics in human hippocampus

As we listen to speech, our brains actively compute the meaning of individual words. Inspired by the success of large language models (LLMs), we hypothesized that the brain employs vectorial coding principles, such that meaning is reflected in the distributed activity of single neurons. We recorded responses of hundreds of neurons in the human hippocampus, which has a well-established role in semantic coding, while participants listened to narrative speech. We find encoding of contextual word meaning in the simultaneous activity of neurons whose individual selectivities span multiple unrelated semantic categories. Like embedding vectors in semantic models, distance between neural population responses correlates with semantic distance; however, this effect was only observed in contextual embedding models (like BERT) and was reversed in non-contextual embedding models (like Word2Vec), suggesting that the semantic distance effect depends critically on contextualization. Moreover, for the subset of highly semantically similar words, even contextual embedding models showed an inverse correlation between semantic and neural distances; we attribute this pattern to the noise-mitigating benefits of contrastive coding. Finally, in further support of the critical role of context, we find that the range of neural responses covaries with lexical polysemy. Ultimately, these results support the hypothesis that semantic coding in the hippocampus follows vectorial principles.

The authors have declared no competing interest.
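
The core comparison described in the abstract can be sketched as a representational-similarity-style analysis: compute pairwise distances between word embeddings, compute pairwise distances between neural population responses to the same words, and test whether the two distance structures are correlated. The example below uses random placeholder data and invented dimensions; it is not the authors' pipeline and only shows the logic of the comparison.

```python
# Schematic of the distance comparison: correlate pairwise semantic distances
# (from word embeddings) with pairwise neural population distances. All data
# here are random placeholders, not recordings or real embeddings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_words, embed_dim, n_neurons = 200, 768, 120
embeddings = rng.standard_normal((n_words, embed_dim))   # stand-in for contextual embeddings
population = rng.standard_normal((n_words, n_neurons))   # stand-in for per-word firing-rate vectors

semantic_dist = pdist(embeddings, metric="cosine")       # pairwise semantic distances
neural_dist = pdist(population, metric="correlation")    # pairwise neural distances

# With random placeholders the correlation is near zero; the hypothesis predicts
# a positive rank correlation when contextual embeddings are paired with real
# hippocampal population responses.
rho, p = spearmanr(semantic_dist, neural_dist)
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
```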
