ShadowSilk hit 36 victims across Central Asia and APAC in July, using Telegram bots to exfiltrate government data.

Threat researchers discovered the first known AI-powered ransomware, dubbed PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems.
The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts.
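To make the mechanism concrete, here is a minimal sketch of how a client builds a request to a local Ollama server's `/api/generate` endpoint. The model name comes from the report; the prompt text is a harmless placeholder, not PromptLock's actual hard-coded prompt, and no request is actually sent.

```python
import json

# Sketch of a non-streaming request to a local Ollama server.
# The endpoint URL is Ollama's default; the prompt below is a
# benign placeholder for illustration only.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "gpt-oss:20b") -> bytes:
    """Serialize a generation request body for the Ollama API."""
    payload = {
        "model": model,    # local model served by Ollama
        "prompt": prompt,  # instruction asking the model to emit Lua code
        "stream": False,   # return a single JSON object, not a stream
    }
    return json.dumps(payload).encode("utf-8")

body = build_generate_request(
    "Write a Lua function that lists the files in a directory."
)
print(json.loads(body)["model"])
```

A real caller would POST `body` to `OLLAMA_URL` and read the generated Lua from the `response` field of the returned JSON.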
High frequency radio waves can wirelessly carry the vast amount of data demanded by emerging technology like virtual reality, but as engineers push into the upper reaches of the radio spectrum, they are hitting walls. Literally.
Ultrahigh-frequency signals are easily blocked by objects, so users can lose a transmission by walking between rooms or even passing a bookcase.
Now, researchers at Princeton Engineering have developed a machine-learning system that could allow ultrahigh frequency transmissions to dodge those obstacles. In an article in Nature Communications, the researchers unveiled a system that shapes transmissions to avoid obstacles, coupled with a neural network that can rapidly adjust to a complex and dynamic environment.
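The "shaping" here is phased-array beam steering. The sketch below shows the generic uniform-linear-array math (not the Princeton system itself): phasing the elements concentrates energy toward a chosen angle, which is the knob a learned controller would turn to route power around an obstacle.

```python
import cmath
import math

# Generic beam-steering sketch for a uniform linear array of antennas.
# This is textbook phased-array math, not the Princeton system.
N = 16            # number of antenna elements
d_over_lam = 0.5  # element spacing in wavelengths

def array_gain(steer_deg: float, look_deg: float) -> float:
    """Normalized array-factor magnitude at look_deg when steered to steer_deg."""
    steer = math.radians(steer_deg)
    look = math.radians(look_deg)
    total = 0j
    for n in range(N):
        # Phase difference between the steering weights and the look direction.
        phase = 2 * math.pi * d_over_lam * n * (math.sin(look) - math.sin(steer))
        total += cmath.exp(1j * phase)
    return abs(total) / N

# Steering to 20 degrees puts nearly all the energy there,
# and very little at 50 degrees.
print(round(array_gain(20, 20), 3), round(array_gain(20, 50), 3))
```

A learning-based controller would pick the steering angle (or a richer set of per-element weights) from channel measurements, re-aiming the beam as people and objects move.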
Batteries, like humans, require medicine to function at their best. In battery technology, this medicine comes in the form of electrolyte additives, which enhance performance by forming stable interfaces, lowering resistance and boosting energy capacity, resulting in improved efficiency and longevity.
Finding the right electrolyte additive for a battery is much like prescribing the right medicine: with hundreds of candidates to consider, identifying the best additive for each battery is a challenge, and traditional experimental screening is slow.
Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are using machine learning models to analyze known electrolyte additives and predict combinations that could improve battery performance. They trained models to forecast key battery metrics, like resistance and energy capacity, and applied these models to suggest new additive combinations for testing.
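The screening loop itself is simple to sketch: score every candidate combination with a trained surrogate model and rank the results for lab testing. The additive names, feature values, and weights below are invented placeholders, not real chemistry or Argonne's actual models.

```python
import itertools

# Toy screening loop in the spirit of the workflow described above.
# Each hypothetical additive gets a (capacity_feature, resistance_feature)
# pair; all numbers are made up for illustration.
additives = {
    "A1": (0.8, 0.2),
    "A2": (0.5, 0.1),
    "A3": (0.3, 0.4),
    "A4": (0.9, 0.6),
}

def predicted_score(combo) -> float:
    """Hypothetical surrogate model: reward capacity, penalize resistance."""
    cap = sum(additives[a][0] for a in combo)
    res = sum(additives[a][1] for a in combo)
    return cap - 0.5 * res

# Rank all pairwise combinations; the top entries go to the lab.
ranked = sorted(itertools.combinations(additives, 2),
                key=predicted_score, reverse=True)
print(ranked[0])
```

In the real pipeline the scoring function is a model trained on measured battery data, but the structure is the same: predict, rank, then test only the most promising combinations experimentally.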
Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems downscale before delivering them to a large language model.
The method relies on full-resolution images carrying instructions that are invisible to the human eye but become apparent when the image quality is lowered by resampling algorithms.
Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by researchers at TU Braunschweig in Germany, which explored the possibility of an image-scaling attack in machine learning.
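The core trick can be shown with a toy grayscale example (this is an illustration of the general image-scaling idea, not the Trail of Bits implementation): hide a payload in exactly the pixels a nearest-neighbor downscale will sample, so it is a few dots lost in 4,096 pixels at full resolution but dominates the small image.

```python
# Toy image-scaling demonstration with grayscale pixels as a 2D list.
BIG, SMALL = 64, 8
step = BIG // SMALL

# Full-resolution cover image: uniform mid-gray.
image = [[128 for _ in range(BIG)] for _ in range(BIG)]

# Payload: a bright diagonal written only at the grid points
# that downscaling will sample.
for k in range(SMALL):
    image[k * step][k * step] = 255

def downscale(img, factor):
    """Nearest-neighbor-style downscale: keep every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

small = downscale(image, step)
# Only 8 of 4096 pixels were changed, yet the payload spans the
# entire downscaled image's diagonal.
print([small[i][i] for i in range(SMALL)])
```

A real attack encodes text rather than a diagonal and tunes the payload to the specific resampling filter (bicubic, bilinear, etc.) the target pipeline uses, so the hidden prompt only materializes after the platform shrinks the image for the model.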