
Researchers are warning that hackers are actively exploiting a disputed vulnerability in a popular open-source AI framework known as Ray.

This tool is commonly used to develop and deploy large-scale Python applications, particularly for tasks like machine learning, scientific computing and data processing.

According to Ray’s developer, Anyscale, the framework is used by major tech companies such as Uber, Amazon and OpenAI.

Russian organizations are at the receiving end of cyber attacks that have been found to deliver a Windows version of a malware called Decoy Dog.

Cybersecurity company Positive Technologies is tracking the activity cluster under the name Operation Lahat, attributing it to an advanced persistent threat (APT) group called HellHounds.

“The Hellhounds group compromises organizations they select and gain a foothold on their networks, remaining undetected for years,” security researchers Aleksandr Grigorian and Stanislav Pyzhov said. “In doing so, the group leverages primary compromise vectors, from vulnerable web services to trusted relationships.”

A massive trove of 361 million email addresses, drawn from credentials stolen by password-stealing malware, harvested in credential stuffing attacks, and exposed in data breaches, has been added to the Have I Been Pwned data breach notification service, allowing anyone to check whether their accounts have been compromised.
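Checking breached email addresses through Have I Been Pwned requires an API key, but the service's companion Pwned Passwords range API is free and uses a k-anonymity scheme: the client sends only the first five characters of a password's SHA-1 digest and matches the rest locally. A minimal sketch of that check (function names here are my own, not from the article):

```python
import hashlib
import urllib.request

def sha1_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    that is sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords
    corpus; only the 5-char hash prefix ever leaves the machine."""
    prefix, suffix = sha1_range_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response is one "SUFFIX:COUNT" pair per line for the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

Because matching happens client-side, the service never learns which password was being checked.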

Cybersecurity researchers collected these credentials from numerous Telegram cybercrime channels, where the stolen data is commonly leaked to the channel’s users to build reputation and subscribers.

The stolen data is usually leaked in one of three forms: username and password combinations (typically obtained through credential stuffing attacks or data breaches), username and password pairs together with the URL they were captured from (harvested by password-stealing malware), and raw browser cookies (also harvested by password-stealing malware).
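The second form described above is commonly dumped as colon-delimited `url:username:password` lines. Exact layouts vary between channels, so the following parser is a sketch assuming that specific layout; the record type and sample values are illustrative, not taken from the article:

```python
from typing import NamedTuple

class Credential(NamedTuple):
    url: str        # login page the credentials were captured from
    username: str
    password: str

def parse_stealer_line(line: str) -> Credential:
    """Parse one colon-delimited `url:username:password` record.

    Splitting from the right keeps the colon in the URL scheme
    (e.g. `https://`) intact; this assumes the username and password
    themselves contain no colons, which is not always true in the wild.
    """
    url, username, password = line.strip().rsplit(":", 2)
    return Credential(url, username, password)
```

For example, `parse_stealer_line("https://example.com/login:alice@example.com:hunter2")` yields the URL, username, and password as separate fields despite the colon inside `https://`.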


AI technology is spreading quickly throughout many different industries, and its integration depends on users’ trust and safety concerns. This matter becomes complicated when the algorithms powering AI-based tools are vulnerable to cyberattacks that could have detrimental results.

Dr. David P. Woodruff from Carnegie Mellon University and Dr. Samson Zhou from Texas A&M University are working to strengthen the algorithms used by big data AI models against attacks.

Black hat hackers have reportedly unleashed malicious software targeting over 1,500 banks and their customers worldwide.

Security researchers at IBM say a revamped version of the Grandoreiro banking trojan has just rolled out, enabling attackers to perform banking fraud in 60 countries.

The malware allows attackers to send email notices that appear to be urgent government requests for payments.

Via Sebastian Raschka.

For weekend reading:

Chapter 6 (Finetuning LLMs for Classification) of the Build an LLM from Scratch book is now available on the Manning website:


  • Introducing different LLM finetuning approaches
  • Preparing a dataset for text classification
  • Modifying a pretrained LLM for finetuning
  • Finetuning an LLM to identify spam messages
  • Evaluating the accuracy of a finetuned LLM classifier
  • Using a finetuned LLM to classify new data
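The core move in the steps above, adapting a pretrained LLM for classification, usually amounts to freezing the pretrained weights and swapping the next-token output head for a small classification head. A PyTorch sketch of that idea follows; the tiny stand-in model and all dimensions are placeholders of my own, not the book's actual GPT implementation:

```python
import torch
import torch.nn as nn

# Tiny stand-in for a pretrained decoder-only LLM (a placeholder, not
# the book's model): token embedding -> transformer body -> LM head.
class TinyLLM(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.body = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=4, batch_first=True
        )
        self.out_head = nn.Linear(emb_dim, vocab_size)  # next-token head

    def forward(self, token_ids):
        x = self.body(self.emb(token_ids))
        return self.out_head(x)

model = TinyLLM()

# 1) Freeze every pretrained parameter.
for p in model.parameters():
    p.requires_grad = False

# 2) Replace the LM head with a 2-class (spam / not-spam) head; a
#    freshly constructed nn.Linear has requires_grad=True, so only
#    this new layer will be updated during finetuning.
num_classes = 2
model.out_head = nn.Linear(model.emb.embedding_dim, num_classes)

# 3) Classify from the logits of the *last* token, which in a
#    decoder-style model has attended to the whole sequence.
batch = torch.randint(0, 1000, (8, 16))  # 8 dummy "messages", 16 tokens
logits = model(batch)[:, -1, :]          # shape: (8, num_classes)
```

Training then proceeds with an ordinary cross-entropy loss over `logits` and the spam/ham labels; evaluation is just argmax accuracy on held-out messages.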

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Thursday added a security flaw impacting the Linux kernel to the Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation.

Tracked as CVE-2024-1086 (CVSS score: 7.8), the high-severity issue relates to a use-after-free bug in the netfilter component that permits a local attacker to elevate privileges from a regular user to root and possibly execute arbitrary code.

“Linux kernel contains a use-after-free vulnerability in the netfilter: nf_tables component that allows an attacker to achieve local privilege escalation,” CISA said.