
Large Language Models (LLMs) have rapidly become an integral part of our digital landscape, powering everything from chatbots to code generators. However, as these AI systems increasingly rely on proprietary, cloud-hosted models, concerns over user privacy and data security have escalated. How can we harness the power of AI without exposing sensitive data?

A recent study, “Entropy-Guided Attention for Private LLMs,” by Nandan Kumar Jha, a Ph.D. candidate at the NYU Center for Cybersecurity (CCS), and Brandon Reagen, Assistant Professor in the Department of Electrical and Computer Engineering and a member of CCS, introduces a novel approach to making AI more secure.

The paper was presented at the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI 25) in early March and is available on the arXiv preprint server.
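The quantity behind the paper's name can be illustrated with standard math: the Shannon entropy of a softmax attention row, which measures how spread out or concentrated the attention weights are. The sketch below is illustrative only (plain Python, not the authors' code), and the function names are assumptions:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_entropy(scores):
    # Shannon entropy of the attention distribution: high when attention
    # is spread uniformly, low when it collapses onto a few tokens.
    probs = softmax(scores)
    return -sum(p * math.log(p) for p in probs if p > 0)
```

A uniform row of scores yields the maximum entropy (log of the row length), while a sharply peaked row yields entropy near zero; monitoring this quantity is one standard way to reason about attention behavior.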

More than seven years ago, cybersecurity researchers were thoroughly rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities uncovered in the microprocessors found in virtually every computer on the planet.

Perhaps the scariest thing about these vulnerabilities is that they didn't stem from typical software bugs or hardware defects, but from the processor architecture itself. These attacks changed our understanding of what can be trusted in a system, forcing designers to fundamentally reexamine where they put their security resources.

These attacks emerged from an optimization technique called "speculative execution," in which the processor guesses the outcome of a branch and executes instructions ahead of a slow memory access, then discards the results of any instructions that turn out not to be needed.
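The idea can be sketched as a toy software model (real speculation happens in hardware; this is only an analogy, and all names here are illustrative):

```python
# Toy model of speculative execution: the "processor" guesses a branch
# outcome and runs ahead while a slow condition (e.g. a memory load)
# resolves; if the guess was wrong, the speculative work is discarded.

def speculative_execute(predicted_taken, slow_condition, work_if_taken, work_if_not):
    # Start executing the predicted path before the condition is known.
    speculative_result = work_if_taken() if predicted_taken else work_if_not()
    actual_taken = slow_condition()  # the slow load finally resolves
    if actual_taken == predicted_taken:
        return speculative_result    # prediction correct: keep the work
    # Misprediction: throw away the speculative work, redo the right path.
    return work_if_taken() if actual_taken else work_if_not()
```

The insight behind Spectre was that the "discarded" work is not truly gone: it leaves measurable traces in microarchitectural state such as the cache, which an attacker can observe.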

Cybersecurity researchers have shed light on a new phishing-as-a-service (PhaaS) platform that leverages Domain Name System (DNS) mail exchange (MX) records to serve fake login pages impersonating about 114 brands.

DNS intelligence firm Infoblox is tracking the actor behind the PhaaS, the phishing kit, and the related activity under the moniker Morphing Meerkat.
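The MX-record trick attributed to Morphing Meerkat can be understood with a minimal sketch: the kit resolves the MX host of the victim's email domain and serves the login template matching that provider. The mapping, hostname markers, and function name below are illustrative assumptions, shown for defensive understanding only:

```python
# Illustrative mapping from MX hostname markers to brand login templates.
MX_TO_TEMPLATE = {
    "google": "gmail_login.html",
    "outlook": "microsoft_login.html",
    "yahoodns": "yahoo_login.html",
}

def pick_phishing_template(mx_hostname: str) -> str:
    # Choose the fake login page whose brand matches the victim's
    # mail provider, inferred from the MX record's hostname.
    host = mx_hostname.lower()
    for marker, template in MX_TO_TEMPLATE.items():
        if marker in host:
            return template
    return "generic_webmail_login.html"  # fallback when the provider is unknown
```

This is why the technique is effective: the victim sees a login page styled like their actual email provider rather than a one-size-fits-all form.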

The legacy domain for Microsoft Stream was hijacked to show a fake Amazon site promoting a Thailand casino, causing all SharePoint sites with old embedded videos to display the spam content instead.

Microsoft Stream is an enterprise video streaming service that allows organizations to upload and share videos in Microsoft 365 apps, such as Teams and SharePoint.

Video content hosted on Microsoft Stream was accessed or embedded through a portal at microsoftstream.com.

The threat actor known as EncryptHub exploited a recently patched security vulnerability in Microsoft Windows as a zero-day to deliver a wide range of malware families, including backdoors and information stealers such as Rhadamanthys and StealC.

“In this attack, the threat actor manipulates .msc files and the Multilingual User Interface Path (MUIPath) to download and execute malicious payload, maintain persistence and steal sensitive data from infected systems,” Trend Micro researcher Aliakbar Zahravi said in an analysis.

The vulnerability in question is CVE-2025-26633 (CVSS score: 7.0), described by Microsoft as an improper neutralization vulnerability in Microsoft Management Console (MMC) that could allow an attacker to bypass a security feature locally. It was fixed by the company earlier this month as part of its Patch Tuesday update.

Such credentials could be obtained from a data breach of a social media service or be acquired from underground forums where they are advertised for sale by other threat actors.

Credential stuffing is also different from brute-force attacks, which revolve around cracking passwords, login credentials, and encryption keys through trial and error.
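The distinction can be made concrete with a short sketch: credential stuffing replays known username-password pairs from a breach (one attempt per pair, spread across many accounts), while brute force enumerates password candidates against a single account. Function names and parameters here are illustrative, for defensive understanding only:

```python
from itertools import product

def credential_stuffing_attempts(leaked_pairs):
    # One login attempt per leaked (username, password) pair:
    # low volume per account, but spread across many accounts.
    return list(leaked_pairs)

def brute_force_attempts(username, alphabet, max_len):
    # Every password candidate up to max_len for a single account:
    # exhaustive trial and error, high volume against one target.
    attempts = []
    for n in range(1, max_len + 1):
        for combo in product(alphabet, repeat=n):
            attempts.append((username, "".join(combo)))
    return attempts
```

The volume difference is why defenses differ too: rate limiting per account catches brute force, while credential stuffing is better countered by breach-password checks and multi-factor authentication.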

Atlantis AIO, per Abnormal Security, offers threat actors the ability to launch credential stuffing attacks at scale via pre-configured modules for targeting a range of platforms and cloud-based services, thereby facilitating fraud, data theft, and account takeovers.