Android Intrusion Logging stores encrypted forensic logs for 12 months, helping experts investigate spyware attacks on high-risk users.
A critical vulnerability affecting certain configurations of the Exim open-source mail transfer agent could be exploited by an unauthenticated remote attacker to execute arbitrary code.
Identified as CVE-2026-45185, the security issue impacts some Exim versions before 4.99.3 that use the default GNU Transport Layer Security (GnuTLS) library for secure communication. It is a use-after-free (UAF) flaw triggered during TLS shutdown while handling BDAT chunked SMTP traffic.
Exim frees a TLS transfer buffer but later continues using stale callback references that can write data into the freed memory region, which can lead to unauthenticated remote code execution (RCE).
A cybersecurity researcher has published proof-of-concept (PoC) exploits for two unpatched Microsoft Windows vulnerabilities, dubbed YellowKey and GreenPlasma: a BitLocker bypass and a privilege-escalation flaw, respectively.
The researcher describes the BitLocker bypass, known as Chaotic Eclipse or Nightmare Eclipse, as functioning like a backdoor because the vulnerable component is present only in the Windows Recovery Environment (WinRE), which is used to repair boot-related issues in Windows.
The latest exploits follow the researcher’s earlier disclosure of BlueHammer (CVE-2026-33825) and RedSun (no identifier), two local privilege escalation (LPE) zero-day flaws, both of which began to be exploited in the wild shortly after being publicly disclosed.
Immunotherapy targeting programmed cell death protein 1 (PD-1) and programmed death ligand 1 (PD-L1) has transformed the management of several types of cancers, including non-oncogene-addicted non-small cell lung cancer (NSCLC) [1], although its efficacy remains limited by resistance mechanisms and constraints inherent to monoclonal antibodies [1]. To overcome these drawbacks, small-molecule PD-L1 inhibitors have been developed, and we previously contributed by identifying the nanomolar triazine-based ligand Tr-10 [2]. In parallel, combinatorial strategies aimed at improving the efficacy of anti-PD-1/PD-L1 immunotherapy have gained increasing attention. Notably, platinum-based chemotherapy combined with immune checkpoint inhibitors is recommended as a first-line treatment for advanced NSCLC with PD-L1 expression <50% [3]. Here, we investigated a novel combination involving our anti-PD-L1, Tr-10 [2], and a bis(phenyl-pyridine)iridium(III) complex, Ir-2 (Fig. 1A) [4]. Iridium (Ir) complexes, unlike platinum drugs, are chemically inert and induce endoplasmic reticulum (ER) stress and overproduction of reactive oxygen species (ROS) [5,6], both culminating in damage-associated molecular pattern (DAMP) release and immunogenic cell death (ICD). Moreover, their photophysical properties enable PD-L1-targeted bioimaging when coupled with PD-L1 ligands (Fig. S1) [7].
We are surrounded by computer-generated voices these days, from navigation systems and voice assistants to automated announcements. But how human do these voices actually sound? A recent study by the Max Planck Institute for Empirical Aesthetics (MPIEA) in Frankfurt am Main, Germany, published in the journal Speech Communication, shows that our perception is affected by three things: how something is said, what is being said, and whether we understand the language.
In two consecutive experiments, the researchers investigated how people perceive the difference between real and synthetic voices. They created 16 short German sentences, such as: “The boy gave his father a hat.” The team then manipulated the sentences in three different ways by changing the word order, replacing words with similar-sounding pseudowords, and combining both changes. This resulted in four versions of each sentence. All versions were recorded by eight human speakers and eight computer-generated text-to-speech (TTS) voices.
In the first experiment, 40 German-speaking participants rated how human the voices sounded. Overall, the computer-generated voices were perceived as less human than the human voices. An analysis of the voices’ acoustic characteristics revealed objectively measurable differences in sound between human and TTS-generated voices.