Toggle light / dark theme

BatShadow Group Uses New Go-Based ‘Vampire Bot’ Malware to Hunt Job Seekers

In October 2024, Cyble also disclosed details of a sophisticated multi-stage attack campaign orchestrated by a Vietnamese threat actor that targeted job seekers and digital marketing professionals with Quasar RAT using phishing emails containing booby-trapped job description files.

BatShadow is assessed to be active for at least a year, with prior campaigns using similar domains, such as samsung-work[.]com, to propagate malware families including Agent Tesla, Lumma Stealer, and Venom RAT.

“The BatShadow threat group continues to employ sophisticated social engineering tactics to target job seekers and digital marketing professionals,” Aryaka said. “By leveraging disguised documents and a multi-stage infection chain, the group delivers a Go-based Vampire Bot capable of system surveillance, data exfiltration, and remote task execution.”

Google’s New AI Doesn’t Just Find Vulnerabilities — It Rewrites Code to Patch Them

Google’s DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits.

The efforts add to the company’s ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz.

DeepMind said the AI agent is designed to be both reactive and proactive, by fixing new vulnerabilities as soon as they are spotted as well as rewriting and securing existing codebases with an aim to eliminate whole classes of vulnerabilities in the process.

Google won’t fix new ASCII smuggling attack in Gemini

Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, alter the model’s behavior, and silently poison its data.

ASCII smuggling is an attack where special characters from the Tags Unicode block are used to introduce payloads that are invisible to users but can still be detected and processed by large-language models (LLMs).

It’s similar to other attacks that researchers presented recently against Google Gemini, which all exploit a gap between what users see and what machines read, like performing CSS manipulation or exploiting GUI limitations.

AI-radar system tracks subtle health changes by assessing patient’s walk

Engineering and health researchers at the University of Waterloo have developed a radar and artificial intelligence (AI) system that can monitor multiple people walking in busy hospitals and long-term care facilities to identify possible health issues.

The new technology—housed in a wall-mounted device about the size of a deck of cards—uses AI software and radar hardware to accurately measure how fast each person is walking. A paper on their work, “Non-contact, non-visual, multi-person hallway gait monitoring,” appears in Scientific Reports.

“Walking speed is often called a functional vital sign because even subtle declines can be an early warning of health problems,” said Dr. Hajar Abedi, a former postdoctoral researcher in electrical and computer engineering at Waterloo.

Smart blood: How AI reads your body’s aging signals

Could a simple blood test reveal how well someone is aging? A team of researchers led by Wolfram Weckwerth from the University of Vienna, Austria, and Nankai University, China, has combined advanced metabolomics with cutting-edge machine learning and a novel network modeling tool to uncover the key molecular processes underlying active aging.

Their study, published in npj Systems Biology and Applications, identifies aspartate as a dominant biomarker of physical fitness and maps the dynamic interactions that support healthier aging.

It has long been known that exercise protects mobility and lowers the risk of chronic disease. Yet the precise molecular processes that translate physical activity into healthier aging remain poorly understood. The researchers set out to answer a simple but powerful question: Can we see the benefits of an active lifestyle in elderly individuals directly in the blood—and pinpoint the molecules that matter most?

Scientists create ChatGPT-like AI model for neuroscience to build one of the most detailed mouse brain maps to date

In a powerful fusion of AI and neuroscience, researchers at the University of California, San Francisco (UCSF) and Allen Institute designed an AI model that has created one of the most detailed maps of the mouse brain to date, featuring 1,300 regions/subregions.

This new map includes previously uncharted subregions of the brain, opening new avenues for neuroscience exploration. The findings were published in Nature Communications. They offer an unprecedented level of detail and advance our understanding of the brain by allowing researchers to link specific functions, behaviors, and disease states to smaller, more precise cellular regions—providing a roadmap for new hypotheses and experiments about the roles these areas play.

“It’s like going from a map showing only continents and countries to one showing states and cities,” said Bosiljka Tasic, Ph.D., director of molecular genetics at the Allen Institute and one of the study authors.

Brownstone Research

Super AI is coming soon.


This is your shot to “partner” with Elon Musk in Project Colossus, the supercomputer that Jeff believes will power the next generation of AI.

Jeff is about to show you how you could take a stake in Elon’s private company starting with as little as $500…

Without having connections in Silicon Valley… Without having to be an accredited investor… And without having to be rich.

How One AI Model Creates a Physical Intuition of Its Environment

Once this pretraining stage is complete, the next step is to tailor V-JEPA to accomplish specific tasks such as classifying images or identifying actions depicted in videos. This adaptation phase requires some human-labeled data. For example, videos have to be tagged with information about the actions contained in them. The adaptation for the final tasks requires much less labeled data than if the whole system had been trained end to end for specific downstream tasks. In addition, the same encoder and predictor networks can be adapted for different tasks.

Intuition Mimic

In February, the V-JEPA team reported how their systems did at understanding the intuitive physical properties of the real world — properties such as object permanence, the constancy of shape and color, and the effects of gravity and collisions. On a test called IntPhys, which requires AI models to identify if the actions happening in a video are physically plausible or implausible, V-JEPA was nearly 98% accurate. A well-known model that predicts in pixel space was only a little better than chance.

/* */