A critical WatchGuard Fireware flaw (CVE-2025-9242) allows unauthenticated remote code execution via the IKEv2 VPN.

European law enforcement in an operation codenamed ‘SIMCARTEL’ has dismantled an illegal SIM-box service that enabled more than 3,200 fraud cases and caused at least 4.5 million euros in losses.
The criminal service operated about 1,200 SIM-box devices loaded with 40,000 SIM cards, providing phone numbers used in telecommunication crimes ranging from phishing and investment fraud to impersonation and extortion.
In an announcement today, Europol says that the cybercrime service operated through two websites, gogetsms.com and apisim.com, which have been seized and now display a law enforcement banner.
Reinforcement learning is terrible — but everything else is worse.
Karpathy’s sharpest takes yet on AGI, RL, and the future of learning.
Andrej Karpathy’s vision of AGI isn’t a bang — it’s a gradient descent through human history.
Karpathy on AGI & Superintelligence.
* AGI won’t be a sudden singularity — it will blend into centuries of steady progress (~2% GDP growth).
* Superintelligence is uncertain and likely gradual, not an instant “explosion.”
We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and matched control datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with token scale and training operations held constant across conditions. Relative to the control condition, continual pre-training of four LLMs on the junk dataset causes non-trivial declines, as measured by Hedges' $g$ effect sizes.
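The study reports its declines as Hedges' $g$, a bias-corrected standardized mean difference between two conditions. A minimal sketch of that statistic (illustrative only, not the paper's evaluation code):

```python
import math

def hedges_g(a, b):
    """Hedges' g: bias-corrected standardized mean difference between samples a and b."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    # Pooled standard deviation across both groups
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    # Small-sample bias correction factor (Hedges' J)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * (m1 - m2) / s_pooled
```

Here a negative $g$ for junk-trained scores versus control scores would indicate a decline; "non-trivial" conventionally means $|g|$ above roughly 0.2–0.5.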
Prof Azim Surani, senior author of the paper, said: “Although it is still in the early stages, the ability to produce human blood cells in the lab marks a significant step towards future regenerative therapies — which use a patient’s own cells to repair and regenerate damaged tissues.”
Human blood stem cells, also known as hematopoietic stem cells, are immature cells that can develop into any type of blood cell.
These include red blood cells that carry oxygen and various types of white blood cells crucial to the immune system.
In a study published in Neuron, a research team at the Department of Neurology at Massachusetts General Hospital aimed to understand how immune cells of the brain, called microglia, contribute to Alzheimer's disease (AD) pathology. It is known that subtle changes, or mutations, in genes expressed in microglia are associated with an increased risk of developing late-onset AD.
The study focused on one such mutation in the microglial gene TREM2, an essential switch that activates microglia to clean up toxic amyloid plaques (abnormal protein deposits) that build up between nerve cells in the brain. This mutation, called T96K, is a "gain-of-function" mutation in TREM2, meaning it increases TREM2 activation and keeps the gene persistently active.
The researchers explored how this mutation alters microglial function to increase AD risk. The team generated a mutant mouse model carrying the mutation and bred it with a mouse model of AD that develops brain changes consistent with the disease. They found that, exclusively in female AD mice, the mutation strongly reduced the ability of microglia to respond to toxic amyloid plaques, making these cells less protective against brain aging.
Why are we able to recall only some of our past experiences? A new study led by Jun Nagai at the RIKEN Center for Brain Science in Japan has an answer. Surprisingly, it turns out that the brain cells responsible for stabilizing memories aren’t neurons. Rather, they are astrocytes, a type of glial cell that is usually thought of as a role player in the game of learning and memory.
Published in Nature, the study shows how emotionally intense experiences like fear biologically tag small groups of astrocytes for several days so that they can re-engage when a mouse recalls the experience. It is this repeated astrocytic engagement that stabilizes memories.
Astrocytes have traditionally been thought to have a supporting role in the brain, literally. But when it became clear that engrams—the actual memory traces that exist in neurons—cannot alone account for stabilized, long-term memories, Nagai and his team turned to astrocytes for a solution.
Large language models (LLMs) can store and recall vast quantities of medical information, but their ability to process this information rationally remains variable. A new study led by investigators from Mass General Brigham demonstrated a vulnerability: because LLMs are designed to be sycophantic, or excessively helpful and agreeable, they overwhelmingly fail to appropriately challenge illogical medical queries despite possessing the information necessary to do so.
Findings, published in npj Digital Medicine, demonstrate that targeted training and fine-tuning can improve LLMs’ abilities to respond to illogical prompts accurately.
“As a community, we need to work on training both patients and clinicians to be safe users of LLMs, and a key part of that is going to be bringing to the surface the types of errors that these models make,” said corresponding author Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine (AIM) Program and Clinical Lead for Data Science/AI at Mass General Brigham.