Standard screening can miss some tumors; these are diagnosed later and are known as interval cancers. They are often more aggressive than screen-detected disease.
Scientists mapped every possible mutation in a key genetic hotspot, revealing how distinct mutations drive tumor growth differently, which could influence anticancer therapy success.
A blood test, combined with an ultrathin material derived from graphite, could significantly advance efforts to detect Alzheimer’s disease at its very earliest stage, even before symptoms appear.
Alzheimer’s disease is the most common form of dementia. For millions of Europeans—and the health services that care for them—it is a ticking time bomb, with still no cure. But EU researchers are developing a simple tool to enable much earlier detection, potentially decades before symptoms appear.
Early detection matters because treatment is most effective when started as soon as possible. This gives people a better chance to slow the progression of the disease and plan for the future. Today, around 7 million people in Europe live with Alzheimer’s, a number expected to double by 2030, according to the European Brain Council.
In a Nature Communications study, researchers from China have developed an error-aware probabilistic update (EaPU) method that aligns memristor hardware’s noisy updates with neural network training, slashing energy use by nearly six orders of magnitude versus GPUs while boosting accuracy on vision tasks. The study validates EaPU on 180 nm memristor arrays and large-scale simulations.
Analog in-memory computing with memristors promises to overcome digital chips’ energy bottlenecks by performing matrix operations directly through device physics (Ohm’s law for multiplication, Kirchhoff’s current law for summation). Memristors are devices that combine memory and processing, much like synapses in the brain.
Inference on these systems works well, as chips from IBM and Stanford have shown. But training deep neural networks on them hits a snag: “write” errors when programming memristor weights.
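The paper’s EaPU update rule is not reproduced here, but the underlying idea, folding knowledge of coarse, noisy device writes into the training update itself, can be sketched in a few lines. The toy below (NumPy, with an invented pulse size and write-noise level) commits a fixed conductance pulse with a probability proportional to the intended update, a stochastic-rounding-style workaround; it is an illustration of the general approach, not the authors’ algorithm.

```python
# Illustrative sketch only: training with noisy, pulse-based weight writes.
# This is NOT the EaPU method from the paper; device parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = X @ w_true
n, d = 512, 16
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

w = np.zeros(d)                # simulated "memristor" weights
pulse = 0.02                   # smallest programmable conductance step (assumed)
write_noise = 0.3 * pulse      # std dev of write error per pulse (assumed)
lr = 0.1

for step in range(300):
    grad = X.T @ (X @ w - y) / n          # exact gradient on the digital side
    target = -lr * grad                    # intended analog update

    # Probabilistic commit: fire one pulse in the sign direction with a
    # probability proportional to the intended update magnitude, so the
    # *expected* applied update matches the target despite coarse pulses.
    p_fire = np.clip(np.abs(target) / pulse, 0.0, 1.0)
    fire = rng.random(d) < p_fire
    applied = np.sign(target) * pulse * fire

    # Each committed pulse lands with device write noise.
    applied += fire * rng.normal(scale=write_noise, size=d)
    w += applied

print("final MSE:", float(np.mean((X @ w - y) ** 2)))
```

In a real memristor array, the pulse size and noise statistics would come from device characterization rather than the constants assumed above.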
Many advanced electronic devices – such as OLEDs, batteries, solar cells, and transistors – rely on complex multilayer architectures composed of multiple materials. Optimizing device performance, stability, and efficiency requires precise control over layer composition and arrangement, yet experimental exploration of new designs is costly and time-intensive. Although physics-based simulations offer insight into individual materials, they are often impractical for full device architectures due to computational expense and methodological limitations.
Schrödinger has developed a machine learning (ML) framework that enables users to predict key performance metrics of multilayered electronic devices from simple, intuitive descriptions of their architecture and operating conditions. This approach integrates automated ML workflows with physics-based simulations in the Schrödinger Materials Science suite, leveraging simulation outputs to improve model accuracy and predictive power.

This advancement provides a scalable solution for rapidly exploring novel device design spaces – enabling targeted evaluations such as modifying layer composition, adding or removing layers, and adjusting layer dimensions or morphology. Users can efficiently predict device performance and uncover interpretable relationships between functionality, layer architecture, and materials chemistry. While this webinar focuses on single-unit and tandem OLEDs, the approach is readily adaptable to a wide range of electronic devices.
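Schrödinger’s descriptors and workflow are not spelled out in this summary; as a generic, hypothetical illustration of the concept, the sketch below featurizes a made-up OLED-like layer stack plus a drive voltage and fits an off-the-shelf regressor to a fabricated “efficiency” target. Every name, feature, and number in it is a placeholder, not part of the Schrödinger Materials Science suite.

```python
# Generic illustration only: predicting a device metric from a simple
# layer-stack description. Not Schrödinger's workflow or API; layer names,
# features, and the "efficiency" target are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def featurize(stack, voltage):
    """Turn a layer-stack description into a fixed-length feature vector:
    total thickness, number of layers, emissive-layer thickness, drive voltage."""
    total = sum(t for _, t in stack)
    eml = sum(t for name, t in stack if name == "EML")
    return [total, len(stack), eml, voltage]

# Synthetic training data: random stacks with a made-up efficiency response.
samples, targets = [], []
for _ in range(200):
    stack = [("HTL", rng.uniform(20, 60)),
             ("EML", rng.uniform(10, 40)),
             ("ETL", rng.uniform(20, 60))]
    voltage = rng.uniform(3, 8)
    x = featurize(stack, voltage)
    # Fabricated ground truth just to make the example run end to end.
    eff = 20 - 0.1 * abs(x[2] - 25) - 0.5 * abs(voltage - 5) + rng.normal(0, 0.3)
    samples.append(x)
    targets.append(eff)

X, y = np.array(samples), np.array(targets)
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
new_stack = [("HTL", 40.0), ("EML", 25.0), ("ETL", 35.0)]
print("predicted efficiency:", model.predict([featurize(new_stack, 5.0)])[0])
```

A production workflow would replace the fabricated target with measured or physics-simulated device metrics and use richer, chemistry-aware descriptors.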
Here, Carlos L. Arteaga & team analyze patient biopsies, finding that CD8+ T cells in the tumor immune microenvironment (TIME) promote resistance to estrogen suppression in HR+ breast cancer via CXCL11 and immune-related pathways:
The images: GeoMx-based immunofluorescence of breast tumor tissue obtained during estrogen deprivation therapy (letrozole) demonstrates increased immune cell infiltration in estrogen deprivation–resistant tumors (right) compared with sensitive tumors (left).
1. UT Southwestern Simmons Comprehensive Cancer Center, Department of Internal Medicine, University of Texas Southwestern (UTSW) Medical Center, Dallas, Texas, USA.
2. Department of Clinical Medicine and Surgery, University of Naples Federico II, Naples, Italy.
3. Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio, USA.
Here, Miguel Verbitsky & team analyze urine from 325 participants in the Randomized Intervention for Children with Vesicoureteral Reflux (RIVUR) study, revealing that genetic variation influences the bacterial composition of urine in children with recurrent urinary tract infections and vesicoureteral reflux:
The image shows cytokeratin 5 and smooth muscle actin labeling in mouse bladder after UTI; infection increases expression of Cxcl12 and Cxcr4.
3. Department of Dermatology; and
4. Center for Precision Medicine and Genomics, Columbia University, New York, New York, USA.
5. Department of Neurology, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA.
By Chuck Brooks, president of Brooks Consulting International
In 2026, government technological innovation has reached a key turning point. After years of modernization plans, pilot projects and gradual acceptance, government leaders are increasingly incorporating artificial intelligence and quantum technologies directly into mission-critical capabilities. These technologies are becoming essential infrastructure for economic competitiveness, national security and scientific advancement rather than mere scholarly curiosities.
We are seeing a deliberate shift in the federal landscape from isolated pilots to the planned deployment of emerging technologies across the whole of government. This evolution reflects not only technological momentum but also policy leadership, public-private collaboration and expanded industrial capability.
When AI systems fail, will they fail by systematically pursuing the wrong goals, or by being a hot mess? We decompose the errors of frontier reasoning models into bias (systematic) and variance (incoherent) components and find that, as tasks get harder and reasoning gets longer, model failures become increasingly dominated by incoherence rather than systematic misalignment. This suggests that future AI failures may look more like industrial accidents than coherent pursuit of a goal we did not train them to pursue.
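The scoring protocol behind the decomposition is not detailed here; assuming each attempt at a task can be reduced to a signed scalar error, a minimal sketch of splitting repeated attempts into a systematic (bias) and an incoherent (variance) component looks like this:

```python
# Minimal sketch of a bias/variance decomposition over repeated attempts at the
# same tasks. The scoring scheme is an assumption for illustration, not the
# paper's protocol: each attempt is reduced to a signed scalar error vs. the target.
import numpy as np

def decompose(errors):
    """errors: array of shape (n_tasks, n_attempts) of signed errors.
    Returns (bias^2, variance) averaged over tasks, so that per task
    mean squared error = bias^2 + variance."""
    per_task_mean = errors.mean(axis=1)          # systematic offset per task
    bias_sq = np.mean(per_task_mean ** 2)        # coherent, repeatable error
    variance = np.mean(errors.var(axis=1))       # incoherent scatter across attempts
    return bias_sq, variance

rng = np.random.default_rng(0)
n_tasks, n_attempts = 50, 20

# Two toy regimes: one dominated by a systematic offset, one by incoherent noise.
systematic = rng.normal(loc=2.0, scale=0.3, size=(n_tasks, n_attempts))
incoherent = rng.normal(loc=0.1, scale=2.0, size=(n_tasks, n_attempts))

for name, errs in [("systematic", systematic), ("incoherent", incoherent)]:
    b2, var = decompose(errs)
    print(f"{name}: bias^2={b2:.2f}, variance={var:.2f}")
```

The finding summarized above corresponds to the second regime coming to dominate as tasks get harder and reasoning chains get longer.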
Reservoir computing (RC) is a machine learning paradigm that harnesses dynamical systems as computational resources. In its quantum extension—quantum reservoir computing (QRC)—these principles are applied to quantum systems, whose rich dynamics broaden the landscape of information processing. In classical RC, optimal performance is typically achieved at the “edge of chaos,” the boundary between order and chaos. Here, we identify its quantum many-body counterpart using QRC implemented on the celebrated Sachdev-Ye-Kitaev model. Our analysis reveals substantial performance enhancements near two distinct characteristic “edges”: a temporal boundary defined by the Thouless time, beyond which system dynamics is described by random matrix theory, and a parametric boundary governing the transition from integrable to chaotic regimes.
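The quantum reservoir itself (the SYK model and its Thouless-time and integrability boundaries) does not fit in a short snippet, but the classical edge-of-chaos intuition the abstract builds on can be illustrated. The sketch below is a tiny echo state network whose spectral radius, the standard knob that drives a classical reservoir from ordered toward chaotic dynamics, is swept across the stability boundary on a simple recall task; all sizes and hyperparameters are arbitrary choices for illustration, not anything from the paper.

```python
# Classical illustration of the "edge of chaos" idea in reservoir computing:
# an echo state network whose spectral radius is swept from very stable to
# near-unstable. Task and hyperparameters are arbitrary; this is not the
# quantum (SYK-based) reservoir studied in the paper.
import numpy as np

rng = np.random.default_rng(0)
T, washout, n_res, delay = 2000, 200, 100, 5
u = rng.uniform(-1, 1, size=T)          # random input signal
target = np.roll(u, delay)              # task: recall the input from `delay` steps ago

W_in = rng.uniform(-0.5, 0.5, size=n_res)
W0 = rng.normal(size=(n_res, n_res))
W0 /= np.max(np.abs(np.linalg.eigvals(W0)))   # normalize to spectral radius 1

for rho in [0.3, 0.9, 1.2]:
    W = rho * W0                               # scale reservoir toward/past the edge
    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])       # reservoir state update
        states[t] = x
    # Ridge-regression readout trained on post-washout states only.
    S, y = states[washout:], target[washout:]
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
    mse = np.mean((S @ W_out - y) ** 2)
    print(f"spectral radius {rho:.1f}: recall MSE = {mse:.4f}")
```

Only the linear readout is trained; the reservoir weights stay fixed, which is what makes the dynamical regime (here set by the spectral radius) the decisive performance knob.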