The world's first facility for permanently disposing of spent nuclear fuel is set to begin operations in Finland after decades of construction.
David J. Silvester, a mathematics professor at the University of Manchester, has developed a novel machine-learning method to detect sudden changes in fluid behavior, improving the speed and reducing the cost of identifying these instabilities and overcoming one of the major obstacles to using machine learning to simulate physical systems. The findings are published in the Journal of Computational Physics.
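The paper's own algorithm is not described here, so purely as a generic stand-in: the sketch below flags an abrupt shift in a synthetic flow diagnostic (standing in for something like the kinetic energy of a simulated flow) with a simple rolling z-score test. Every variable name, window size, and threshold is invented for illustration and is not the published method.

```python
# Illustrative only, NOT the method from the paper: flag an abrupt shift in a
# synthetic flow diagnostic using a rolling z-score against a recent baseline.
import numpy as np

rng = np.random.default_rng(1)

# Steady signal that jumps at step 300, mimicking the onset of an instability.
signal = np.concatenate([
    1.0 + 0.02 * rng.standard_normal(300),
    1.5 + 0.05 * rng.standard_normal(200),
])

window = 50       # past samples treated as the "normal" baseline
threshold = 6.0   # z-score above which a sudden change is flagged

for t in range(window, len(signal)):
    baseline = signal[t - window:t]
    z = abs(signal[t] - baseline.mean()) / (baseline.std() + 1e-12)
    if z > threshold:
        print(f"sudden change flagged at step {t} (z = {z:.1f})")
        break
```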
Computational simulations of mathematical models of fluid flow are essential for everyday applications ranging from predicting the weather to assessing nuclear reactor safety. The advent of this simulation capability over the past 50 years has revolutionized the development of fuel-efficient airplanes, and sail configurations on racing yachts can now be optimized in real time, providing the marginal gains needed to win races in the America's Cup.
Optimized aerodynamics means that modern-day cyclists can ride faster, golf balls fly further and Olympic swimmers consistently set world records. Computational fluid dynamics also enables the modeling of blood flow in the human heart, making patient-specific surgery possible.
When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.
In a paper appearing in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.
The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.
With Brian Berzin, Co-Founder & CEO of Thea Energy.
What if we could build a fusion reactor that runs continuously—without the instability issues that have plagued the field for years?
Brian Berzin is the Co-Founder and CEO of Thea Energy (https://thea.energy/), a next-generation fusion company focused on advancing stellarator technology—one of the most promising but historically underexplored approaches to magnetic confinement fusion.
Brian brings a unique combination of deep technical and financial expertise, with a background spanning electrical engineering, venture capital, private equity, and investment banking.
Prior to founding Thea Energy, Brian served as Vice President of Strategy at General Fusion, where he helped shape commercialization strategy and led engagement with global capital markets during a pivotal period for privately funded fusion.
Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.
An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope—and led to the discovery of new behavior in the forces that hold matter together.
In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC)—a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely glanced by each other.
Scientists in Geneva took some antiprotons out for a spin—a very delicate one—in a truck, in a never-tried-before test drive that has been deemed a success.
If this so-called antimatter had come into contact with actual matter, even for a fraction of an instant, it would have been annihilated in a quick flash of energy. So experts at the European Organization for Nuclear Research, known as CERN, had to be extra careful when they took 92 antiprotons on the road for a short ride on Tuesday.
The antiprotons were suspended in a vacuum inside a specially designed box and held in place by supercooled magnets.
As part of the JCI's Review Series on Neurodegeneration, Olivia Gautier, Thao P. Nguyen, and Aaron D. Gitler explore the molecular basis for selective neuronal vulnerability and degeneration and summarize recent advances and applications of single-cell genomic approaches (https://doi.org/10.1172/JCI199841).
How do we decide between single-cell and single-nucleus sequencing? The choice depends on sample type and biological application. Single-cell sequencing is typically applied to fresh, readily dissociable tissues or cultured cells to study intact cell populations. Because it captures both cytoplasmic and nuclear transcripts, scRNA-seq provides a comprehensive view of cellular gene expression. However, tissue dissociation can induce stress-related transcriptional artifacts and introduce substantial cell-type bias. Large or fragile neurons are often lost during dissociation, whereas smaller cell types, such as astrocytes and oligodendrocytes, tend to be overrepresented. In contrast, single-nucleus sequencing is commonly used for frozen samples or for tissues that are difficult to dissociate, including the brain and spinal cord. Although fresh or fresh-frozen samples are typically used, snRNA-seq is compatible with formalin-fixed, paraffin-embedded (FFPE) samples, enabling the analysis of archived human specimens. A key limitation is that snRNA-seq does not capture cytoplasmic transcripts and is therefore biased toward nuclear, often premature, mRNA species.
Spatial transcriptomics does not require tissue dissociation and enables examination of cellular transcriptomes within their native tissue niches. Some spatial transcriptomic technologies are now compatible with FFPE samples, allowing analyses of preserved clinical specimens along with fixed-frozen and fresh-frozen samples. These technologies can be broadly classified into two main categories: imaging-based and sequencing-based (Figure 2B). Imaging-based approaches, like multiplexed error-robust fluorescence in situ hybridization (MERFISH), spatially resolved transcript amplicon readout mapping (STARmap), and 10x Genomics Xenium, rely on probe hybridization and multiplexed imaging to detect and visualize transcripts at high spatial resolution, often achieving single-cell or even subcellular resolution (17, 18). Although whole-transcriptome measurements are possible, MERFISH typically targets predefined gene panels due to the constraints of iterative hybridization and imaging. In contrast, sequencing-based approaches, including NanoString GeoMx and 10x Genomics Visium, capture RNA on spatially barcoded tissue slides or nanobeads followed by next-generation sequencing. These methods generally recover a broader range of transcripts than imaging-based approaches but, in most cases, do not yet achieve true single-cell resolution. Instead, they measure gene expression within spatial “spots” that encompass multiple cells and therefore rely on computational deconvolution to infer cell-type composition. Newer spatial transcriptomic methods, like spatial enhanced resolution omics sequencing (Stereo-seq) and reverse-padlock amplicon-encoding fluorescence in situ hybridization (RAEFISH), are approaching single-cell and single-molecule resolution (19–21).
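As a loose illustration of the deconvolution step mentioned above (not tied to Visium, GeoMx, or any specific tool; the signature matrix, spot vector, and all values here are synthetic), one common formulation fits a spot's measured expression as a non-negative mixture of reference cell-type signatures:

```python
# Hypothetical sketch of spot deconvolution: express a spot's expression as a
# non-negative mixture of reference cell-type signatures via NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_cell_types = 200, 5

# Toy reference signature matrix (genes x cell types), e.g., averaged from
# single-cell data, and a synthetic spot mixing three of the five cell types.
S = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_cell_types))
true_fractions = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y = S @ true_fractions + rng.normal(scale=0.05, size=n_genes)

# Non-negative least squares: find weights w >= 0 minimizing ||S @ w - y||.
w, _ = nnls(S, y)
fractions = w / w.sum()  # normalize to cell-type proportions

print("estimated cell-type fractions:", np.round(fractions, 2))
```

Dedicated deconvolution tools layer marker-gene selection, normalization, and uncertainty estimation on top of this basic least-squares idea.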
In this Review, we summarize recent advances and applications of single-cell genomics approaches to study neurodegenerative disorders, including Alzheimer disease (AD), Parkinson disease (PD), amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington disease (HD). We focus on how these approaches provide insight into the unique vulnerabilities of specific neuronal populations, define novel disease-associated cellular states, and reveal contributions of non-neuronal cells to disease pathogenesis. We then look to the future, envisioning how these technologies will empower genetic screens to uncover modifiers of neurodegeneration and new therapeutic targets.
Nuclear isomers are crucial probes for studying the structure of nuclei. Unlike chemical isomers—which have the same chemical formula but different arrangements of atoms—nuclear isomers are nuclei that exist in a long-lived and relatively stable excited state.
Normally, an atomic nucleus resides in its lowest-energy state, known as the ground state. Under external perturbations, such as nucleus-nucleus collisions, however, a nucleus can be excited to a higher-energy state.
While most excited nuclear states are extremely short-lived and rapidly decay back to the ground state, some nuclei remain “trapped” in an excited state for a remarkably long time. Such isomeric states help reveal the structure of the nucleus because of their high sensitivity to the underlying shell structure as well as to changes in single-particle levels.
At Argonne National Laboratory, researchers are trading in old-school approximations for raw supercomputing power, proving that the secret to a safer carbon-free future lies in mastering the math of chaos.
Researchers are advancing nuclear safety by using high-performance computing to model turbulent flow — the chaotic movement of fluids and gases that governs heat transfer and gas mixing within a reactor.
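For context, and not specific to the Argonne simulations, the incompressible Navier-Stokes equations are the standard starting point that such turbulence calculations discretize; here $\mathbf{u}$ is the fluid velocity, $p$ the pressure, $\rho$ the density, and $\nu$ the kinematic viscosity:

$$
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0 .
$$

Turbulence arises because the nonlinear term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ couples motion across a vast range of scales, which is why resolving it faithfully, rather than approximating it, demands supercomputing power.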