
Neural variability in the default mode network compresses with increasing belief precision during Bayesian inference

The (changing) belief distribution over possible environmental states may be represented in ventromedial prefrontal cortex (vmPFC). Several lines of evidence point to a general function of this brain region in maintaining a compact internal model of the environment (i.e., a state belief) by extracting information across individual experiences to guide goal-directed behavior, such as the value of different choice options (Levy and Glimcher 2012; Averbeck and O’Doherty 2022; Klein-Flügge et al. 2022), cognitive maps (Boorman et al. 2021; Klein-Flügge et al. 2022; Schuck et al. 2016; Wilson et al. 2014), or schemas (Gilboa and Marlatte 2017; Bein and Niv 2025). Studies employing probabilistic learning tasks furthermore show that neural activity in vmPFC also reflects uncertainty about external states, which has been linked to adaptive exploration behavior and learning-rate adjustments (Karlsson et al. 2012; McGuire et al. 2014; Starkweather et al. 2018; Domenech et al. 2020; Trudel et al. 2021). Notably, Karlsson et al. (2012) found that trial-to-trial neural population spiking variability in the medial PFC of mice peaked around transitions from exploitation to exploration periods following changes in reward structure, when state uncertainty is highest, which may reflect more variable belief states. While ours is the first study to link human brain signal variability to belief precision, a previous study by Grady and Garrett (2018) observed increased BOLD signal variability while subjects performed externally- versus internally-oriented tasks, an effect spanning the vmPFC and other nodes of the canonical default mode network (DMN; Yeo et al. 2011). Since learning an abstract world model reflects a shift towards an internal cognitive mode, we tentatively expected brain signal variability compression over the course of learning to be (partly) expressed in the vmPFC.

We assume that uncertainty-related neural dynamics unfold on a fast temporal scale, as suggested by electrophysiological evidence in humans and nonhuman animals (Berkes et al. 2011; Palva et al. 2011; Rouhinen et al. 2013; Honkanen et al. 2015; Orbán et al. 2016; Grundy et al. 2019). However, within-trial dynamics should also affect neural variability across independent learning trials (see Fig. 1). A more variable system should have a higher probability of being in a different state every time it is (sparsely) sampled. Conversely, when a system is in a less stochastic state, the within-trial variance is expected to shrink, yielding less across-trial variance at the same time. This argument aligns with work by Orbán et al. (2016), who showed that a computational model of the sampling account of sensory uncertainty captures empirically observed across-trial variability of neural population responses in primary visual cortex. For human research, this means that neuroimaging methods with slower sampling rates, such as functional MRI (fMRI), may be able to approximate within-trial neural variability from variability observed across trials. Indeed, the majority of previous fMRI studies reporting within-region, within-subject modulation of brain signal variability by task demand have exclusively employed block designs, necessitating that the main source of variability be between- rather than within-trial (Garrett et al. 2013; Grady and Garrett 2014; Garrett et al. 2015; Armbruster-Genç et al. 2016).
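
To make this intuition concrete, the following toy simulation (not the authors' analysis; the random-walk signal, the choice of sampling the final time point of each trial, and the function name across_trial_sd are illustrative assumptions) shows that increasing within-trial variability also inflates the variability of sparse, one-sample-per-trial measurements across trials.

```python
# Toy simulation: a signal that fluctuates more within a trial also shows
# more variance across trials when it is sampled sparsely (one sample per
# trial), analogous to slow fMRI sampling of fast neural dynamics.
import numpy as np

rng = np.random.default_rng(0)

def across_trial_sd(within_trial_sd, n_trials=200, n_timepoints=500):
    """Simulate n_trials trials of a noisy random-walk signal and return the
    SD of one sparse sample taken per trial (here: the last time point)."""
    sparse_samples = []
    for _ in range(n_trials):
        # within-trial dynamics: step size sets moment-to-moment variability
        signal = np.cumsum(rng.normal(0.0, within_trial_sd, n_timepoints))
        sparse_samples.append(signal[-1])  # one sample per trial
    return np.std(sparse_samples)

print("low within-trial variability :", across_trial_sd(0.01))
print("high within-trial variability:", across_trial_sd(0.05))
```

Running this prints a markedly larger across-trial SD for the more variable signal, which is the logic by which across-trial SDBOLD may serve as a proxy for within-trial neural variability.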

In the current study, we acquired fMRI while participants performed a “marble task”. In this task, participants had to learn the probability of drawing a blue marble from an unseen jar (i.e., urn) based on five samples (i.e., draws from the urn with replacement). In a Bayesian inference framework, the jar’s marble ratio can be considered a latent state that participants must infer. We hypothesized that (i) across-trial variability in the BOLD response (SDBOLD) would compress over the sampling period, thus mirroring the reduction in state uncertainty, and that (ii) subjects with greater SDBOLD compression would show smaller estimation errors of the jars’ marble ratios, as an index of more efficient belief updating. A secondary aim of the current study was to directly compare the effect of uncertainty on SDBOLD with a more standard General Linear Modeling (GLM) approach, which looks for correlations between average BOLD activity and uncertainty. This links our findings directly to previous investigations of neural uncertainty correlates, which disregarded the magnitude of BOLD variability (Huettel et al. 2005; Grinband et al. 2006; Behrens et al. 2007; Bach et al. 2011; Bach and Dolan 2012; Badre et al. 2012; Vilares et al. 2012; Payzan-LeNestour et al. 2013; McGuire et al. 2014; Michael et al. 2015; Meyniel and Dehaene 2017; Nassar et al. 2019; Meyniel 2020; Tomov et al. 2020; Trudel et al. 2021; Walker et al. 2023). We hypothesized (iii) that SDBOLD would uniquely predict inference accuracy compared to these standard neural uncertainty correlates.
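
A minimal sketch of the Bayesian framing of the marble task is given below, assuming a Beta-Binomial observer with a flat prior; the study's actual model and the example draw sequence are illustrative assumptions, not taken from the paper. It shows how the posterior standard deviation over the jar's blue-marble ratio (the inverse of belief precision) shrinks with each of the five draws, which is the reduction in state uncertainty that hypothesis (i) predicts SDBOLD to mirror.

```python
# Sketch of Bayesian belief updating in the marble task under an assumed
# Beta-Binomial model: each draw (blue = 1, not blue = 0) updates a Beta
# posterior over the jar's blue-marble ratio, and the posterior SD shrinks.
from scipy.stats import beta

a, b = 1.0, 1.0                 # flat Beta(1, 1) prior over the ratio
draws = [1, 0, 1, 1, 0]         # example sequence of five draws with replacement

for i, d in enumerate(draws, start=1):
    a += d                      # accumulate count of blue draws
    b += 1 - d                  # accumulate count of non-blue draws
    posterior = beta(a, b)
    print(f"after draw {i}: mean = {posterior.mean():.3f}, sd = {posterior.std():.3f}")
```

With this sequence the posterior SD falls from about 0.24 after the first draw to about 0.17 after the fifth, illustrating the within-trial compression of state uncertainty over the sampling period.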

Restoring order to dividing cancer cells may halt triple negative breast cancer spread

Triple negative breast cancer (TNBC) is one of the most aggressive and hardest-to-treat forms of breast cancer, but a new study led by Weill Cornell Medicine suggests a surprising way to stop it from spreading. Researchers have discovered that an enzyme called EZH2 drives TNBC cells to divide abnormally, which enables them to relocate to distant organs. The preclinical study also found that drugs blocking EZH2 could restore order to dividing cells and thwart the spread of TNBC cells.

“Metastasis is the main reason patients with triple negative breast cancer face poor survival odds,” said senior author Dr. Vivek Mittal, Ford-Isom Research Professor of Cardiothoracic Surgery and member of the Sandra and Edward Meyer Cancer Center at Weill Cornell Medicine. “Our study suggests a new therapeutic approach to block metastasis before it starts and help patients overcome this deadly cancer.”

The findings, published Oct. 2 in Cancer Discovery, challenge the popular notion that cancer treatments should push the cell division errors already occurring in cancer cells beyond the breaking point to induce cell death. When normal cells divide, the chromosomes—DNA “packages” carrying genes—are duplicated and split evenly into two daughter cells. This process goes haywire in many cancer cells, leading to chromosomal instability: too many, too few, or jumbled chromosomes in the daughter cells.

The Holographic Paradigm: The Physics of Information, Consciousness, and Simulation Metaphysics

In this paradigm, the Simulation Hypothesis — the notion that we live in a computer-generated reality — loses its pejorative or skeptical connotation. Instead, it becomes spiritually profound. If the universe is a simulation, then who, or what, is the simulator? And what is the nature of the “hardware” running this cosmic program? I propose that the simulator is us — or more precisely, a future superintelligent Syntellect, a self-aware, evolving Omega Hypermind into which all conscious entities are gradually merging.

These thoughts are not mine alone. In Reality+ (2022), philosopher David Chalmers makes a compelling case that simulated realities — far from being illusory — are in fact genuine realities. He argues that what matters isn’t the substrate but the structure of experience. If a simulated world offers coherent, rich, and interactive experiences, then it is no less “real” than the one we call physical. This aligns deeply with my view in Theology of Digital Physics that phenomenal consciousness is the bedrock of reality. Whether rendered on biological brains or artificial substrates, whether in physical space or virtual architectures, conscious experience is what makes something real.

By embracing this expanded ontology, we are not diminishing our world, but re-enchanting it. The self-simulated cosmos becomes a sacred text — a self-writing code of divinity in which each of us is both reader and co-author. The holographic universe is not a prison of illusion, but a theogenic chrysalis, nurturing the birth of a higher-order intelligence — a networked superbeing that is self-aware, self-creating, and potentially eternal.

Jeff Bezos envisions space-based data centers in 10 to 20 years

Jeff Bezos envisions gigawatt-scale orbital data centers within 10–20 years, powered by continuous solar energy and space-based cooling, but the concept remains commercially unviable today due to the immense cost and complexity of deploying thousands of tons of hardware, solar panels, and radiators into orbit.
