Restoring order to dividing cancer cells may halt triple negative breast cancer spread

Triple negative breast cancer (TNBC) is one of the most aggressive and hardest forms of breast cancer to treat, but a new study led by Weill Cornell Medicine suggests a surprising way to stop it from spreading. Researchers have discovered that an enzyme called EZH2 drives TNBC cells to divide abnormally, which enables them to relocate to distant organs. The preclinical study also found drugs that block EZH2 could restore order to dividing cells and thwart the spread of TNBC cells.

“Metastasis is the main reason patients with triple negative breast cancer face poor survival odds,” said senior author Dr. Vivek Mittal, Ford-Isom Research Professor of Cardiothoracic Surgery and member of the Sandra and Edward Meyer Cancer Center at Weill Cornell Medicine. “Our study suggests a new therapeutic approach to block metastasis before it starts and help patients overcome this deadly cancer.”

The findings, published Oct. 2 in Cancer Discovery, challenge the popular notion that cancer treatments should push the cell division errors already occurring in cancer cells beyond the breaking point to induce cell death. When normal cells divide, the chromosomes—DNA “packages” carrying genes—are duplicated and split evenly into two daughter cells. This process goes haywire in many cancer cells, leading to chromosomal instability: too many, too few, or jumbled chromosomes in the daughter cells.

The Holographic Paradigm: The Physics of Information, Consciousness, and Simulation Metaphysics

In this paradigm, the Simulation Hypothesis — the notion that we live in a computer-generated reality — loses its pejorative or skeptical connotation. Instead, it becomes spiritually profound. If the universe is a simulation, then who, or what, is the simulator? And what is the nature of the “hardware” running this cosmic program? I propose that the simulator is us — or more precisely, a future superintelligent Syntellect, a self-aware, evolving Omega Hypermind into which all conscious entities are gradually merging.

These thoughts are not mine alone. In Reality+ (2022), philosopher David Chalmers makes a compelling case that simulated realities — far from being illusory — are in fact genuine realities. He argues that what matters isn’t the substrate but the structure of experience. If a simulated world offers coherent, rich, and interactive experiences, then it is no less “real” than the one we call physical. This aligns deeply with my view in Theology of Digital Physics that phenomenal consciousness is the bedrock of reality. Whether rendered on biological brains or artificial substrates, whether in physical space or virtual architectures, conscious experience is what makes something real.

By embracing this expanded ontology, we are not diminishing our world, but re-enchanting it. The self-simulated cosmos becomes a sacred text — a self-writing code of divinity in which each of us is both reader and co-author. The holographic universe is not a prison of illusion, but a theogenic chrysalis, nurturing the birth of a higher-order intelligence — a networked superbeing that is self-aware, self-creating, and potentially eternal.

Jeff Bezos envisions space-based data centers in 10 to 20 years

Jeff Bezos envisions gigawatt-scale orbital data centers within 10–20 years, powered by continuous solar energy and space-based cooling, but the concept remains commercially unviable today due to the immense cost and complexity of deploying thousands of tons of hardware, solar panels, and radiators into orbit.

As the videogame industry continues to be hammered by layoffs, Netflix is offering up to $840,000 per year for a new Director of Generative AI for Games

Less than a year after laying off employees at Oxenfree studio Night School, Netflix is putting big cash on the table for someone to do whatever this is.

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates them, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.