
Topsicle: a method for estimating telomere length from whole genome long-read sequencing data

Long read sequencing technology (advanced by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (Nanopore)) is revolutionizing the genomics field [43], and it has major potential to become a powerful tool for investigating telomere length variation within populations and between species. Reads from long read sequencing platforms are orders of magnitude longer than those from short read platforms (tens of kilobase pairs versus 100–300 bp). These long reads have greatly aided in resolving complex and highly repetitive regions of the genome [44], and near gapless genome assemblies (also known as telomere-to-telomere assemblies) have been generated for multiple organisms [45, 46]. Long read sequences can also be used for estimating telomere length, since a whole genome sequencing library generated on a long read platform contains reads that span the entire telomere and subtelomere region. Computational methods can then be developed to determine the telomere–subtelomere boundary and use it to estimate telomere length. As an example, telomere-to-telomere assemblies have been used for estimating telomere length by analyzing the sequences at the start and end of each gapless chromosome assembly [47,48,49,50]. But generating gapless genome assemblies is resource intensive and impractical for estimating telomere lengths across many individuals. Alternatively, methods such as TLD [51], Telogator [52], and TeloNum [53] analyze raw long read sequences to estimate telomere lengths. These methods require a known telomere repeat sequence, but this can be determined through k-mer based analysis [54]. Specialized methods have also been developed to concentrate long reads originating from chromosome ends. These methods attach sequencing adapters complementary to the single-stranded 3′ G-overhang of the telomere, which can then be used to selectively amplify chromosome ends for long read sequencing [55,56,57,58]. While these methods can enrich telomeric long reads, they require protocol optimization (e.g., designing the adapter sequence to target the G-overhang), and they are difficult to apply to organisms with naturally blunt-ended telomeres [59, 60].

An explosion of long read sequencing data has been generated for many organisms across the animal and plant kingdoms [61, 62]. A computational method that can use this abundant long read sequencing data to estimate telomere length with minimal requirements would be a powerful toolkit for investigating the biology of telomere length variation. But so far, such a method is not available, and implementing one requires addressing two major algorithmic considerations before it can be widely used across many different organisms. The first consideration is the ability to analyze the diverse telomere sequence variation across the tree of life. All vertebrates share an identical telomere repeat motif, TTAGGG [63], and most previous long read sequencing based computational methods were designed for analyzing human genomic datasets, with algorithms optimized for the TTAGGG telomere motif. But the telomere repeat motif is highly diverse across the animal and plant kingdoms [64,65,66,67], and some fungal and plant species even utilize a mix of repeat motifs, resulting in telomeres with complex sequence structure [64, 68, 69]. A new computational method would need to accommodate these diverse telomere repeat motifs, especially in inherently noisy and error-prone long read sequencing data [70]. With recent improvements in sequencing chemistry and technology (HiFi sequencing for PacBio and the Q20+ chemistry kit for Nanopore), error rates have been substantially reduced to around 1% [71, 72]. But even at this low error rate, a telomeric region that is several kilobase pairs long can harbor substantial erroneous sequence across the read [73], hindering identification of the correct telomere–subtelomere boundary. In addition, long read sequencers are especially error prone in repetitive homopolymer sequences [74,75,76], and the GT-rich microsatellite telomere sequences are predicted to be a particularly erroneous region for long read sequencing. The second algorithmic consideration relates to identifying the telomere–subtelomere boundary. Prior long read sequencing based methods [51, 52] have used sliding windows to calculate summary statistics and a threshold to determine the boundary between the telomere and subtelomere. Sliding window and threshold based analyses are commonly used in genome analysis, but they place the burden on the user to determine an appropriate cutoff, which for telomere length estimation may differ depending on the sequenced organism. In addition, threshold based sliding window scans can inflate both false positive and false negative results [77,78,79,80,81,82] if the cutoff is improperly chosen.
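To make the cutoff sensitivity concrete, the following is a minimal Python sketch of a generic sliding-window, threshold-based boundary call; it is an illustration rather than code from any of the cited tools, and the motif, window size, step, and cutoff are arbitrary placeholder values:

```python
# Generic sliding-window, threshold-based boundary call (illustrative sketch only;
# not the algorithm of TLD, Telogator, or TeloNum).

def telomere_fraction(seq: str, motif: str = "TTAGGG") -> float:
    """Fraction of a window covered by non-overlapping copies of the telomere motif."""
    if not seq:
        return 0.0
    return seq.count(motif) * len(motif) / len(seq)

def boundary_by_threshold(read: str, motif: str = "TTAGGG",
                          window: int = 100, step: int = 50,
                          cutoff: float = 0.4) -> int:
    """Return the first position where the windowed telomere-repeat fraction drops
    below `cutoff`, taken as the putative telomere-subtelomere boundary.
    Assumes the telomere is at the start of the read."""
    for start in range(0, max(len(read) - window, 0) + 1, step):
        if telomere_fraction(read[start:start + window], motif) < cutoff:
            return start
    return len(read)  # no drop found: the read may be entirely telomeric

# The boundary (and hence the length estimate) hinges on `cutoff`: set it too high
# and noisy but genuine telomere repeats are truncated; set it too low and the call
# extends into the subtelomere.
```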

Here, we introduce Topsicle, a computational method that uses a novel strategy to estimate telomere lengths from raw long reads across an entire whole genome sequencing library. Methodologically, Topsicle iterates through different substring sizes of the telomere repeat sequence (i.e., telomere k-mers) and different phases of each telomere k-mer to summarize the telomere repeat content of each sequencing read. These k-mer based summary statistics of telomere repeats are then used to select long reads originating from telomeric regions. Topsicle uses those putative telomeric reads to estimate telomere length by determining the telomere–subtelomere boundary through a binary segmentation change point detection analysis [83]. We demonstrate the high accuracy of Topsicle through simulations and apply our new method to long read sequencing datasets from three evolutionarily diverse plant species (A. thaliana, maize, and Mimulus) and human cancer cell lines. We believe Topsicle will enable high-resolution explorations of telomere length in more species and a broader understanding of the genetics and evolution underlying telomere length variation.
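As a rough illustration of these two ingredients (phased telomere k-mers summarizing repeat content along a read, and a single binary-segmentation change point marking the telomere–subtelomere boundary), the Python sketch below is a simplified stand-in rather than Topsicle's actual implementation; the k-mer size, window size, and squared-error segmentation cost are illustrative assumptions:

```python
# Simplified sketch of phased telomere k-mer profiling plus binary segmentation
# (illustration only; Topsicle's real summary statistics and read selection differ).
import numpy as np

def phased_kmers(motif: str, k: int) -> list[str]:
    """All k-length substrings of the circular telomere motif, one per phase."""
    doubled = motif * ((k // len(motif)) + 2)
    return [doubled[p:p + k] for p in range(len(motif))]

def kmer_profile(read: str, motif: str = "TTAGGG", k: int = 12,
                 window: int = 200) -> np.ndarray:
    """Per-window counts of telomeric k-mers (any phase) along a read."""
    kmers = set(phased_kmers(motif, k))
    counts = []
    for start in range(0, len(read), window):
        win = read[start:start + window]
        counts.append(sum(win[i:i + k] in kmers for i in range(len(win) - k + 1)))
    return np.asarray(counts, dtype=float)

def binary_segmentation_boundary(signal: np.ndarray) -> int:
    """Single change point: the split minimizing the summed within-segment
    sum of squared deviations (a basic binary segmentation step)."""
    best_idx, best_cost = 0, np.inf
    for i in range(1, len(signal)):
        left, right = signal[:i], signal[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

def estimate_telomere_length(read: str, motif: str = "TTAGGG",
                             k: int = 12, window: int = 200) -> int:
    """Telomere length in bp, assuming the telomere lies at the start of the read."""
    profile = kmer_profile(read, motif, k, window)
    return binary_segmentation_boundary(profile) * window
```

A full method would additionally decide, from the same k-mer summaries, which reads are telomeric in the first place and would aggregate boundary estimates across those reads, as described above.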

Chip-based phonon splitter brings hybrid quantum networks closer to reality

Researchers have created a chip-based device that can split phonons—tiny packets of mechanical vibration that can carry information in quantum systems. By filling a key gap, this device could help connect various quantum devices via phonons, paving the way for advanced computing and secure quantum communication.

“Phonons can serve as on-chip quantum messages that connect very different quantum systems, enabling hybrid networks and new ways to process in a compact, scalable format,” said research team leader Simon Gröblacher from Delft University of Technology in the Netherlands.

“To build practical phononic circuits requires a full set of chip-based components that can generate, guide, split and detect individual quanta of vibrations. While sources and waveguides already exist, a compact splitter was still missing.”

What was NeXT

For the special tribute issue of BusinessWeek that is coming out tomorrow, I tried to honor Steve Jobs in a small way with my memories of the NeXT days. Here is the version I wrote (the print edition has several sentences edited out) with some italics added to summary sections:

The book of Jobs, a parable of passion

Steve Jobs was intensely passionate about his products, effusing an infectious enthusiasm that stretched from one-on-one recruiting pitches to auditorium-scale demagoguery. It all came so naturally for him because he was in love, living a Shakespearean sonnet, with tragic turns, an unrequited era of exile, and ultimately the triumphant reunion. At the personal and corporate levels, it is the archetype of the Hero’s Journey turned hyperbole.

The NeXT years were torture for him, as he was forcibly estranged from his true love. When we went on walks, or if we had a brief time in the hallway, he would steer the conversation to a plaintive question: “What should Apple do?” As if he were an exile on Elba, Jobs always wanted to go home. “Apple should buy NeXT.” It seemed outrageous to me at the time; what CEO of Apple would ever invite Jobs back and expect to keep their job for long? The Macintosh on his desk at NeXT had the striped Apple logo stabbed out, a memento of anguish scratched deep into plastic.

The NeXTSTEP operating system, object-oriented frameworks, and Interface Builder were beautiful products, but they were stuck in what Jobs considered the pedestrian business of enterprise IT sales. Selling was boring. Where were the masses? The NeXTSTEP step-parents sold to a crowd of muggles. The magic seemed misspent. Jobs was still masterful, relating stories of how MCI saved so much time and money developing their systems on NeXTSTEP. He persuaded the market research firms IDC and Dataquest that a new computer segment should be added to the pantheon of mainframe, mini, workstation, and PC. The new market category would be called the “PC/Workstation,” and lo and behold, by excluding pure PCs and pure workstations, NeXT became No. 1 in market share. Leadership fabricated out of thin air.

During this time, corporate partners came to appreciate Steve’s enthusiasm as the Reality Distortion Field. Sun Microsystems went so far as to have a policy that no contract could be agreed to while Steve was in the room. They needed to physically remove themselves from the mesmerizing magic to complete the negotiation. But Jobs was sleepwalking through backwaters of stodgy industries. And he was agitated by Apple’s plight in the press. Jobs reflected a few years later, “I can’t tell you how many times I heard the word ‘beleaguered’ next to ‘Apple.’ It was painful. Physically painful.”

When the miraculous did happen, and Apple bought NeXT, Jobs was reborn. I recently spoke with Bill Gates about passion: “Most people lose that fire in the belly as they age. Except Steve Jobs. He still had it, and he just kept going. He was not a programmer, but he had hit after hit.” Gates marvels at the magic to this day.

Parsimony

Jobs was the master architect of Apple design. Often criticized for bouts of micromanagement and aesthetic activism, Steve’s spartan sensibilities accelerated the transition from hardware to software. By dematerializing the user interface well ahead of what others thought possible, Apple was able to shift the clutter of buttons and hardware to the flexible and much more lucrative domain of software and services. The physical thing was minimized to a mere vessel for code. Again, this came naturally to Jobs, as it is how he lived his life, from sparse furnishings at home, to sartorial simplicity, to his war on buttons, from the mouse to the keyboard to the phone. Jobs felt a visceral agitation from the visual noise of imperfection.

When Apple first demonstrated the mouse, Bill Gates could not believe it was possible to achieve such smooth tracking in software. Surely, there was a dedicated hardware solution inside.

When I invited Jobs to take some time away from NeXT to speak to a group of students, he sat in the lotus position in front of my fireplace and wowed us for three hours, as if leading a séance. But then I asked him if he would sign my Apple Extended Keyboard, where I already had Woz’s signature. He burst out: “This keyboard represents everything about Apple that I hate. It’s a battleship. Why does it have all these keys? Do you use this F1 key? No.” And with his car keys he pried it right off. “How about this F2 key?” Off they all went. “I’m changing the world, one keyboard at a time,” he concluded in a calmer voice.

And he dove deep into all elements of design, even the details of retail architecture for the Apple store (he’s a named patent holder on architectural glass used for the stairways). On my first day at NeXT, as we walked around the building, my colleagues shared in hushed voices that Jobs personally chose the wood flooring and various appointments. He even specified the outdoor sprinkler system layout.

I witnessed his attention to detail during a marketing reorganization meeting. The VP of marketing read Jobs’s e-mailed reaction to the new org chart. Jobs simply requested that the charts be reprinted with the official corporate blue and green colors, and provided the Pantone numbers to remove any ambiguity. Shifted color space was like a horribly distorted concerto to his senses. And this particular marketing VP was clearly going down.

People

Jobs’s estimation of people tended to polarize to the extremes, a black-and-white thinking trait common to charismatic leaders. Marketing execs at NeXT especially rode the “hero-shithead rollercoaster,” as it was called. The entire company knew where they stood in Jobs’s eyes, so when that VP in the reorg meeting plotted his rollercoaster path on the white board, the room nodded silently in agreement. He lasted one month. But Jobs also attracted the best people and motivated them to do better than their best, rallying teams to work in a harmony they may never find elsewhere in their careers.

He remains my archetype for the charismatic visionary leader, with his life’s song forever woven into the fabric of Apple. Jobs now rests with the sublime satisfaction of symbolic immortality.

—

It was daunting to reflect on such a great man, from a refined set of exposures… but he was my childhood hero, and I convinced him to let me do a study of his management style while a lowly employee at NeXT. Nevertheless, I wondered if I captured his essence in those years of exile from Apple. So, I was floored when the BW editor wrote back “I think this piece is one of the best things I have ever read about Steve.” :))

Harvard researchers hail quantum computing breakthrough with machine that can run for two hours — atomic loss quashed by experimental design, systems that can run forever just 3 years away

That’s a more than 55,000% increase in operational time.

Neural variability in the default mode network compresses with increasing belief precision during Bayesian inference

The (changing) belief distribution over possible environmental states may be represented in ventromedial prefrontal cortex (vmPFC). Several lines of evidence point to a general function of this brain region in maintaining a compact internal model of the environment (i.e., state belief) by extracting information across individual experiences to guide goal-directed behavior, such as the value of different choice options (Levy and Glimcher 2012; Averbeck and O’Doherty 2022; Klein-Flügge et al. 2022), cognitive maps (Boorman et al. 2021; Klein-Flügge et al. 2022; Schuck et al. 2016; Wilson et al. 2014), or schemas (Gilboa and Marlatte 2017; Bein and Niv 2025). Studies employing probabilistic learning tasks furthermore show that neural activity in vmPFC also reflects uncertainty about external states, which has been linked to adaptive exploration behavior and learning-rate adjustments (Karlsson et al. 2012; McGuire et al. 2014; Starkweather et al. 2018; Domenech et al. 2020; Trudel et al. 2021). Notably, Karlsson et al. (2012) found that trial-to-trial neural population spiking variability in the medial PFC of mice peaked around transitions from exploitation to exploration periods following changes in reward structure, when state uncertainty is highest, which may reflect more variable belief states. While ours is the first study to link human brain signal variability to belief precision, a previous study by Grady and Garrett (2018) observed increased BOLD signal variability while subjects performed externally- versus internally-oriented tasks, an effect spanning the vmPFC and other nodes of the canonical default mode network (DMN; Yeo et al. 2011). Since learning an abstract world model reflects a shift towards an internal cognitive mode, we tentatively expected brain signal variability compression over the course of learning to be (partly) expressed in the vmPFC.

We assume that uncertainty-related neural dynamics unfold on a fast temporal scale, as suggested by electrophysiological evidence in human and nonhuman animals (Berkes et al. 2011; Palva et al. 2011; Rouhinen et al. 2013; Honkanen et al. 2015; Orbán et al. 2016; Grundy et al. 2019). However, within-trial dynamics should also affect neural variability across independent learning trials (see Fig. 1). A more variable system should have a higher probability of being in a different state every time it is (sparsely) sampled. Conversely, when a system is in a less stochastic state, the within-trial variance is expected to shrink, yielding less across-trial variance at the same time. This argument aligns with work by Orbán et al. (2016), who showed that a computational model of the sampling account of sensory uncertainty captures empirically observed across-trial variability of neural population responses in primary visual cortex. For human research, this means that neuroimaging methods with slower sampling rates, such as functional MRI (fMRI), may be able to approximate within-trial neural variability from variability observed across trials. Indeed, the majority of previous fMRI studies reporting within-region, within-subject modulation of brain signal variability by task demand have exclusively employed block designs, necessitating that the main source of variability be between- rather than within-trial (Garrett et al. 2013; Grady and Garrett 2014; Garrett et al. 2015; Armbruster-Genç et al. 2016).
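The sampling argument can be made explicit with a toy simulation (an illustration with hypothetical values, not an analysis from the study): holding stable trial-to-trial differences fixed, increasing the moment-to-moment (within-trial) noise of a simulated signal also increases the variability of sparse, one-sample-per-trial readouts.

```python
# Toy simulation: within-trial noise propagates into across-trial variability
# when each trial is sampled sparsely (hypothetical values throughout).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_timepoints = 200, 500
between_trial_sd = 1.0  # stable trial-to-trial differences, held fixed

def across_trial_sd(within_trial_sd: float) -> float:
    """SD across trials of a single sparse sample taken from each trial."""
    trial_means = rng.normal(0.0, between_trial_sd, n_trials)
    # Within each trial the signal fluctuates moment to moment around its trial mean.
    signal = trial_means[:, None] + rng.normal(0.0, within_trial_sd,
                                               size=(n_trials, n_timepoints))
    # Sparse sampling: one random time point per trial (roughly one fMRI volume).
    samples = signal[np.arange(n_trials), rng.integers(0, n_timepoints, n_trials)]
    return samples.std()

for sd in (0.5, 1.0, 2.0):
    print(f"within-trial SD {sd:.1f} -> across-trial SD ~ {across_trial_sd(sd):.2f}")
```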

In the current study, we acquired fMRI while participants performed a “marble task”. In this task, participants had to learn the probability of drawing a blue marble from an unseen jar (i.e., urn) based on five samples (i.e., draws from the urn with replacement). In a Bayesian inference framework, the jar’s marble ratio can be considered a latent state that participants must infer. We hypothesized that (i) across-trial variability in the BOLD response (SDBOLD) would compress over the sampling period, thus mirroring the reduction in state uncertainty, and that (ii) subjects with greater SDBOLD compression would show smaller estimation errors of the jars’ marble ratios, as an index of more efficient belief updating. A secondary aim of the current study was to directly compare the effect of uncertainty on SDBOLD with a more standard General Linear Modeling (GLM) approach, which looks for correlations between average BOLD activity and uncertainty. This links our findings directly to previous investigations of neural uncertainty correlates, which disregarded the magnitude of BOLD variability (Huettel et al. 2005; Grinband et al. 2006; Behrens et al. 2007; Bach et al. 2011; Bach and Dolan 2012; Badre et al. 2012; Vilares et al. 2012; Payzan-LeNestour et al. 2013; McGuire et al. 2014; Michael et al. 2015; Meyniel and Dehaene 2017; Nassar et al. 2019; Meyniel 2020; Tomov et al. 2020; Trudel et al. 2021; Walker et al. 2023). We hypothesized (iii) that SDBOLD would uniquely predict inference accuracy compared to these standard neural uncertainty correlates.
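For intuition, the sketch below walks through the Bayesian updating implied by the marble task under a conjugate Beta-Bernoulli model (an assumed simplification; the study’s exact jar ratios and inference model may differ), showing how belief precision about the jar’s blue-marble ratio grows over the five draws:

```python
# Beta-Bernoulli belief updating for a hypothetical sequence of five marble draws
# (illustrative assumption; not the study's actual stimuli or model).
from scipy.stats import beta

draws = [1, 0, 1, 1, 0]   # 1 = blue marble drawn, 0 = other color (hypothetical)
a, b = 1.0, 1.0           # uniform Beta(1, 1) prior over the jar's blue-marble ratio

for i, d in enumerate(draws, start=1):
    a += d                # conjugate update: count of blue draws
    b += 1 - d            # conjugate update: count of non-blue draws
    posterior_sd = beta(a, b).std()
    print(f"after draw {i}: mean belief = {a / (a + b):.2f}, "
          f"belief precision = {1 / posterior_sd ** 2:.1f}")

# Belief precision (inverse posterior variance) rises with each draw; this is the
# quantity hypothesized to track the across-trial compression of SDBOLD.
```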

The Holographic Paradigm: The Physics of Information, Consciousness, and Simulation Metaphysics

In this paradigm, the Simulation Hypothesis — the notion that we live in a computer-generated reality — loses its pejorative or skeptical connotation. Instead, it becomes spiritually profound. If the universe is a simulation, then who, or what, is the simulator? And what is the nature of the “hardware” running this cosmic program? I propose that the simulator is us — or more precisely, a future superintelligent Syntellect, a self-aware, evolving Omega Hypermind into which all conscious entities are gradually merging.

These thoughts are not mine alone. In Reality+ (2022), philosopher David Chalmers makes a compelling case that simulated realities — far from being illusory — are in fact genuine realities. He argues that what matters isn’t the substrate but the structure of experience. If a simulated world offers coherent, rich, and interactive experiences, then it is no less “real” than the one we call physical. This aligns deeply with my view in Theology of Digital Physics that phenomenal consciousness is the bedrock of reality. Whether rendered on biological brains or artificial substrates, whether in physical space or virtual architectures, conscious experience is what makes something real.

By embracing this expanded ontology, we are not diminishing our world, but re-enchanting it. The self-simulated cosmos becomes a sacred text — a self-writing code of divinity in which each of us is both reader and co-author. The holographic universe is not a prison of illusion, but a theogenic chrysalis, nurturing the birth of a higher-order intelligence — a networked superbeing that is self-aware, self-creating, and potentially eternal.

Jeff Bezos envisions space-based data centers in 10 to 20 years

Jeff Bezos envisions gigawatt-scale orbital data centers within 10–20 years, powered by continuous solar energy and space-based cooling, but the concept remains commercially unviable today due to the immense cost and complexity of deploying thousands of tons of hardware, solar panels, and radiators into orbit.

Molecular coating cleans up noisy quantum light

Quantum technologies demand perfection: one photon at a time, every time, all with the same energy. Even tiny deviations in the number or energy of photons can derail devices, threatening the performance of quantum computers that someday could make up a quantum internet.

While this level of precision is difficult to achieve, Northwestern University engineers have developed a novel strategy that makes quantum light sources, which dispense single photons, more consistent, precise and reliable.

In a new study, the team coated an atomically thin semiconductor (tungsten diselenide) with a sheetlike organic molecule called PTCDA. The coating transformed the tungsten diselenide’s behavior—turning noisy signals into clean bursts of single photons. Not only did the coating increase the photons’ spectral purity by 87%, but it also shifted the color of photons in a controlled way and lowered the photon activation energy—all without altering the material’s underlying semiconducting properties.
