
NVIDIA Offers “Vera” CPU as a Standalone Competitor to Intel’s Xeon and AMD’s EPYC Processors

NVIDIA’s push into AI infrastructure now extends beyond GPUs to general-purpose Arm CPUs. The company is introducing its high-performance “Vera” CPUs as a standalone product, marking its first direct entry as a competitor to Intel’s Xeon and AMD’s EPYC server-grade processors. NVIDIA CEO Jensen Huang confirmed the new venture in an interview with Bloomberg, stating, “For the very first time, we’re going to be offering Vera CPUs. Vera is such an incredible CPU. We’re going to offer Vera CPUs as a standalone part of the infrastructure. You can now run your computing stack not only on NVIDIA GPUs but also on NVIDIA CPUs. Vera is completely revolutionary… CoreWeave will have to act quickly if they want to be the first to implement Vera CPUs. We haven’t announced any of our CPU design wins yet, but there will be many.”

The “Vera” CPU is equipped with 88 custom Armv9.2 “Olympus” cores that use Spatial Multithreading, partitioning physical resources so the chip can handle 176 threads. The custom cores support native FP8 processing through a 6x128-bit SVE2 vector implementation, allowing some AI workloads to run directly on the CPU. The chip offers 1.2 TB/s of memory bandwidth and supports up to 1.5 TB of LPDDR5X memory, making it well suited to memory-intensive computing tasks. However, with the CPU now being offered as a standalone solution, it is unclear whether there will be classic memory options such as DDR5 RDIMMs, or whether the CPU will rely solely on SOCAMM LPDDR5X. A second-generation Scalable Coherency Fabric provides 3.4 TB/s of bisection bandwidth, connecting the cores across a unified monolithic die and avoiding the latency penalties common in chiplet architectures. Additionally, NVIDIA has integrated second-generation NVLink Chip-to-Chip technology, delivering up to 1.8 TB/s of coherent chip-to-chip bandwidth.

Advancing regulatory variant effect prediction with AlphaGenome

What makes AlphaGenome special is its versatility. Where older models might only predict how a mutation affects gene activity, AlphaGenome forecasts thousands of biological outcomes simultaneously: whether a variant will alter how DNA folds, change how proteins dock onto genes, disrupt the splicing machinery that edits genetic messages, or modify the histone “spools” that package DNA. It is essentially a universal translator for the genome’s regulatory language.


AlphaGenome is a deep learning model designed to learn the sequence basis of diverse molecular phenotypes from human and mouse DNA (Fig. 1a). It simultaneously predicts 5,930 human or 1,128 mouse genome tracks across 11 modalities covering gene expression (RNA-seq, CAGE and PRO-cap), detailed splicing patterns (splice sites, splice site usage and splice junctions), chromatin state (DNase, ATAC-seq, histone modifications and transcription factor binding) and chromatin contact maps. These span a variety of biological contexts, such as different tissue types, cell types and cell lines (see Supplementary Table 1 for the summary and Supplementary Table 2 for the complete metadata). These predictions are made on the basis of 1 Mb of DNA sequence, a context length designed to encompass a substantial portion of the relevant distal regulatory landscape. For instance, 99% (465 of 471) of validated enhancer–gene pairs fall within 1 Mb (ref. 12).
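
As a rough illustration of what regulatory variant effect prediction over such tracks looks like in practice, here is a minimal Python sketch. The `predict_tracks` interface and all names are hypothetical stand-ins, not the published AlphaGenome API; the sketch simply mirrors the reference-versus-alternate comparison implied by the task and the 1 Mb input length quoted above.

```python
import numpy as np

# Hypothetical interface sketch: names below are illustrative stand-ins,
# not the published AlphaGenome API.
SEQ_LEN = 1_048_576          # ~1 Mb of input DNA context (2**20 bases; illustrative)

BASE_TO_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Encode a DNA string as a (length, 4) one-hot matrix; N maps to all zeros."""
    encoded = np.zeros((len(sequence), 4), dtype=np.float32)
    for i, base in enumerate(sequence.upper()):
        idx = BASE_TO_INDEX.get(base)
        if idx is not None:
            encoded[i, idx] = 1.0
    return encoded

def score_variant(model, sequence: str, position: int, alt_base: str) -> np.ndarray:
    """Hypothetical variant-effect score: predict all tracks for the reference
    and alternate alleles and return the per-track difference."""
    assert len(sequence) == SEQ_LEN
    ref_tracks = model.predict_tracks(one_hot_encode(sequence))       # (tracks, bins)
    alt_sequence = sequence[:position] + alt_base + sequence[position + 1:]
    alt_tracks = model.predict_tracks(one_hot_encode(alt_sequence))
    return alt_tracks - ref_tracks   # one effect score per track and position bin
```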

AlphaGenome uses a U-Net-inspired (refs. 2,13) backbone architecture (Fig. 1a and Extended Data Fig. 1a) to efficiently process input sequences into two types of sequence representations: one-dimensional embeddings (at 1-bp and 128-bp resolutions), which correspond to representations of the linear genome, and two-dimensional embeddings (2,048-bp resolution), which correspond to representations of spatial interactions between genomic segments. The one-dimensional embeddings serve as the basis for genomic track predictions, whereas the two-dimensional embeddings are the basis for predicting pairwise interactions (contact maps). Within the architecture, convolutional layers model local sequence patterns necessary for fine-grained predictions, whereas transformer blocks model coarser but longer-range dependencies in the sequence, such as enhancer–promoter interactions.
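
To make the two types of representation concrete, the following is a heavily simplified PyTorch sketch of such a backbone: an illustration of the general pattern, not the released AlphaGenome architecture. Convolutions capture local patterns at base-pair scale, a strided convolution pools to a coarser resolution, a transformer layer mixes long-range context, and an outer sum of the coarse embeddings yields a two-dimensional pairwise representation. For brevity the pairwise map is built at 128-bp rather than 2,048-bp resolution.

```python
import torch
import torch.nn as nn

class ToyGenomeBackbone(nn.Module):
    """Illustrative sketch of a U-Net-style genomics backbone (not AlphaGenome).

    Produces 1D embeddings at fine and coarse resolution plus a 2D pairwise
    embedding, mirroring the representation types described above."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Local pattern extraction at base-pair resolution (length-preserving).
        self.local_conv = nn.Conv1d(4, channels, kernel_size=15, padding=7)
        # Strided convolution pools 128 bp into each coarse position.
        self.downsample = nn.Conv1d(channels, channels, kernel_size=128, stride=128)
        # Transformer layer models long-range dependencies
        # (e.g. enhancer-promoter interactions).
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, batch_first=True
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, 4) one-hot DNA; seq_len must be a multiple of 128.
        fine = self.local_conv(x.transpose(1, 2))        # (B, C, L), 1-bp scale
        coarse = self.downsample(fine).transpose(1, 2)   # (B, L/128, C)
        coarse = self.transformer(coarse)                # long-range mixing
        # Pairwise 2D embedding via an outer sum of coarse positions
        # (memory grows quadratically, so use short sequences for demos).
        pair = coarse.unsqueeze(1) + coarse.unsqueeze(2)  # (B, L/128, L/128, C)
        return fine.transpose(1, 2), coarse, pair

backbone = ToyGenomeBackbone()
fine, coarse, pair = backbone(torch.randn(1, 4096, 4))   # short sequence for demo
print(fine.shape, coarse.shape, pair.shape)  # (1, 4096, 64) (1, 32, 64) (1, 32, 32, 64)
```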

Researchers discover hundreds of cosmic anomalies with help from AI

A team of astronomers has used a new AI-assisted method to search for rare astronomical objects in the Hubble Legacy Archive. The team sifted through nearly 100 million image cutouts in just two and a half days, uncovering nearly 1,400 anomalous objects, more than 800 of which had never been documented before.
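
The article does not describe the team's method in detail, but a common pattern for AI-assisted anomaly searches of this kind is to embed each image cutout with a neural network and then rank cutouts by how isolated they are in embedding space. The sketch below illustrates that general idea with scikit-learn; the embeddings, shapes, and candidate count are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def anomaly_scores(embeddings: np.ndarray, k: int = 10) -> np.ndarray:
    """Score each cutout by the mean distance to its k nearest neighbors
    in embedding space; isolated points score as more anomalous."""
    nn_index = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    distances, _ = nn_index.kneighbors(embeddings)
    return distances[:, 1:].mean(axis=1)   # column 0 is the self-distance; drop it

# Usage sketch: `embeddings` would come from a vision model applied to each
# Hubble cutout (hypothetical shape: n_cutouts x embedding_dim).
embeddings = np.random.rand(10_000, 256).astype(np.float32)
scores = anomaly_scores(embeddings)
candidates = np.argsort(scores)[::-1][:100]  # top 100 most anomalous cutouts
```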

Initial access hackers switch to Tsundere Bot for ransomware attacks

A prolific initial access broker tracked as TA584 has been observed using Tsundere Bot alongside the XWorm remote access trojan to gain network access that could lead to ransomware attacks.

Proofpoint researchers have been tracking TA584’s activity since 2020 and say that the threat actor has significantly increased its operations recently, introducing a continuous attack chain that undermines static detection.

Tsundere Bot was first documented by Kaspersky last year and attributed to a Russian-speaking operator with links to the 123 Stealer malware.

New sandbox escape flaw exposes n8n instances to RCE attacks

Two vulnerabilities in the n8n workflow automation platform could allow attackers to fully compromise affected instances, access sensitive data, and execute arbitrary code on the underlying host.

Identified as CVE-2026-1470 and CVE-2026-0863, the vulnerabilities were discovered and reported by researchers at DevSecOps company JFrog.

Despite requiring authentication to exploit, CVE-2026-1470 received a critical severity score of 9.9 out of 10. JFrog explained that the critical rating was due to arbitrary code execution occurring in n8n’s main node, which grants complete control over the n8n instance.

Wave of Suicides Hits as India’s Economy Is Ravaged by AI

As Rest of World reports, rising anxiety over the influence of AI, on top of already-grueling 90-hour workweeks, has proven devastating for workers. While it’s hard to single out a definitive cause, a troubling wave of suicides among tech workers highlights these unsustainable conditions.

Complicating the picture is a lack of clear government data on the deaths. While it is impossible to tell whether such deaths are more prevalent among IT workers, experts told Rest of World that the mental health situation in the tech industry is nonetheless “very alarming.”

The prospect of AI making their careers redundant is a major stressor, with tech workers facing a “huge uncertainty about their jobs,” as Indian Institute of Technology Kharagpur senior professor of computer science and engineering Jayanta Mukhopadhyay told Rest of World.
