
World’s first fast-neutron nuclear reactor to power AI data centers

French startup Stellaria secures its first power reservation from Equinix for Stellarium, the world’s first fast-neutron reactor that reduces nuclear waste.

The agreement will allow Equinix data centers to leverage the reactor’s energy autonomy, supporting sustainable, decarbonized operations and powering AI capabilities with clean nuclear energy.

The Stellarium reactor, proposed by Stellaria, is a fourth-generation fast-neutron molten-salt design that uses liquid chloride salt fuel and is engineered to operate on a closed fuel cycle.

TACC’s “Horizon” Supercomputer Sets The Pace For Academic Science

As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year was indeed a precursor of the architecture that TACC would choose for its future “Horizon” machine. Vista was built as a bridge between the current “Stampede-3” and “Frontera” production systems and the Horizon system coming next year.

What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for HPC organizations that need to embrace AI and that have not only large jobs requiring an entire system to run (so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.

Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there is still a lot that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different from what we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 floating point performance to 300 petaflops. TACC has not explained the change, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)
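For a sense of scale, peak FP64 throughput is just the accelerator count multiplied by each accelerator’s FP64 rate. The sketch below uses a purely hypothetical 60 teraflops of FP64 per GPU (TACC has not disclosed Horizon’s accelerators or counts) to show what a 100 petaflop downshift could mean in hardware terms:

```python
# Peak FP64 = number of accelerators x FP64 teraflops per accelerator.
# The per-GPU figure is a placeholder; TACC has not disclosed Horizon's parts.
fp64_tflops_per_gpu = 60.0            # hypothetical accelerator throughput

for peak_pflops in (400.0, 300.0):    # expected vs. revealed peak
    gpus_needed = peak_pflops * 1000 / fp64_tflops_per_gpu
    print(f"{peak_pflops:.0f} PF peak -> ~{gpus_needed:,.0f} GPUs")

# At 60 TF each, the 100 PF downshift works out to roughly 1,700 fewer
# accelerators, which is one plausible way rising GPU costs could show
# up in the final design.
```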

Google Quantum AI realizes three dynamic surface code implementations

Quantum computers are computing systems that process information by leveraging quantum mechanical effects. They rely on qubits (i.e., the quantum equivalent of bits), which can store information in a superposition of states rather than in just one of two binary states (0 or 1).

While quantum computers could tackle some computational and optimization problems faster and more effectively than classical computers, they are also inherently more prone to errors. This is because qubits are easily disturbed by their surrounding environment, an effect referred to as noise.

Over the past few decades, quantum engineers and physicists have been developing approaches to correct noise-related errors, known as quantum error correction (QEC) techniques. While some of these codes have achieved promising results in small-scale tests, reliably implementing them on real quantum hardware is often challenging.
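To make the idea of error correction concrete, here is a minimal classical simulation of the three-qubit bit-flip repetition code, the simplest QEC scheme. It is an illustrative sketch only: Google’s implementations concern surface codes, which are far more sophisticated, and the simulation below models bit flips classically rather than simulating actual qubits.

```python
import random

def simulate_repetition_code(p_flip: float, trials: int = 100_000) -> tuple[float, float]:
    """Compare the raw (physical) error rate against the logical error rate
    of a 3-bit repetition code with majority-vote decoding."""
    physical_errors = 0
    logical_errors = 0
    for _ in range(trials):
        # Encode logical 0 as three redundant copies: [0, 0, 0].
        codeword = [0, 0, 0]
        # Each bit flips independently with probability p_flip (the "noise").
        noisy = [bit ^ (random.random() < p_flip) for bit in codeword]
        # A single unprotected bit fails whenever it flips.
        physical_errors += random.random() < p_flip
        # Majority vote recovers the logical bit unless 2+ bits flipped.
        decoded = 1 if sum(noisy) >= 2 else 0
        logical_errors += decoded != 0
    return physical_errors / trials, logical_errors / trials

if __name__ == "__main__":
    physical, logical = simulate_repetition_code(p_flip=0.05)
    print(f"physical error rate ~ {physical:.4f}, logical error rate ~ {logical:.4f}")
```

For a flip probability p well below one half, majority voting pushes the logical error rate down to roughly 3p² (about 0.007 here versus a physical rate of 0.05). That suppression of errors through redundancy is the basic leverage that surface codes exploit at much larger scale.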

Tiny reconfigurable robots can help manage carbon dioxide levels in confined spaces

Vehicles and buildings designed for survival in extreme environments, such as spacecraft, submarines and sealed shelters, rely heavily on systems for managing carbon dioxide (CO2): technologies that can capture and release CO2, keeping the air breathable for long periods.

Most existing systems for the capture and release of CO2 consume a lot of energy, as they rely on materials that need to be heated to high temperatures to release the gas again after capturing it. Some engineers have thus been trying to devise more energy-efficient methods to manage CO2 in confined spaces.

Researchers at Guangxi University in China have developed new reconfigurable micro/nano-robots that can reversibly capture CO2 at significantly lower temperatures than currently used carbon management systems.

BrainBody-LLM algorithm helps robots mimic human-like planning and movement

Large language models (LLMs), such as the model underpinning OpenAI’s ChatGPT, are now widely used to tackle tasks ranging from sourcing information to generating text in different languages and even writing code. Many scientists and engineers have also started using these models to conduct research or advance other technologies.

In the context of robotics, LLMs have shown promise for creating robot policies from a user’s instructions. Policies are essentially “rules” that a robot needs to follow to correctly perform desired actions.

Researchers at NYU Tandon School of Engineering recently introduced a new algorithm called BrainBody-LLM, which leverages LLMs to plan and refine the execution of a robot’s actions. The new algorithm, presented in a paper published in Advanced Robotics Research, draws inspiration from how the human brain plans actions and fine-tunes the body’s movements over time.
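A minimal sketch of that brain/body split might look like the loop below. All of the names here (query_llm, RobotState, the feedback format) are placeholders of our own invention, not the paper’s API; the point is only the structure: a “brain” model drafts a step plan, a “body” controller executes it, and execution failures are fed back so the brain can replan.

```python
from dataclasses import dataclass, field

@dataclass
class RobotState:
    """Toy world state: steps completed so far and any error from the last attempt."""
    completed: list[str] = field(default_factory=list)
    last_error: str | None = None

def query_llm(prompt: str) -> list[str]:
    # Placeholder for a real LLM call; returns a fixed plan for illustration.
    return ["move to shelf", "grasp cup", "move to table", "release cup"]

def execute_step(step: str) -> bool:
    # Placeholder for the low-level "body" controller. Here every step
    # succeeds; a real controller would report grasp failures, collisions, etc.
    return True

def run_task(instruction: str, max_replans: int = 3) -> RobotState:
    state = RobotState()
    for _ in range(max_replans):
        # "Brain": the LLM turns the instruction (plus any feedback) into steps.
        prompt = f"Task: {instruction}\nPrevious error: {state.last_error}"
        plan = query_llm(prompt)
        # "Body": execute the remaining steps; on failure, record feedback
        # and loop back so the brain can replan.
        for step in plan[len(state.completed):]:
            if execute_step(step):
                state.completed.append(step)
            else:
                state.last_error = f"failed at: {step}"
                break
        else:
            return state  # every step succeeded
    return state

if __name__ == "__main__":
    final = run_task("put the cup on the table")
    print("completed steps:", final.completed)
```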

Researchers pioneer pathway to mechanical intelligence by breaking symmetry in soft composite materials

A research team has developed soft composite systems with highly programmable, asymmetric mechanical responses. By integrating “shear-jamming transitions” into compliant polymeric solids, this innovative work enhances key material functionalities essential for engineering mechano-intelligent systems—a major step toward the development of next-generation smart materials and devices.

The work is published in the journal Nature Materials.

In engineering fields such as soft robotics, synthetic tissues, and flexible electronics, materials that exhibit direction-dependent responses to external stimuli are crucial for realizing intelligent functions.

Intelligent photodetectors ‘sniff and seek’ like retriever dogs to recognize materials directly from light spectra

Researchers at the University of California, Los Angeles (UCLA), in collaboration with UC Berkeley, have developed a new type of intelligent image sensor that can perform machine-learning inference during the act of photodetection itself.

Reported in Science, the breakthrough redefines how spectral imaging, machine vision and AI can be integrated within a single semiconductor device.

Traditionally, spectral cameras capture a dense stack of images, each image corresponding to a different wavelength, and then transfer this large dataset to digital processors for computation and scene analysis. This workflow, while powerful, creates a severe bottleneck: the hardware must move and process massive amounts of data, which limits speed, power efficiency, and the achievable spatial–spectral resolution.
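Some back-of-the-envelope arithmetic shows why this bottleneck bites. The numbers below are illustrative assumptions (a modest hyperspectral cube at video rate), not the specifications of the UCLA device:

```python
# Rough data-rate estimate for a conventional spectral imaging pipeline.
# All figures are illustrative assumptions, not the UCLA sensor's specs.
width, height = 1024, 1024      # spatial resolution (pixels)
bands = 100                     # spectral channels per frame
bytes_per_sample = 2            # 16-bit readout
frames_per_second = 30          # video rate

bytes_per_cube = width * height * bands * bytes_per_sample
rate_gb_s = bytes_per_cube * frames_per_second / 1e9

print(f"one spectral cube: {bytes_per_cube / 1e6:.0f} MB")
print(f"sustained rate:    {rate_gb_s:.1f} GB/s to move off-sensor")

# ~210 MB per cube and ~6 GB/s sustained: every byte must be shuttled to a
# digital processor before any inference happens, which is the bottleneck
# that performing inference inside the photodetector sidesteps.
```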

Public GitLab repositories exposed more than 17,000 secrets

After scanning all 5.6 million public repositories on GitLab Cloud, a security engineer discovered more than 17,000 exposed secrets across over 2,800 unique domains.

Luke Marshall used the TruffleHog open-source tool to check the code in the repositories for sensitive credentials like API keys, passwords, and tokens.

The researcher previously scanned Bitbucket, where he found 6,212 secrets spread across 2.6 million repositories. He also checked the Common Crawl dataset used to train AI models, a scan that turned up 12,000 valid secrets.
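As a rough illustration of how such scans work, the snippet below greps text for a few well-known credential formats. It is a toy: TruffleHog itself combines hundreds of detectors with entropy analysis and live verification of candidate secrets against the issuing services, none of which this sketch attempts.

```python
import re

# A few well-known token formats. Real scanners like TruffleHog ship
# hundreds of detectors and verify hits before reporting them.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Slack bot token": re.compile(r"\bxoxb-[0-9A-Za-z-]{10,}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched string) for every candidate secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

if __name__ == "__main__":
    sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # committed by mistake'
    for name, value in scan_text(sample):
        print(f"{name}: {value}")
```

A production scan also has to enumerate millions of repositories and walk their full commit histories, not just the current files, since secrets often linger in old commits long after they have been removed from the latest code.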

Your body may already have a molecule that helps fight Alzheimer’s

Spermine, a small but powerful molecule in the body, helps neutralize harmful protein accumulations linked to Alzheimer’s and Parkinson’s. It encourages these misfolded proteins to gather into manageable clumps that cells can more efficiently dispose of through autophagy. Experiments in nematodes show that spermine also enhances longevity and cellular energy production. These insights open the door to targeted therapies powered by polyamines and advanced AI-driven molecular design.
