In a bold blend of art and biology, poetry meets an unkillable microbe

You might have seen an interesting phrase popping up in your social media feeds lately: “Chiron is in retrograde.” If you’re anything like me, you’ve never heard of Chiron before—and I’m a professional astronomer.
So what is Chiron, and what does it mean to be in retrograde? The short answer is that Chiron is an asteroid-slash-comet orbiting mostly between Saturn and Uranus. And until January 2026, it’s going to look like it’s going backwards in the sky. If you can spot it.
But there’s a bit more to the story.
It’s crossing the Solar System at 58 kilometers per second.
You see those glowing patches of orangey-red? That isn’t good news.
NVIDIA today announced new NVIDIA Omniverse™ libraries and NVIDIA Cosmos™ world foundation models (WFMs) that accelerate the development and deployment of robotics solutions.
A new artificial intelligence tool developed by researchers at the University of Hawai’i (UH) at Mānoa is making it easier for scientists to explore complex geoscience data—from tracking sea levels on Earth to analyzing atmospheric conditions on Mars.
Called the Intelligent Data Exploring Assistant (IDEA), the software framework combines the power of large language models, like those used in ChatGPT, with scientific data, tailored instructions, and computing resources.
By simply providing questions in everyday language, researchers can ask IDEA to retrieve data, run analyses, generate plots, and even review its own results—opening up new possibilities for research, education, and scientific discovery.
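IDEA’s own interfaces aren’t spelled out here, so the toy sketch below only illustrates the general pattern the article describes: a plain-language question is paired with tailored instructions and sent to a large language model, which replies with analysis code a framework could then execute and review. The endpoint, model name, and file names are hypothetical assumptions, not IDEA’s actual API.

```python
# Conceptual sketch only -- not IDEA's actual API. A plain-language question plus tailored
# instructions is sent to an LLM, which replies with analysis code that a framework could
# then execute and review. All names below are hypothetical.
from openai import OpenAI  # assumes an OpenAI-compatible LLM endpoint and API key

client = OpenAI()

INSTRUCTIONS = (
    "You are a geoscience data assistant. Reply with Python code that loads "
    "'sea_level.csv' (columns: year, gmsl_mm) and plots the trend with matplotlib."
)
question = "How has global mean sea level changed since 1993?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": question},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # a real framework would sandbox, run, and review this code
```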
Earth came close to facing a solar storm as powerful as the 1859 Carrington Event.
This notebook provides a step-by-step guide to optimizing gpt-oss models using NVIDIA’s TensorRT-LLM for high-performance inference. TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way.
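As a minimal illustration of that Python API, the sketch below uses TensorRT-LLM’s high-level LLM interface. The checkpoint name, prompt, and sampling settings are assumptions for illustration, not values taken from the notebook.

```python
# Minimal sketch of TensorRT-LLM's high-level Python (LLM) API.
# The model id, prompt, and sampling values below are illustrative assumptions.
from tensorrt_llm import LLM, SamplingParams

def main():
    # One call loads the checkpoint and builds/loads an optimized engine for the local GPU.
    llm = LLM(model="openai/gpt-oss-20b")  # hypothetical checkpoint id for this sketch

    prompts = ["Summarize what TensorRT-LLM does in one sentence."]
    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

    # generate() runs optimized inference on the GPU and returns one result per prompt.
    for result in llm.generate(prompts, sampling):
        print(result.outputs[0].text)

if __name__ == "__main__":
    main()
```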
Gradient-boosted decision trees (GBDTs) power everything from real-time fraud filters to petabyte-scale demand forecasts. The open source XGBoost library has long been the tool of choice thanks to its state-of-the-art accuracy, SHAP-ready explainability, and flexibility to run on laptops, multi-GPU nodes, or Spark clusters. XGBoost version 3.0 was developed with scalability as its north star. A single NVIDIA GH200 Grace Hopper Superchip can now process datasets from gigabyte scale all the way to 1 terabyte (TB) scale.
The coherent memory architecture allows the new external-memory engine to stream data over the 900 GB/s NVIDIA NVLink-C2C interconnect, so a model can be trained on a 1 TB dataset in minutes, up to 8x faster than on a 112-core (dual-socket) CPU box. This reduces the need for complex multi-node GPU clusters and makes scalability simpler to achieve.
This post explains new features and enhancements in the milestone XGBoost 3.0 release, including a deep dive into external memory and how it leverages the Grace Hopper Superchip to reach 1 TB scale.
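As a rough sketch of what external-memory training looks like from Python, the snippet below streams batches through a custom DataIter into an ExtMemQuantileDMatrix and trains on the GPU. The batch shapes, cache location, and hyperparameters are placeholder assumptions, not benchmark settings from the post.

```python
# Sketch of XGBoost external-memory (out-of-core) GPU training.
# Batch shapes, cache prefix, and hyperparameters are placeholder assumptions.
import numpy as np
import xgboost as xgb

class BatchIter(xgb.DataIter):
    """Feeds the dataset to XGBoost one batch at a time so it never has to fit in memory at once."""
    def __init__(self, batches):
        self._batches = batches            # in practice: shards loaded from disk, not in-memory arrays
        self._i = 0
        super().__init__(cache_prefix="./cache")  # where XGBoost keeps its processed on-disk cache

    def next(self, input_data):
        if self._i == len(self._batches):
            return False                   # signal: no more batches in this pass
        X, y = self._batches[self._i]
        input_data(data=X, label=y)        # hand the current batch to XGBoost
        self._i += 1
        return True

    def reset(self):
        self._i = 0                        # rewind so training can stream the data again

# Toy stand-in for shards that would normally be read from disk.
rng = np.random.default_rng(0)
batches = [(rng.standard_normal((10_000, 32)), rng.integers(0, 2, 10_000)) for _ in range(4)]

# ExtMemQuantileDMatrix pre-bins each streamed batch instead of materializing the full matrix.
Xy = xgb.ExtMemQuantileDMatrix(BatchIter(batches))
booster = xgb.train(
    {"device": "cuda", "tree_method": "hist", "objective": "binary:logistic"},
    Xy,
    num_boost_round=100,
)
```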