We’re announcing the world’s first scalable, error-corrected, end-to-end computational chemistry workflow. With this, we are entering the future of computational chemistry.

Quantum computers are uniquely equipped to perform the complex computations that describe chemical reactions – computations so demanding that they are impossible even with the world’s most powerful supercomputers.

However, realizing this potential is a herculean task: one must first build a large-scale, universal, fully fault-tolerant quantum computer – something nobody in our industry has done yet. We are the farthest along that path, as our roadmap and our robust body of research prove. At the moment, we have the world’s most powerful quantum processors, and we are moving quickly toward universal fault tolerance. Our commitment to building the best quantum computers is proven again and again in our world-leading results.

Plasma—the electrically charged fourth state of matter—is at the heart of many important industrial processes, including those used to make computer chips and coat materials.

Simulating those plasmas can be challenging, however, because millions of math operations must be performed for thousands of points in the simulation, many times per second. Even with the world’s fastest supercomputers, scientists have struggled to create a kinetic simulation—which considers individual particles—that is detailed and fast enough to help them improve those manufacturing processes.
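
To make the kinetic approach concrete, here is a minimal sketch of a 1D electrostatic particle-in-cell (PIC) loop, the textbook kinetic method: deposit each particle’s charge on a grid, solve for the electric field, then push every particle through it. This is a generic toy in Python with invented parameters, not the PPPL/Applied Materials method, and nothing close to production scale.

```python
# Toy 1D electrostatic particle-in-cell (PIC) loop: a minimal example of the
# "kinetic" approach, which tracks individual particles. Generic illustration
# only; all parameters are arbitrary and unitless.
import numpy as np

np.random.seed(0)
num_particles = 10_000        # real simulations track vastly more
num_cells = 64                # grid points for the field solve
length = 1.0                  # periodic domain size
dx = length / num_cells
dt = 0.05                     # time step

# Particle state: uniform positions, two counter-streaming beams
x = np.random.uniform(0, length, num_particles)
v = np.where(np.arange(num_particles) % 2 == 0, 1.0, -1.0) * 0.2

for step in range(100):
    # 1) Deposit charge on the grid (nearest-grid-point weighting)
    cells = (x / dx).astype(int) % num_cells
    density = np.bincount(cells, minlength=num_cells) / (num_particles / num_cells)
    rho = density - 1.0       # subtract a neutralizing background

    # 2) Solve Poisson's equation d2(phi)/dx2 = -rho with an FFT
    k = 2 * np.pi * np.fft.fftfreq(num_cells, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / (k[1:] ** 2)
    E = -np.real(np.fft.ifft(1j * k * phi_k))   # E = -d(phi)/dx

    # 3) Interpolate the field to the particles and push them
    v += E[cells] * dt        # charge-to-mass ratio set to 1 for simplicity
    x = (x + v * dt) % length # periodic boundaries

print("kinetic energy:", 0.5 * np.mean(v ** 2))
```

Even this toy repeats a full field solve and a push for every particle at every step, which hints at why production kinetic codes strain supercomputers and why gains in stability and efficiency matter.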

Now, a new method offers improved stability and efficiency for kinetic simulations of what’s known as inductively coupled plasmas. The method was implemented in a tool developed as part of a public-private partnership between the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) and chip equipment maker Applied Materials Inc., which is already using the tool. Researchers from the University of Alberta, PPPL and Los Alamos National Laboratory contributed to the project.

A research team from the Department of Energy’s Oak Ridge National Laboratory, in collaboration with North Carolina State University, has developed a simulation capable of predicting how tens of thousands of electrons move through materials in real time, meaning the natural time of the physical process rather than compute time.

The project reflects a longstanding partnership between ORNL and NCSU, combining ORNL’s expertise in time-dependent quantum methods with NCSU’s advanced quantum simulation platform developed under the leadership of Professor Jerry Bernholc.

Using the Oak Ridge Leadership Computing Facility’s Frontier supercomputer, the world’s first to break the exascale barrier, the research team developed a real-time, time-dependent density functional theory, or RT-TDDFT, capability within the open-source Real-space Multigrid, or RMG, code to model systems of up to 24,000 electrons.
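
To illustrate what “real-time” propagation means here, the toy below advances a single 1D wavepacket through physical time with a Crank-Nicolson propagator. It is only a sketch of the general idea: RMG’s RT-TDDFT evolves thousands of interacting Kohn-Sham orbitals on real-space multigrids, which this single-particle example does not attempt. Only numpy is assumed, and every parameter is illustrative.

```python
# Toy "real-time" propagation: step a 1D wavepacket forward in physical time.
# RT-TDDFT does this for many interacting Kohn-Sham orbitals; this is a
# single-particle stand-in on a finite-difference grid.
import numpy as np

n = 400                       # grid points
dx = 0.1
dt = 0.01                     # illustrative time step
x = (np.arange(n) - n // 2) * dx

# Hamiltonian H = -1/2 d2/dx2 + V(x), with a harmonic potential
V = 0.5 * x ** 2
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
H = -0.5 * lap + np.diag(V)

# Crank-Nicolson: (I + i dt/2 H) psi(t+dt) = (I - i dt/2 H) psi(t)
A = np.eye(n) + 0.5j * dt * H
B = np.eye(n) - 0.5j * dt * H
U = np.linalg.solve(A, B)     # one-step propagator, computed once

# Initial state: displaced Gaussian wavepacket (oscillates in the well)
psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for step in range(500):
    psi = U @ psi

# Norm is conserved because Crank-Nicolson is unitary (up to roundoff)
print("norm:", np.sum(np.abs(psi) ** 2) * dx)
print("<x>: ", np.real(np.sum(np.conj(psi) * x * psi) * dx))
```

Crank-Nicolson is a common choice for such propagators because it is unitary, so the wavefunction’s norm is preserved at every step of the real-time evolution.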

Analyzing massive datasets from nuclear physics experiments can take hours or days to process, but researchers are working to radically reduce that time to mere seconds using special software being developed at the Department of Energy’s Lawrence Berkeley and Oak Ridge national laboratories.

DELERIA—short for Distributed Event-Level Experiment Readout and Integrated Analysis—is a novel software platform designed specifically to support the GRETA spectrometer, a cutting-edge instrument for nuclear physics experiments. The Gamma Ray Energy Tracking Array (GRETA) is currently under construction at Berkeley Lab and is scheduled to be installed in 2026 at the Facility for Rare Isotope Beams (FRIB) at Michigan State University.

The software will enable GRETA to stream data directly to the nation’s leading computing centers with the goal of analyzing large datasets in seconds. The data will be sent via the Energy Sciences Network, or ESnet. This will allow researchers to make critical adjustments to the experiment as it is taking place, leading to increased scientific productivity with significantly faster, more accurate results.
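
As a rough illustration of event-level streaming, the sketch below ships length-prefixed “event” records over a TCP socket to an analysis thread that processes each event the moment it arrives, rather than batching everything to disk first. The record format, addresses, and transport are invented for this demo; DELERIA’s actual protocol, GRETA’s data format, and ESnet are not represented.

```python
# Minimal sketch of event-level streaming: a detector-side sender ships
# length-prefixed "event" records to an analysis node that handles each one
# on arrival. Purely illustrative; not DELERIA's design.
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 50007                   # hypothetical demo address
server_sock = socket.create_server((HOST, PORT))  # listen before streaming

def analysis_node():
    """Receive length-prefixed events and analyze each one immediately."""
    conn, _ = server_sock.accept()
    with conn:
        total, count = 0.0, 0
        while True:
            header = conn.recv(4)                 # 4-byte length prefix
            if not header:
                break                             # sender closed: done
            (length,) = struct.unpack("!I", header)
            payload = b""
            while len(payload) < length:
                payload += conn.recv(length - len(payload))
            # Each "event" here is just a few float64 energies
            energies = struct.unpack(f"!{length // 8}d", payload)
            total += sum(energies)
            count += 1
    print(f"analyzed {count} events, summed energy = {total}")

analyzer = threading.Thread(target=analysis_node)
analyzer.start()

# Detector side: stream events one at a time instead of batching to disk
with socket.create_connection((HOST, PORT)) as sock:
    for _ in range(1000):
        energies = (1.0, 2.5, 0.25)               # fake gamma-ray energies
        payload = struct.pack(f"!{len(energies)}d", *energies)
        sock.sendall(struct.pack("!I", len(payload)) + payload)

analyzer.join()
server_sock.close()
```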

Can AI speed up aspects of the scientific process? Microsoft appears to think so.

At the company’s Build 2025 conference on Monday, Microsoft announced Microsoft Discovery, a platform that taps agentic AI to “transform the [scientific] discovery process,” according to a press release provided to TechCrunch. Microsoft Discovery is “extensible,” Microsoft says, and can handle certain science-related workloads “end-to-end.”

“Microsoft Discovery is an enterprise agentic platform that helps accelerate research and discovery by transforming the entire discovery process with agentic AI — from scientific knowledge reasoning to hypothesis formulation, candidate generation, and simulation and analysis,” explains Microsoft in its release. “The platform enables scientists and researchers to collaborate with a team of specialized AI agents to help drive scientific outcomes with speed, scale, and accuracy using the latest innovations in AI and supercomputing.”

China has begun launching satellites for a giant computer network in space, according to the China Aerospace Science and Technology Corporation.

Newsweek contacted the company and the United States Space Force for comment.

Why It Matters

Space is an increasingly important frontier for competition between China and the United States. Putting a computer network in space marks a step change from using satellites for sensing and communications while leaving them dependent on their connections to Earth for data processing.

Quantum annealing is a specific type of quantum computing that uses quantum physics principles to find high-quality solutions to difficult optimization problems. Rather than requiring exact optimal solutions, the study focused on finding solutions within a small percentage (1%) of the optimal value.

Many real-world problems don’t require exact solutions, making this approach practically relevant. For example, in determining which stocks to put into a mutual fund, it is often good enough to just beat a leading market index rather than beating every other stock portfolio.
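
The “good enough” criterion is easy to make concrete. The sketch below runs classical simulated annealing, a classical stand-in for quantum annealing, on a toy QUBO (quadratic unconstrained binary optimization) instance small enough to brute-force, then checks whether the annealed answer lands within 1% of the true optimum. The instance, schedule, and parameters are all invented for illustration.

```python
# Illustration of the "within 1% of optimal" criterion, using classical
# simulated annealing on a toy QUBO instance small enough to brute-force.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 16
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                       # symmetric QUBO matrix

def energy(bits):
    return bits @ Q @ bits              # objective to minimize

# Brute-force the exact optimum (feasible only because n is tiny)
best = min(energy(np.array(b)) for b in itertools.product((0, 1), repeat=n))

# Simulated annealing: single-bit flips under a decreasing temperature
bits = rng.integers(0, 2, size=n)
e = energy(bits)
for t in range(20_000):
    temperature = 2.0 * (1 - t / 20_000) + 1e-6
    i = rng.integers(n)
    bits[i] ^= 1                        # propose flipping one bit
    e_new = energy(bits)
    if e_new <= e or rng.random() < np.exp((e - e_new) / temperature):
        e = e_new                       # accept the move
    else:
        bits[i] ^= 1                    # reject: flip the bit back

gap = abs(e - best) / abs(best)         # relative distance from the optimum
print(f"annealed: {e:.4f}  optimal: {best:.4f}  gap: {gap:.2%}")
print("within 1% of optimal:", gap <= 0.01)
```

Note the acceptance test at the end: the solver is judged on its relative gap from the optimum, not on reaching exact optimality.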

Astronomers have developed a computer simulation to explore, in unprecedented detail, magnetism and turbulence in the interstellar medium (ISM)—the vast ocean of gas and charged particles that lies between stars in the Milky Way galaxy.

Described in a study published in Nature Astronomy, the model is the most powerful to date, requiring the computing capability of the SuperMUC-NG supercomputer at the Leibniz Supercomputing Center in Germany. It directly challenges our understanding of how magnetized turbulence operates in astrophysical environments.

James Beattie, the paper’s lead author and a postdoctoral researcher at the Canadian Institute for Theoretical Astrophysics (CITA) at the University of Toronto, is hopeful the model will provide new insights into the ISM, the magnetism of the Milky Way galaxy as a whole, and astrophysical phenomena such as star formation and the propagation of cosmic rays.

Tesla is developing a terawatt-level supercomputer at Giga Texas to enhance its self-driving technology and AI capabilities, positioning the company as a leader in the automotive and renewable energy sectors despite current challenges.

Questions to inspire discussion

Tesla’s Supercomputers

💡 Q: What is the scale of Tesla’s new supercomputer project?

A: Tesla’s Cortex 2 supercomputer at Giga Texas aims for 1 terawatt of compute with 1.4 billion GPUs, making it 3,300x bigger than today’s top system.
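
Taking the quoted figures at face value (they come from the video summary, not from verified Tesla specifications), a quick back-of-envelope check shows what they imply per GPU and for the size of today’s top system:

```python
# Back-of-envelope check of the quoted figures: 1 terawatt of power spread
# over 1.4 billion GPUs, claimed to be 3,300x today's top system.
total_power_watts = 1e12          # "1 terawatt"
gpu_count = 1.4e9                 # "1.4 billion GPUs"
scale_factor = 3300               # "3,300x bigger than today's top system"

watts_per_gpu = total_power_watts / gpu_count
implied_top_system = total_power_watts / scale_factor

print(f"implied power per GPU:      {watts_per_gpu:.0f} W")              # ~714 W
print(f"implied 'top system' today: {implied_top_system / 1e6:.0f} MW")  # ~300 MW
```

Roughly 714 W per GPU is in the range of current data-center accelerators, and about 300 MW matches the scale of today’s largest AI clusters, so the quoted figures are at least internally consistent.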

💡 Q: How does Tesla’s compute power compare to Chinese competitors?

A: Tesla’s Full Self-Driving (FSD) system uses 3x more compute than Huawei, Xpeng, Xiaomi, and Li Auto combined, with BYD not yet a significant competitor.