In practical terms, this means that researchers using these maps cannot tell whether errors lie ahead of them, or whether those errors are baked into the research design itself. Even so, the maps are all they have to work with, so they must make decisions on this limited information, and a wrong call is costly: every ignition attempt is expensive.
To overcome this, the team at the NIF devised a new way to build these “maps,” merging past experimental data with high-fidelity physics simulations and expert knowledge. The combined model was then fed into a supercomputer that ran statistical assessments over the course of more than 30 million CPU hours. In effect, this lets the researchers see all the ways an experiment can go wrong and vet their designs pre-emptively, saving a great deal of time and, more importantly, money.
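To get a feel for this kind of statistical assessment, here is a minimal sketch of the general technique: propagating input uncertainties through a cheap surrogate of a physics simulation via Monte Carlo sampling. Everything here is invented for illustration; the parameter names, tolerances, and the toy yield model are assumptions, not NIF's actual model or code.

```python
import random

def surrogate_yield(laser_energy, capsule_thickness, fill_pressure):
    """Toy stand-in for a high-fidelity physics simulation.

    Yield (in notional megajoules) peaks at nominal inputs and
    falls off quadratically with deviation. Purely illustrative.
    """
    penalty = (
        ((laser_energy - 2.05) / 0.1) ** 2
        + ((capsule_thickness - 80.0) / 5.0) ** 2
        + ((fill_pressure - 0.3) / 0.05) ** 2
    )
    return max(0.0, 1.5 - 0.5 * penalty)

def assess_design(n_samples=10_000, seed=42):
    """Monte Carlo sweep over hypothetical fabrication/delivery tolerances.

    Returns the mean predicted yield and the fraction of sampled
    shots that fall below a notional 1.0 MJ success threshold --
    a crude "map" of how often the design goes wrong.
    """
    rng = random.Random(seed)
    yields = []
    for _ in range(n_samples):
        yields.append(surrogate_yield(
            rng.gauss(2.05, 0.02),   # laser energy (MJ), assumed tolerance
            rng.gauss(80.0, 1.0),    # capsule shell thickness (um), assumed
            rng.gauss(0.3, 0.01),    # fuel fill pressure (arbitrary), assumed
        ))
    mean_yield = sum(yields) / n_samples
    p_failure = sum(y < 1.0 for y in yields) / n_samples
    return mean_yield, p_failure

mean_yield, p_failure = assess_design()
```

The real assessments replace this toy surrogate with physics simulations and prior shot data, which is why they consumed tens of millions of CPU hours rather than seconds.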
The team tested this approach against an experiment they ran in 2022 and, after a few adjustments to the model’s physics, was able to predict the outcome with better than 70 percent accuracy.