Imagine watching a favorite movie when suddenly the sound cuts out. The audio data is missing; all that’s left are the images. What if artificial intelligence (AI) could analyze each frame of the video and reconstruct the audio automatically from the pictures alone, reading lips and noting each time a foot hits the ground?
That’s the general concept behind a new AI that fills in missing data about plasma, the fuel of fusion, according to Azarakhsh Jalalvand of Princeton University. Jalalvand is the lead author on a paper about the AI, known as Diag2Diag, that was recently published in Nature Communications.
“We have found a way to take the data from a bunch of sensors in a system and generate a synthetic version of the data for a different kind of sensor in that system,” he said. The synthetic data matches real measurements yet offers finer detail than the physical sensor alone could provide. This could make plasma control more robust while reducing the complexity and cost of future fusion systems. “Diag2Diag could also have applications in other systems such as spacecraft and robotic surgery by enhancing detail and recovering data from failing or degraded sensors, ensuring reliability in critical environments.”
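To make the general idea concrete, here is a minimal sketch in Python of the kind of cross-sensor mapping the quote describes: a small neural network trained on historical data where both sets of sensors were recorded, so it can later synthesize the missing diagnostic from the available ones. The network architecture, channel counts, and toy data below are illustrative assumptions for this sketch, not the published Diag2Diag model.

```python
# Hypothetical sketch: learn to map readings from available sensors
# to a synthetic, higher-resolution signal for a missing diagnostic.
import torch
import torch.nn as nn

N_INPUT_SENSORS = 8    # assumed: channels from the available diagnostics
N_OUTPUT_POINTS = 32   # assumed: resolution of the synthesized diagnostic

# A small fully connected network standing in for the real model.
model = nn.Sequential(
    nn.Linear(N_INPUT_SENSORS, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, N_OUTPUT_POINTS),
)

# Toy training data. In practice these would be time-aligned historical
# records in which both the input sensors and the target sensor were
# working, so the network can learn the mapping between them.
inputs = torch.randn(1024, N_INPUT_SENSORS)
targets = torch.randn(1024, N_OUTPUT_POINTS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Standard regression training loop.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# At run time, the trained model stands in for the missing sensor,
# producing a synthetic reading from the sensors that still work.
with torch.no_grad():
    synthetic_reading = model(torch.randn(1, N_INPUT_SENSORS))
```

The design choice worth noting is that nothing about the missing sensor is hard-coded: the network infers its output purely from correlations with the other sensors learned during training, which is what lets the same approach recover data from a failed or degraded instrument.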