
A new way to measure uncertainty marks an important step toward confidence in AI model training

It’s obvious when a dog has been poorly trained. It doesn’t respond properly to commands. It pushes boundaries and behaves unpredictably. The same is true of a poorly trained artificial intelligence (AI) model. Only with AI, it’s not always easy to identify what went wrong in the training.

Research scientists around the world are working with a variety of AI models trained on experimental and theoretical data. The goal: to predict a material’s properties before investing the time and expense to synthesize and test it. They are using AI to design better medicines and industrial chemicals in a fraction of the time that experimental trial and error requires.

But how can they trust the answers that AI models provide? It’s not just an academic question. Millions of investment dollars can ride on whether AI model predictions are reliable.
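The article does not spell out the new uncertainty measure, but a common baseline it improves on is ensemble disagreement: train several models that differ only in their random seed and bootstrap sample, and read the spread of their predictions as an uncertainty estimate. The sketch below is illustrative only; the data and feature names are hypothetical stand-ins, not the researchers’ actual method or dataset.

```python
# A minimal sketch of ensemble-based uncertainty estimation (illustrative;
# not the method described in the article). Hypothetical synthetic data
# stands in for experimental/theoretical materials measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: 4 input descriptors -> 1 target property.
X = rng.normal(size=(200, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)

# Train an ensemble of models on bootstrap resamples of the data.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    model = GradientBoostingRegressor(random_state=seed)
    model.fit(X[idx], y[idx])
    ensemble.append(model)

# For new candidate materials, disagreement across the ensemble
# serves as the uncertainty estimate.
X_new = rng.normal(size=(5, 4))
preds = np.stack([m.predict(X_new) for m in ensemble])
mean = preds.mean(axis=0)  # point prediction
std = preds.std(axis=0)    # uncertainty estimate

for m_, s_ in zip(mean, std):
    print(f"predicted property: {m_:+.3f} +/- {s_:.3f}")
```

In practice, a candidate with a high predicted property value but a large ensemble spread would be flagged for experimental verification rather than trusted outright.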
