Most contemporary artificial intelligence (AI) systems learn to complete tasks via machine learning and deep learning. Machine learning is a computational approach that allows models to uncover patterns in data and use them to make predictions. Deep learning is a subset of machine learning that relies on multi-layered neural networks, which can automatically extract features and learn complex patterns from unstructured data, sometimes with little or no human supervision.
Many AI systems trained with these approaches also produce confidence scores for their predictions, essentially estimates of how likely a given prediction is to be correct. Past studies suggest that AI systems are often overconfident, assigning high confidence scores to wrong answers or even presenting inaccurate information as fact. This limits their reliability, particularly in high-stakes applications where wrong predictions can have serious consequences.
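To make the notion of a confidence score concrete, the short Python sketch below (not taken from the paper) shows how a classifier's confidence is typically read off a softmax over its raw outputs, and how that confidence can stay high even when the prediction is wrong. The logits and the true label here are invented purely for illustration.

```python
# Illustrative sketch only: how a "confidence score" is usually derived
# from a classifier's raw outputs, and how it can be high for a wrong answer.
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical raw model outputs (logits) for a 3-class problem.
logits = np.array([4.0, 0.5, 0.2])
probs = softmax(logits)

predicted = int(np.argmax(probs))
confidence = float(probs[predicted])
true_label = 1  # suppose the correct answer is actually class 1

print(f"predicted class {predicted} with confidence {confidence:.2f}")
print("prediction is", "correct" if predicted == true_label else "wrong")
# An overconfident model reports ~0.95 confidence here despite being wrong;
# a well-calibrated model's confidence would track its actual accuracy.
```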
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) recently introduced a new brain-inspired training approach that could yield more realistic AI confidence estimates. Their strategy, presented in a paper published in Nature Machine Intelligence, involves briefly training artificial neural networks on random noise (i.e., data with no meaningful patterns) and arbitrary outputs, so that they learn to produce more realistic confidence estimates before being trained on specific tasks.
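The exact procedure is described in the paper; the sketch below is only a rough illustration of the general idea in an assumed PyTorch setup, not the authors' implementation. It briefly fits a small, hypothetical network to random-noise inputs with arbitrary labels before ordinary task training would begin, and every architectural choice and hyperparameter here is a placeholder assumption.

```python
# Rough illustration (not the authors' code): a short "noise pre-training"
# phase on meaningless data, followed by standard training on a real task.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),  # 10-way classifier, chosen arbitrarily
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# --- Phase 1: brief pre-training on random noise with arbitrary labels ---
for _ in range(100):  # "briefly": only a small number of steps
    x_noise = torch.randn(64, 32)            # random-noise inputs
    y_random = torch.randint(0, 10, (64,))   # arbitrary target labels
    loss = loss_fn(model(x_noise), y_random)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Phase 2: ordinary training on the actual task would follow here ---
# for x, y in task_dataloader: ...
```

The only point the sketch is meant to convey is the two-phase structure: a short pass over data with no learnable signal, then conventional training on real examples.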








