
A new way to measure uncertainty provides an important step toward confidence in AI model training

Schematic of the MACE-MP-0 readout ensemble and quantile model. Credit: npj Computational Materials (2025). DOI: 10.1038/s41524-025-01572-y

It’s obvious when a dog has been poorly trained. It doesn’t respond properly to commands. It pushes boundaries and behaves unpredictably. The same is true with a poorly trained artificial intelligence (AI) model. Only with AI, it’s not always easy to identify what went wrong with the training.

Research scientists globally are working with a variety of AI models that have been trained on experimental and theoretical data. The goal: to predict a material’s properties before taking the time and expense to create and test it. They are using AI to design better medicines and industrial chemicals in a fraction of the time it takes for experimental trial and error.

But how can they trust the answers that AI models provide? It’s not just an academic question. Millions of investment dollars can ride on whether AI model predictions are reliable.

Now, a research team from the Department of Energy’s Pacific Northwest National Laboratory has developed a method to determine how well a class of AI models called neural network potentials has been trained. Further, it can identify when a prediction is outside the boundaries of its training and where it needs more training to improve—a process called active learning.
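
As a rough sketch of the active-learning idea described above (the function, threshold, and toy numbers below are illustrative assumptions, not the PNNL code), predictions whose uncertainty exceeds a calibrated cutoff can be flagged as outside the training domain and queued for new reference data:

```python
# Hypothetical sketch of the active-learning idea: predictions whose
# uncertainty exceeds a calibrated threshold are treated as outside the
# model's training domain and earmarked for additional training data.
# All names and numbers here are illustrative, not from the PNNL code.
import numpy as np

rng = np.random.default_rng(0)

def predict_energy_with_uncertainty(structure):
    """Stand-in for an uncertainty-aware neural network potential.

    Returns an (energy, uncertainty) pair; a real model would evaluate
    an ensemble of readout heads on the given atomic structure.
    """
    energy = rng.normal(-5.0, 1.0)           # predicted energy (eV/atom), toy value
    uncertainty = abs(rng.normal(0.0, 0.1))  # predictive standard deviation, toy value
    return energy, uncertainty

THRESHOLD = 0.15  # calibrated uncertainty cutoff (illustrative)

candidate_structures = [f"structure_{i}" for i in range(10)]
needs_more_training = []

for structure in candidate_structures:
    energy, sigma = predict_energy_with_uncertainty(structure)
    if sigma > THRESHOLD:
        # Prediction lies outside the model's confident region: queue this
        # structure for a reference (e.g. DFT) calculation and inclusion
        # in the next round of training.
        needs_more_training.append(structure)

print(f"{len(needs_more_training)} of {len(candidate_structures)} structures flagged for retraining")
```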

The research team, led by PNNL data scientists Jenna Bilbrey Pope and Sutanay Choudhury, describes how the new uncertainty quantification method works in a research article published in npj Computational Materials.

The team has also made the method publicly available on GitHub, as part of its larger Scalable Neural network Atomic Potentials (SNAP) repository, so that anyone can apply it to their own work.

“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “This is common for most deep neural networks. But a model trained with SNAP gives a metric that mitigates this overconfidence. Ideally, you’d want to look at both prediction uncertainty and training data uncertainty to assess your overall model performance.”
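
One simple way to picture the overconfidence Bilbrey Pope describes is to compare each prediction's stated uncertainty with its actual error against a reference method. The sketch below uses toy numbers chosen purely for illustration, not data from the study:

```python
# Minimal calibration check: does the model's stated uncertainty match the
# size of its real errors? Toy arrays and scales below are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Pretend reference (e.g. DFT) energies and model predictions with
# per-prediction uncertainties, in eV/atom.
reference = rng.normal(-5.0, 1.0, size=1000)
predicted = reference + rng.normal(0.0, 0.08, size=1000)  # true error scale ~0.08
stated_sigma = np.full(1000, 0.02)                        # model claims ~0.02 -> overconfident

abs_error = np.abs(predicted - reference)

# For a well-calibrated Gaussian uncertainty, roughly 32% of errors should
# exceed 1 sigma and about 5% should exceed 2 sigma.
frac_beyond_1sigma = np.mean(abs_error > stated_sigma)
frac_beyond_2sigma = np.mean(abs_error > 2 * stated_sigma)

print(f"errors beyond 1 sigma: {frac_beyond_1sigma:.0%} (well-calibrated: ~32%)")
print(f"errors beyond 2 sigma: {frac_beyond_2sigma:.0%} (well-calibrated: ~5%)")
# Much larger fractions signal an overconfident uncertainty estimate.
```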

Instilling trust in AI model training to speed discovery

Research scientists want to take advantage of the speed of AI predictions, but right now there’s a tradeoff between speed and accuracy. It’s true that an AI model can make predictions in seconds that might take a supercomputer 12 hours to compute using traditional computationally intensive methods. But chemists and materials scientists still see AI as a black box.

The PNNL data science team's uncertainty measurement gives researchers a way to gauge how much they should trust an AI prediction.

“AI should be able to accurately detect its knowledge boundaries,” said Choudhury. “We want our AI models to come with a confidence guarantee. We want to be able to make statements such as 'This prediction provides 85% confidence that catalyst A is better than catalyst B, based on your requirements.'”
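
A back-of-the-envelope version of that kind of statement: given two predictions with Gaussian uncertainties, one can compute the probability that catalyst A beats catalyst B on some figure of merit. The numbers and the "lower is better" criterion below are illustrative assumptions, not values from the study:

```python
# Probability that catalyst A outperforms catalyst B, assuming independent
# Gaussian predictions where a lower value is better. Toy numbers only.
from math import erf, sqrt

mu_a, sigma_a = -1.20, 0.05   # predicted figure of merit for catalyst A (e.g. eV)
mu_b, sigma_b = -1.10, 0.06   # predicted figure of merit for catalyst B

# P(A better) = P(A - B < 0) for independent Gaussians.
z = (mu_b - mu_a) / sqrt(sigma_a**2 + sigma_b**2)
prob_a_better = 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"P(catalyst A better than B) = {prob_a_better:.0%}")  # ~90% for these numbers
```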

In their published study, the researchers chose to benchmark their uncertainty method on one of the most advanced foundation models for atomistic materials chemistry, called MACE. They measured how well the model has been trained to predict the energy of specific families of materials.

These calculations are important to understanding how well the AI model can approximate the more time- and energy-intensive methods that run on supercomputers. The results show what kinds of simulations can be calculated with confidence that the answers are accurate.

This kind of trust in predictions is crucial to incorporating AI workflows into everyday laboratory work and to creating autonomous laboratories where AI becomes a trusted lab assistant, the researchers added.

“We have worked to make it possible to ‘wrap’ any neural network potentials for chemistry into our framework,” said Choudhury. “Then in a SNAP, they suddenly have the power of being uncertainty aware.”
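
A minimal sketch of that "wrapping" idea is shown below; it is not the SNAP API, and the class, its interface, and the toy ensemble are hypothetical. The essential point is that any collection of energy predictors can be bundled so each prediction comes back with an uncertainty attached:

```python
# Hypothetical wrapper that turns an ensemble of energy predictors into a
# single uncertainty-aware model. Not the SNAP interface.
import numpy as np

class UncertaintyAwarePotential:
    """Wraps an ensemble of energy predictors into one uncertainty-aware model."""

    def __init__(self, ensemble_members):
        # Each member is any callable mapping a structure to an energy.
        self.members = list(ensemble_members)

    def predict(self, structure):
        energies = np.array([member(structure) for member in self.members])
        # Ensemble mean as the prediction, spread as the uncertainty estimate.
        return energies.mean(), energies.std(ddof=1)

# Toy ensemble: stand-ins for separately trained readout heads.
rng = np.random.default_rng(2)
ensemble = [lambda s, b=bias: -5.0 + b for bias in rng.normal(0.0, 0.05, size=5)]

potential = UncertaintyAwarePotential(ensemble)
energy, sigma = potential.predict("some_structure")
print(f"energy = {energy:.3f} eV, uncertainty = {sigma:.3f} eV")
```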

Now, if only puppies could be trained in a snap.

More information:
Jenna A. Bilbrey et al, Uncertainty quantification for neural network potential foundation models, npj Computational Materials (2025). DOI: 10.1038/s41524-025-01572-y

Provided by
Pacific Northwest National Laboratory


