New research from Pacific Northwest National Laboratory (PNNL) uses machine learning, data analytics, and artificial intelligence to identify potential nuclear threats.
PNNL nonproliferation analyst Benjamin Wilson is in a unique position to combine these data mining and machine learning techniques with nuclear analysis.
According to Wilson, “Preventing nuclear proliferation requires vigilance. It is labor-intensive, from auditing nuclear materials to investigating the people who handle them. Data analytics-based technologies can make this easier.”
With support from the National Nuclear Security Administration (NNSA), the Mathematics for Artificial Reasoning in Science (MARS) initiative, and the Department of Defense, PNNL researchers are working on several projects to improve the effectiveness of nuclear nonproliferation and security measures. Below are some of the key papers.
Nuclear material leak detection
A nuclear reprocessing facility takes in spent nuclear fuel and separates it into waste and reusable products. Those products contain uranium and plutonium, which can be processed into new fuel for nuclear reactors, but which could also be used to make nuclear weapons. The IAEA monitors nuclear facilities to ensure that nuclear material is not diverted to weapons, a long-term routine of inspections and sample collection for later analysis.
“If we can create a system that automatically detects anomalies in facility process data, we can save a lot of time and labor costs,” Wilson said.
In a study published in The International Journal of Nuclear Safeguards and Non-Proliferation, Wilson worked with researchers at Sandia National Laboratories to create a virtual replica of a reprocessing facility, then trained a machine learning model to detect patterns in the process data that could indicate a nuclear material leak. In this simulated environment, the model showed encouraging results. “Although it is unlikely that this approach will be used in the near future, our system offers a promising start to complement existing safeguards,” Wilson said.
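The general technique can be illustrated with a small sketch. The snippet below trains an anomaly detector on simulated “normal operation” process readings and then scores readings with an injected leak-like drift; the feature names, numbers, and the use of scikit-learn’s IsolationForest are illustrative assumptions, not PNNL’s actual model or data.

```python
# Illustrative sketch only: flag anomalies in simulated process data.
# Feature names and the injected "leak" drift are invented for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulate normal operation: tank level, flow rate, and solution density.
normal = rng.normal(loc=[50.0, 5.0, 1.2], scale=[1.0, 0.2, 0.01], size=(1000, 3))

# Inject a small "leak": tank level drifts low while other readings stay nominal.
leak = rng.normal(loc=[44.0, 5.0, 1.2], scale=[1.0, 0.2, 0.01], size=(10, 3))

# Fit the detector on normal data only, then score the suspect readings.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.predict(leak)  # -1 = anomaly, 1 = normal
print((scores == -1).sum(), "of", len(leak), "leak readings flagged")
```

Because the detector learns only what normal operation looks like, readings that drift outside that envelope stand out without anyone specifying in advance what a leak looks like.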
Analyzing text to find signs of nuclear weapons proliferation
PNNL data scientists developed a machine learning tool based on Google’s BERT, a language model trained on Wikipedia data. Language models allow computers to “understand” human language: they can read text and extract important information from it, including context and nuance. Users can ask BERT questions like “What is the population of Switzerland?” and get the correct answer.
A model trained on Wikipedia is excellent at answering general questions but lacks knowledge of the nuclear domain, so the team created a tool called AJAX to fill this gap.
“AJAX is still in its infancy, but it has the potential to save analysts a lot of time by providing both direct answers to queries and evidence for those answers,” Subramanian said. That evidence may be of particular interest to researchers, because most machine learning models are “black boxes” that leave no trail of evidence for their answers, even when those answers are correct. AJAX aims to provide auditability by retrieving the documents that contain the evidence for its answers.
According to Subramanian, “When a domain is as important as nuclear proliferation detection, it is important to know where the information comes from.”
This development was published in the International Journal of Nuclear Security and Non-Proliferation.
Currently, IAEA analysts spend a great deal of time reading research papers and manually combing through large amounts of data for information about nuclear proliferation. In the future, the researchers hope, analysts will be able to ask AJAX questions and receive not only answers but also links to the sources of the information, greatly simplifying their work.
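The evidence-retrieval idea can be sketched in a few lines. The toy below pairs each answer with the source passage it came from, so the result can be audited; the corpus, document IDs, and TF-IDF scoring are invented stand-ins, not AJAX’s actual retrieval method.

```python
# Illustrative sketch only: answer a question and return the supporting passage,
# in the spirit of AJAX's evidence-backed answers. Corpus contents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "doc_enrichment":   "Uranium enrichment increases the fraction of U-235 by isotope separation.",
    "doc_reprocessing": "Reprocessing separates plutonium and uranium from spent nuclear fuel.",
    "doc_safeguards":   "IAEA safeguards include routine inspections and sample collection.",
}

vectorizer = TfidfVectorizer().fit(corpus.values())
doc_matrix = vectorizer.transform(corpus.values())

def answer_with_evidence(question: str) -> tuple[str, str]:
    """Return (source document id, supporting passage) for a question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    doc_ids = list(corpus)
    best = sims.argmax()
    return doc_ids[best], corpus[doc_ids[best]]

doc_id, passage = answer_with_evidence("What does reprocessing separate from spent fuel?")
print(doc_id, "->", passage)
```

The point of the design is the return value: the analyst gets not just text but a pointer to the document that supports it, which is what makes the answer auditable.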
Image analysis to determine the origin of nuclear materials
Sometimes law enforcement officials discover nuclear material that is outside regulatory control and of unknown origin. Determining where such material came from and how it was made is critical, because a seized sample may be part of a larger quantity of illicitly trafficked material. Forensic analysis of nuclear material is one of the analytical tools used in this important task.
PNNL researchers, in collaboration with the University of Utah, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, developed machine learning algorithms for forensic analysis of these samples. Their method uses electron microscopy images to compare the microstructure of nuclear samples: different materials exhibit subtle differences that machine learning can detect.
“Imagine that synthesizing nuclear material is like baking cookies,” said Elizabeth Jurrus, director of the MARS initiative. “Two people can use the same recipe and make cookies that look different. The same goes for nuclear materials.”
The synthesis of these materials is affected by several factors, such as local humidity and the purity of the starting materials. As a result, nuclear material produced at a particular facility has a distinctive structure, a “characteristic appearance,” that can be seen under an electron microscope.
This study was published in the Journal of Nuclear Materials.
The researchers created an image library of various nuclear samples, then used machine learning to compare images of unknown samples against that library to determine the samples’ origin.
This will help nuclear analysts determine the source of the material and direct further research.
It will likely be some time before agencies such as the IAEA incorporate machine learning into their nuclear threat detection methods, but research like this can inform and streamline that transition.
“We don’t expect machine learning to replace anyone’s job, but we see it as a way to make their job easier,” the researchers say. “We can use machine learning to identify critical information so analysts can focus on the information that matters most.”