In a research paper published in January 2023 in the prestigious journal IEEE Transactions on Industrial Informatics, Dr. Karl Ezra Pilario and his co-researchers in South Korea developed a new framework that uses explainable artificial intelligence (XAI) methods to accurately diagnose abnormal events in large industrial processes.
According to IBM, the term “XAI” refers to a set of methods that allow humans to understand the predictions made by machine learning models. These methods can help build trust in AI systems that are now being integrated into the day-to-day operations of industrial plants.
The research team first trained a deep learning-based anomaly detector called the Adversarial Auto-Encoder (AAE) on thousands of data points representing normal process measurements. The AAE automatically generates a health indicator: a single number that represents the overall condition of the entire plant at every instant. The researchers then built an XAI method called SHapley Additive exPlanations (SHAP) on top of the AAE so that any significant fluctuation in the health indicator can be attributed to individual process variables. Hence, the abnormal signature on each process variable can now be associated with a specific fault type.
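To give a flavor of how this works, here is a minimal sketch of the same idea under simplifying assumptions: instead of the authors' Adversarial Auto-Encoder, a linear autoencoder (PCA reconstruction) serves as the anomaly detector, its reconstruction error acts as the health indicator, and exact Shapley values, computed by brute force over all variable coalitions, attribute a high health indicator to individual variables. The data, variable count, and fault injection below are all hypothetical, not from the paper.

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(0)

# Hypothetical "normal" operating data: 3 correlated process variables.
X_normal = rng.normal(size=(500, 3)) @ np.array([[1.0, 0.5, 0.0],
                                                 [0.0, 1.0, 0.3],
                                                 [0.0, 0.0, 1.0]])
mu, sigma = X_normal.mean(axis=0), X_normal.std(axis=0)
Z = (X_normal - mu) / sigma

# Stand-in for the AAE: keep the top-2 principal directions as the "encoder".
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T  # loadings, shape (3, 2)

def health_indicator(x):
    """Reconstruction error of a standardized sample: our single-number
    plant condition score (larger = more abnormal)."""
    z = (x - mu) / sigma
    recon = z @ P @ P.T
    return float(np.sum((z - recon) ** 2))

def shapley_attribution(x, baseline):
    """Exact Shapley values attributing health_indicator(x) to each variable.
    Variables absent from a coalition are set to their baseline values."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                x_with, x_without = baseline.copy(), baseline.copy()
                for j in coalition:
                    x_with[j] = x[j]
                    x_without[j] = x[j]
                x_with[i] = x[i]  # marginal contribution of variable i
                phi[i] += w * (health_indicator(x_with) - health_indicator(x_without))
    return phi

baseline = X_normal.mean(axis=0)
faulty = baseline + np.array([0.0, 4.0, 0.0])  # inject a fault on variable index 1
phi = shapley_attribution(faulty, baseline)
print(phi)  # the fault on variable index 1 should dominate the attribution
```

By the efficiency property of Shapley values, the attributions sum to the change in the health indicator between the faulty sample and the baseline, which is exactly what lets an operator trace a spike in the plant-wide score back to specific process variables. The real framework replaces the linear autoencoder with a trained AAE and uses the SHAP library's approximations rather than this exponential-time exact computation.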
When tested on a range of fault scenarios in two case studies, a continuous stirred-tank reactor (CSTR) and the Tennessee Eastman Process (TEP) challenge, the proposed method diagnosed each fault more accurately than previous methods. The faults studied include catalyst decay, heat transfer fouling, feed disturbances, sensor faults, reactor overpressure, and valve stiction, among others. The results show that XAI techniques and deep learning models can be highly effective for large-scale fault detection and diagnosis.
The potential applications of this research are significant. It can help plant operators quickly and effectively diagnose problems, take appropriate corrective actions, and prevent the problems from recurring.
“Explainable AI is now a must when it comes to applying deep learning models in industrial operations,” said Dr. Pilario. “By providing transparency into the model’s predictions, we can help make AI more accessible for plant operators and plant managers to use.”
Overall, the research paper provides a promising approach for using XAI to improve fault diagnosis in industrial processes. By combining machine learning algorithms with clear and understandable explanations, the XAI framework has the potential to improve safety, reduce downtime, and increase productivity in a wide range of industries. The research team is now working on extending the proposed framework to other deep learning models and further improving its capabilities, for instance, toward data-driven fault prognosis and predictive maintenance.
To learn more about the details of this paper, visit: https://doi.org/10.1109/TII.2023.3240601