NUCLEAR ENGINEERING AND TECHNOLOGY, vol. 57, no. 8, article 103589
Abstract
In nuclear power plants (NPPs), operators can face high cognitive workloads when diagnosing abnormal events, as they must monitor numerous parameters and consider hundreds of potential scenarios. Artificial intelligence (AI) technologies have been proposed to support this process by providing diagnostic results; however, their lack of transparency can lead to out-of-the-loop unfamiliarity and distrust, hindering effective decision-making. To address these challenges, this study introduces a novel concept for enhancing the understandability and trustworthiness of diagnostic support systems through Explainable Artificial Intelligence (XAI). The first method in the proposed concept rearranges monitoring parameters according to system structures so that relationships among parameters are reflected. The second method refines the explanations produced by XAI using Multilevel Flow Modeling (MFM) to ensure consistency with physical flows, and visualizes the components identified as diagnostic causes on a plant map. By filtering out incomprehensible information and presenting diagnostic causes intuitively, the system enables operators to identify the expected causes of diagnostic results directly on the NPP map at the component or system level. This approach provides explainable and comprehensible support information, fostering trust in the system and improving diagnostic efficiency in abnormal situations.