When an abnormal event occurs in a nuclear power plant (NPP) system, it can cause severe safety problems if it is not mitigated. An operator therefore diagnoses the abnormality from alarms and monitored parameters and takes appropriate action. Among these tasks, diagnosis can sharply increase the operator's workload because it must be performed accurately and as quickly as possible to minimize the consequences of the event. Recently, operator support systems based on artificial neural networks (ANNs) have been developed to assist with the diagnosis task. However, an ANN is a black-box model that cannot explain the reasoning behind its predictions. As a result, an operator can neither detect a misdiagnosis by the model nor fully trust its diagnosis. To address this issue, we aim to provide evidence supporting the diagnoses of an NPP abnormality classification model. To find more appropriate evidence for NPP state diagnosis, this study verifies the improvement in interpretability obtained when Guided Backpropagation is combined with an explanation method. A convolutional neural network that classifies each NPP abnormal state with high accuracy is used as the diagnosis model, and explanation methods are applied to calculate the contribution of each plant parameter in the input data to the classification. The interpretability of each method is compared by reclassifying the NPP states using datasets composed of the highly relevant parameters identified from the calculation results. By making the model more transparent, these explanations allow operators to trust the model's diagnoses.
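As a minimal sketch of the idea (not the paper's actual model), Guided Backpropagation modifies the ReLU backward pass: in addition to the standard rule of passing gradients only where the forward activation was positive, it also zeroes gradient entries that are themselves negative, so only positively contributing input parameters receive relevance. The toy network, weights, and parameter counts below are hypothetical placeholders standing in for a trained diagnosis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy diagnosis network: plant parameters -> hidden ReLU -> class scores.
# Random weights stand in for a trained NPP abnormality classifier.
n_params, n_hidden, n_classes = 6, 4, 3
W1 = rng.normal(size=(n_hidden, n_params))
W2 = rng.normal(size=(n_classes, n_hidden))

def guided_backprop(x, target_class):
    """Relevance of each input parameter for one class score.

    Standard ReLU backprop keeps the gradient only where the forward
    pre-activation was positive; Guided Backpropagation additionally
    zeroes entries where the incoming gradient itself is negative.
    """
    z = W1 @ x              # pre-activations of the hidden layer
    # gradient of the target class score w.r.t. the hidden activations
    g = W2[target_class]
    # guided ReLU backward pass: apply both the forward and gradient masks
    g = g * (z > 0) * (g > 0)
    # propagate back to the input parameters
    return W1.T @ g

x = rng.normal(size=n_params)
relevance = guided_backprop(x, target_class=1)
print(relevance.shape)  # (6,) -- one relevance value per plant parameter
```

Ranking the input parameters by the magnitude of `relevance` and retraining or reclassifying on only the top-ranked ones mirrors the comparison strategy described above.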