Nuclear power plant (NPP) operators must diagnose abnormal events accurately and promptly to prevent reactor trips and ensure safety. While artificial intelligence (AI) models can support operators in this complex task, their limited explainability often creates trust issues. This study proposes a novel approach to validating AI diagnostic results using explainable AI, specifically Layer-wise Relevance Propagation with the epsilon rule (LRP-ε), to determine whether the model's explanations are consistent with the diagnosed event. By introducing a sub-model trained on explanation patterns, the approach detects potentially incorrect diagnoses whose explanations deviate from the expected patterns. Case studies using NPP simulator data demonstrated that the proposed method identified approximately 80% of misdiagnosed cases as untrustworthy, effectively reducing operator confusion caused by model errors. This validation process enhances both NPP safety and operator trust in AI diagnostic systems by providing an independent verification mechanism for model outputs, thereby reducing the risk of operators acting on incorrect diagnostic information. The proposed approach represents a significant step toward making AI-based diagnostic systems more reliable and trustworthy in safety-critical nuclear applications.
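For reference, the ε-rule of Layer-wise Relevance Propagation mentioned above backpropagates a relevance score $R_k$ from each neuron $k$ to the neurons $j$ feeding it, with a small stabilizer $\epsilon > 0$ damping contributions when the pre-activation is near zero:

$$
R_j \;=\; \sum_k \frac{a_j w_{jk}}{\sum_{j'} a_{j'} w_{j'k} \;+\; \epsilon \,\operatorname{sign}\!\Big(\sum_{j'} a_{j'} w_{j'k}\Big)} \, R_k ,
$$

where $a_j$ is the activation of neuron $j$ and $w_{jk}$ is the weight connecting it to neuron $k$ in the layer above.

The sketch below illustrates the validation idea under stated assumptions: a sub-model is trained to map LRP relevance patterns back to event classes, and a diagnosis is flagged as untrustworthy when the sub-model's prediction disagrees with the main model's diagnosis. The names (`sub_model`, `validate_diagnosis`), the random-forest choice, and the synthetic data are all hypothetical; the abstract does not specify the sub-model architecture or the data pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: flattened LRP relevance maps and the true event label of
# each training run; real inputs would come from NPP simulator scenarios.
n_train, n_features, n_events = 500, 64, 4
R_train = rng.normal(size=(n_train, n_features))   # relevance maps
y_train = rng.integers(0, n_events, size=n_train)  # event labels

# The sub-model learns which relevance pattern each event class produces.
sub_model = RandomForestClassifier(n_estimators=100, random_state=0)
sub_model.fit(R_train, y_train)

def validate_diagnosis(relevance_map: np.ndarray, diagnosed_event: int) -> bool:
    """Return True when the explanation pattern is consistent with the
    diagnosed event, i.e. the sub-model recovers the same event class."""
    predicted = sub_model.predict(relevance_map.reshape(1, -1))[0]
    return predicted == diagnosed_event

# A diagnosis whose explanation maps to a different event class is flagged
# as untrustworthy instead of being presented to the operator as-is.
R_new = rng.normal(size=n_features)
if not validate_diagnosis(R_new, diagnosed_event=2):
    print("Diagnosis flagged as untrustworthy: explanation pattern mismatch")
```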