Related Researcher

Comuzzi, Marco (Intelligent Enterprise Lab.)

Detailed Information

Explainable predictive process monitoring: a user evaluation

Author(s)
Rizzi, Williams; Comuzzi, Marco; Di Francescomarino, Chiara; Ghidini, Chiara; Lee, Suhwan; Maggi, Fabrizio Maria; Nolte, Alexander
Issued Date
2024-10
DOI
10.1007/s44311-024-00003-3
URI
https://scholarworks.unist.ac.kr/handle/201301/85409
Citation
Process Science, v.1, pp.3
Abstract
Explainability is motivated by the lack of transparency of black-box machine learning approaches, which does not foster trust in and acceptance of machine learning algorithms. This also applies to the predictive process monitoring field, where predictions obtained by applying machine learning techniques need to be explained to users in order to gain their trust and acceptance. In this work, we carry out a user evaluation of explanation approaches for predictive process monitoring, aiming to investigate whether and how the explanations provided (i) are understandable; (ii) are useful in decision-making tasks; (iii) can be further improved for process analysts with different levels of predictive process monitoring expertise. The results of the user evaluation show that, although explanation plots are overall understandable and useful for decision-making tasks for business process management users with and without experience in predictive process monitoring, differences exist in the comprehension and usage of different plots, as well as in the way users with different predictive process monitoring expertise understand and use them.
Publisher
Springer Nature
ISSN
2948-2178
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.