Machine learning prediction of anxiety symptoms in social anxiety disorder: utilizing multimodal data from virtual reality sessions

Author(s)
Park, Jin-Hyun; Shin, Yu-Bin; Jung, Dooyoung; Hur, Ji-Won; Pack, Seung Pil; Lee, Heon-Jeong; Lee, Hwamin; Cho, Chul-Hyun
Issued Date
2025-01
DOI
10.3389/fpsyt.2024.1504190
URI
https://scholarworks.unist.ac.kr/handle/201301/86155
Citation
FRONTIERS IN PSYCHIATRY, v.15, article no. 1504190
Abstract
Introduction: Machine learning (ML) is an effective tool for predicting mental states and a key technology in digital psychiatry. This study aimed to develop ML algorithms that predict the upper tertile group of various anxiety symptoms from multimodal data collected during virtual reality (VR) therapy sessions for patients with social anxiety disorder (SAD), and to evaluate their predictive performance across data types.

Methods: The study included 32 individuals diagnosed with SAD and finalized a dataset of 132 samples from 25 participants. Multimodal (physiological and acoustic) data were collected during VR sessions simulating social anxiety scenarios. Acoustic features were extracted with the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS), and statistical attributes were extracted from the time-series physiological responses. ML models predicting the upper tertile group for various anxiety symptoms in SAD were developed using Random Forest, extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and categorical boosting (CatBoost). The best hyperparameters were explored through grid search or random search, and the models were validated using stratified cross-validation and leave-one-out cross-validation.

Results: CatBoost using multimodal features exhibited high performance, particularly for the Social Phobia Scale, with an area under the receiver operating characteristic curve (AUROC) of 0.852. It also showed strong performance in predicting cognitive symptoms, with the highest AUROC of 0.866 for the Post-Event Rumination Scale. For generalized anxiety, LightGBM's prediction of the State-Trait Anxiety Inventory-trait reached an AUROC of 0.819. In the same analyses, models using only physiological features had AUROCs of 0.626, 0.744, and 0.671, whereas models using only acoustic features had AUROCs of 0.788, 0.823, and 0.754.

Conclusions: This study showed that an ML algorithm using integrated multimodal data can predict upper-tertile anxiety symptoms in patients with SAD with higher performance than models using only the acoustic or physiological data obtained during a VR session. These results can serve as evidence for personalized VR sessions and demonstrate the strength of multimodal data in clinical use.
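The evaluation pipeline described in the Methods (upper-tertile labeling, gradient-boosting classification, stratified cross-validation, AUROC) can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins for the VR-session features and symptom scores, and scikit-learn's GradientBoostingClassifier stands in for the CatBoost/LightGBM/XGBoost models named in the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n_samples, n_features = 132, 20                     # 132 samples, as in the study
X = rng.normal(size=(n_samples, n_features))        # stand-in multimodal features
scores = 2 * X[:, 0] + rng.normal(size=n_samples)   # stand-in anxiety-scale scores

# Upper-tertile labeling: the top third of symptom scores form the positive class.
threshold = np.quantile(scores, 2 / 3)
y = (scores >= threshold).astype(int)

# Stratified cross-validation with AUROC as the performance metric.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aurocs = []
for train_idx, test_idx in cv.split(X, y):
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[test_idx])[:, 1]
    aurocs.append(roc_auc_score(y[test_idx], proba))

print(f"mean AUROC: {np.mean(aurocs):.3f}")
```

With real data, the same loop would be repeated per anxiety scale and per feature set (acoustic only, physiological only, multimodal) to produce the AUROC comparisons reported in the Results.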
Publisher
FRONTIERS MEDIA SA
ISSN
1664-0640
Keyword (Author)
digital phenotyping; digital psychiatry; social anxiety disorder; virtual reality intervention; anxiety prediction; machine learning; multimodal data
Keyword
HEART-RATE-VARIABILITY; METAANALYSIS; RESPONSES; PHOBIA; MODEL


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.