File Download

There are no files associated with this item.

Related Researcher

김성일

Kim, Sungil
Data Analytics Lab.

Detailed Information


Multi-modal Lifelog Data Fusion for Improved Human Activity Recognition: A Hybrid Approach

Author(s)
Oh, YongKyung; Kim, Sungil
Issued Date
2024-10
DOI
10.1016/j.inffus.2024.102464
URI
https://scholarworks.unist.ac.kr/handle/201301/82331
Citation
INFORMATION FUSION, v.110, pp.102464
Abstract
The rapid growth of lifelog data, collected through smartphones and wearable devices, has driven the need for better Human Activity Recognition (HAR) solutions. However, lifelog data is complex and challenging to analyze due to its diverse sources of information. In response, we introduce an innovative hybrid data fusion framework for HAR. This framework comprises three key elements: a hybrid fusion mechanism, an attention-based classifier, and an ensemble-based recognition approach. Our hybrid fusion mechanism expertly combines the advantages of late and intermediate fusion, enhancing classification performance and improving the network's ability to learn connections between different data modalities. Additionally, our solution incorporates an attention-based classifier and an ensemble approach, ensuring robust and consistent performance in real-world scenarios. We evaluated our method across multiple public lifelog datasets, demonstrating that our hybrid fusion approach consistently surpasses existing fusion strategies in HAR, promising significant advancements in activity recognition.
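To make the abstract's contrast between intermediate fusion (combining hidden features before classification) and late fusion (combining per-modality predictions) concrete, here is a minimal NumPy sketch. The two-modality setup, the random linear "encoders" and heads, and the blend weight `alpha` are illustrative assumptions, not the paper's architecture; the paper's attention-based classifier and ensemble are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy setup: two lifelog modalities (say, accelerometer and location),
# a batch of 8 samples, and 4 activity classes.
n_classes = 4
x_acc = rng.normal(size=(8, 16))  # 16-dim accelerometer features
x_gps = rng.normal(size=(8, 16))  # 16-dim location features

# Stand-in per-modality encoders and classifier heads (random linear maps).
W_acc, W_gps = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
head_acc = rng.normal(size=(8, n_classes))
head_gps = rng.normal(size=(8, n_classes))
head_joint = rng.normal(size=(16, n_classes))  # head on concatenated features

h_acc, h_gps = np.tanh(x_acc @ W_acc), np.tanh(x_gps @ W_gps)

# Intermediate fusion: concatenate hidden features, then classify jointly.
p_intermediate = softmax(np.concatenate([h_acc, h_gps], axis=1) @ head_joint)

# Late fusion: classify each modality separately, then average probabilities.
p_late = 0.5 * (softmax(h_acc @ head_acc) + softmax(h_gps @ head_gps))

# Hybrid fusion: blend the two strategies (alpha is a hyperparameter here).
alpha = 0.5
p_hybrid = alpha * p_intermediate + (1 - alpha) * p_late
print(p_hybrid.shape)  # one probability vector per sample
```

Both strategies produce valid class distributions, so any convex blend of them does too; the hybrid simply lets the model draw on cross-modal feature interactions and per-modality decisions at once.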
Publisher
ELSEVIER
ISSN
1566-2535
Keyword (Author)
Multi-modal data; Data fusion strategy; Hybrid approach; Human activity recognition
Keyword
OF-THE-ART; INFORMATION FUSION; DEEP; CHALLENGES
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.