File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

Gong, Taesik (공태식)
Ubiquitous AI Lab


Full metadata record

DC Field Value Language
dc.citation.endPage 8667 -
dc.citation.number 9 -
dc.citation.startPage 8656 -
dc.citation.title IEEE TRANSACTIONS ON MOBILE COMPUTING -
dc.citation.volume 24 -
dc.contributor.author Yoon, Hyungjun -
dc.contributor.author Cha, Hyeongheon -
dc.contributor.author Nguyen, Hoang C. -
dc.contributor.author Gong, Taesik -
dc.contributor.author Lee, Sung-Ju -
dc.date.accessioned 2025-09-29T09:30:07Z -
dc.date.available 2025-09-29T09:30:07Z -
dc.date.created 2025-09-26 -
dc.date.issued 2025-09 -
dc.description.abstract Pre-trained representations acquired via self-supervised learning can achieve high accuracy even on tasks with small training data. Unlike in the vision and natural language processing domains, pre-training for IMU-based applications is challenging, as there are few public datasets with sufficient size and diversity to learn generalizable representations. To overcome this problem, we propose IMG2IMU, which adapts pre-trained representations from large-scale images to diverse IMU sensing tasks. We convert the sensor data into visually interpretable spectrograms so that the model can utilize the knowledge gained from vision. We further present a sensor-aware pre-training method for images that enables models to acquire knowledge particularly impactful for IMU sensing applications. This involves contrastive learning on an augmentation set customized to the properties of sensor data. Our evaluation on four different IMU sensing tasks shows that IMG2IMU outperforms baselines pre-trained on sensor data by an average of 9.6%p in F1-score, illustrating that vision knowledge can be usefully incorporated into IMU sensing applications where only limited training data is available. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON MOBILE COMPUTING, v.24, no.9, pp.8656 - 8667 -
dc.identifier.doi 10.1109/TMC.2025.3556998 -
dc.identifier.issn 1536-1233 -
dc.identifier.scopusid 2-s2.0-105001968910 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/88117 -
dc.identifier.wosid 001547970100026 -
dc.language English -
dc.publisher IEEE COMPUTER SOC -
dc.title From Vision to Motion: Translating Large-Scale Knowledge for Data-Scarce IMU Applications -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Telecommunications -
dc.relation.journalResearchArea Computer Science; Telecommunications -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Data models -
dc.subject.keywordAuthor Human activity recognition -
dc.subject.keywordAuthor Adaptation models -
dc.subject.keywordAuthor Visualization -
dc.subject.keywordAuthor Contrastive learning -
dc.subject.keywordAuthor Training data -
dc.subject.keywordAuthor Translation -
dc.subject.keywordAuthor Training -
dc.subject.keywordAuthor Mobile sensing -
dc.subject.keywordAuthor Sensors -
dc.subject.keywordAuthor Spectrogram -
dc.subject.keywordAuthor deep learning -
dc.subject.keywordAuthor self-supervised learning -
dc.subject.keywordAuthor contrastive learning -
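The abstract above describes two technical steps: converting raw IMU readings into spectrogram images, and pre-training a vision model with contrastive learning on sensor-aware augmentations. The Python sketch below illustrates only the first step, under assumed parameters (a 50 Hz sampling rate, 5-second windows, and a per-axis-to-channel mapping); it is a minimal illustration of the idea, not the authors' published implementation.

import numpy as np
from scipy import signal

FS = 50          # assumed IMU sampling rate (Hz), not taken from the paper
WINDOW_SEC = 5   # assumed window length in seconds

def imu_window_to_spectrogram(acc_xyz: np.ndarray) -> np.ndarray:
    """Map a (samples, 3) accelerometer window to a 3-channel spectrogram.

    Each axis (x, y, z) becomes one image channel, mirroring how RGB channels
    feed a vision backbone. Returns an array of shape (3, freq_bins, time_bins).
    """
    channels = []
    for axis in range(acc_xyz.shape[1]):
        _, _, sxx = signal.spectrogram(acc_xyz[:, axis], fs=FS, nperseg=64, noverlap=48)
        channels.append(np.log1p(sxx))  # log-scale power for visual interpretability
    spec = np.stack(channels, axis=0)
    # Normalize to [0, 1] per window before feeding a pre-trained vision model.
    return (spec - spec.min()) / (spec.max() - spec.min() + 1e-8)

if __name__ == "__main__":
    # Synthetic 5-second accelerometer window standing in for real sensor data.
    fake_window = np.random.randn(FS * WINDOW_SEC, 3)
    print(imu_window_to_spectrogram(fake_window).shape)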


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.