
Toward Interactive Sound Source Localization: Better Align Sight and Sound!

Author(s)
Senocak, Arda; Ryu, Hyeonggon; Kim, Junsik; Oh, Tae-Hyun; Pfister, Hanspeter; Chung, Joon Son
Issued Date
2025-09
DOI
10.1109/TPAMI.2025.3573994
URI
https://scholarworks.unist.ac.kr/handle/201301/87862
Citation
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.47, no.9, pp.7643 - 7659
Abstract
Recent studies on learning-based sound source localization have primarily focused on localization performance. However, prior work and existing benchmarks often overlook a crucial aspect: cross-modal interaction, which is essential for interactive sound source localization. This interaction is vital for understanding semantically matched or mismatched audio-visual events, such as silent objects or true sound sources among multiple objects. In this work, we comprehensively examine the cross-modal interaction of existing methods, benchmarks, evaluation metrics, and cross-modal understanding tasks. We identify the overlooked points of previous studies and make several contributions to address them. First, we propose a learning framework that incorporates retrieval-based and hand-crafted augmentation techniques, enhancing cross-modal interaction through cross-modal alignment. Second, we introduce new evaluation metrics to accurately and rigorously assess localization methods, focusing on both localization performance and cross-modal interaction. Third, to thoroughly analyze interactive sound source localization, we present a new semi-synthetic benchmark with diverse categorical combinations. Finally, we evaluate both interactive sound source localization and auxiliary cross-modal retrieval tasks, benchmarking competing methods alongside our own. Our new benchmark and evaluation metrics reveal that previous methods struggle with interactive sound source localization tasks, largely due to their limited cross-modal interaction capabilities. Our method, which features enhanced cross-modal alignment, demonstrates superior sound source localization and cross-modal interaction performance. This work provides the most comprehensive analysis of sound source localization to date, with extensive validation of competing methods on both existing and new benchmarks using both new and standard evaluation metrics.
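The abstract describes enhancing cross-modal interaction through cross-modal alignment between audio and visual streams. A common way to implement such alignment is a symmetric contrastive (InfoNCE-style) objective over paired audio and visual embeddings, pulling matched pairs together and pushing mismatched pairs apart. The sketch below illustrates that generic objective only; it is not the paper's actual method, and all function names and parameters are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize embeddings to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_modal_nce_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/visual embeddings.

    Matched pairs (the diagonal of the similarity matrix) are pulled
    together; all other pairs in the batch act as negatives.
    """
    a = l2_normalize(np.asarray(audio_emb, dtype=float))
    v = l2_normalize(np.asarray(visual_emb, dtype=float))
    logits = a @ v.T / temperature  # (B, B) pairwise similarity matrix

    def ce(lgts):
        # Cross-entropy with the diagonal (matched pair) as the target class.
        lgts = lgts - lgts.max(axis=1, keepdims=True)  # numerical stability
        log_prob = lgts - np.log(np.exp(lgts).sum(axis=1, keepdims=True))
        return -np.diag(log_prob).mean()

    # Average the audio-to-visual and visual-to-audio directions.
    return 0.5 * (ce(logits) + ce(logits.T))
```

A matched batch (each audio embedding aligned with its own visual embedding) should yield a lower loss than a mismatched one, which is the property the abstract's alignment objective exploits.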
Publisher
IEEE COMPUTER SOC
ISSN
1939-3539
Keyword (Author)
Benchmark testing; Visualization; Measurement; Semantics; Contrastive learning; Cross modal retrieval; Representation learning; Training; Dogs; Audio-visual learning; sound source localization; self-supervision; multi-modal learning; cross-modal retrieval; Location awareness
