Full metadata record

DC Field Value Language
dc.citation.number 9 -
dc.citation.startPage 4534 -
dc.citation.title APPLIED SCIENCES-BASEL -
dc.citation.volume 12 -
dc.contributor.author Kim, Yeongjun -
dc.contributor.author Lee, Harim -
dc.date.accessioned 2023-12-21T14:11:39Z -
dc.date.available 2023-12-21T14:11:39Z -
dc.date.created 2022-05-27 -
dc.date.issued 2022-05 -
dc.description.abstract At a disaster site, terrestrial communication infrastructure is often destroyed or malfunctioning, so it is very difficult to detect the existence of survivors at the site. At such sites, UAVs are rapidly emerging as an alternative to mobile base stations for establishing temporary infrastructure. In this paper, a novel deep-learning-based multi-source detection scheme is proposed for the scenario in which a UAV estimates the number of survivors sending rescue signals within its coverage at a disaster site. For practicality, survivors are assumed to use off-the-shelf smartphones to send rescue signals, so the transmitted signals are orthogonal frequency division multiplexing (OFDM)-modulated. Since a line of sight between the UAV and survivors generally cannot be secured, the sensing performance of existing radar techniques deteriorates significantly. Furthermore, we find that the survivors' transmitted signals are unavoidably asynchronous with each other, so existing frequency-domain multi-source classification approaches cannot work. To overcome the limitations of these existing technologies, we propose a lightweight deep-learning-based multi-source detection scheme by carefully designing the neural network architecture, the input and output signals, and the training method. Extensive numerical simulations show that the proposed scheme outperforms existing methods at various SNRs in a scenario where synchronous and asynchronous transmissions are mixed in the received signal. In almost all cases, the precision and recall of the proposed scheme are nearly one, even when users' signal-to-noise ratios (SNRs) vary randomly within a certain range. Precision and recall are improved by up to 100% compared to existing methods, confirming that the proposal overcomes the asynchronicity limitation of existing works. Moreover, on an Intel(R) Core(TM) i7-6900K CPU, the processing time of our proposal for one case is 31.8 milliseconds. As a result, the proposed scheme provides robust and reliable detection performance with fast processing time. The proposal can also be applied to any field that needs to detect the number of wireless signals in a scenario where synchronization between the signals is not guaranteed. -
dc.identifier.bibliographicCitation APPLIED SCIENCES-BASEL, v.12, no.9, pp.4534 -
dc.identifier.doi 10.3390/app12094534 -
dc.identifier.issn 2076-3417 -
dc.identifier.scopusid 2-s2.0-85129696786 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/58589 -
dc.identifier.url https://www.mdpi.com/2076-3417/12/9/4534 -
dc.identifier.wosid 000794720900001 -
dc.language English -
dc.publisher MDPI -
dc.title Deep-Learning-Based Stream-Sensing Method for Detecting Asynchronous Multiple Signals -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Chemistry, Multidisciplinary; Engineering, Multidisciplinary; Materials Science, Multidisciplinary; Physics, Applied -
dc.relation.journalResearchArea Chemistry; Engineering; Materials Science; Physics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor UAV-based rescue system -
dc.subject.keywordAuthor deep-learning-based system -
dc.subject.keywordAuthor asynchronous stream sensing -
dc.subject.keywordPlus CHANNEL ESTIMATION -
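As a rough illustration of the asynchronicity problem the abstract describes, the sketch below simulates the mixture a UAV receiver would observe when several users transmit OFDM symbols with independent, unknown start times. All parameters (64-point FFT, 16-sample cyclic prefix, 3 users, noise level) are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fft, cp = 64, 16          # illustrative OFDM size and cyclic prefix length
n_users = 3                 # number of "survivors" transmitting

def ofdm_symbol():
    """One OFDM symbol: random QPSK on all subcarriers, IFFT, cyclic prefix."""
    bits = rng.integers(0, 4, n_fft)
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
    time = np.fft.ifft(qpsk) * np.sqrt(n_fft)
    return np.concatenate([time[-cp:], time])

# Each user's symbol arrives with its own unknown delay, so the sum at the
# receiver is an asynchronous mixture of overlapping OFDM symbols.
frame_len = 2 * (n_fft + cp)
rx = np.zeros(frame_len, dtype=complex)
for _ in range(n_users):
    delay = rng.integers(0, n_fft + cp)   # asynchronous start time
    sym = ofdm_symbol()
    rx[delay:delay + len(sym)] += sym

# Additive receiver noise
rx += 0.1 * (rng.standard_normal(frame_len)
             + 1j * rng.standard_normal(frame_len))
```

An FFT window aligned to any one user's symbol cuts through the other users' symbols mid-way, so their energy smears across subcarriers instead of landing on an orthogonal grid. This is the effect that, per the abstract, breaks frequency-domain multi-source classification and motivates the time-domain deep-learning detector.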

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.