Related Researcher

임정호

Im, Jungho
Intelligent Remote sensing and geospatial Information Science Lab.

Full metadata record

DC Field Value Language
dc.citation.number 7 -
dc.citation.title REMOTE SENSING -
dc.citation.volume 12 -
dc.contributor.author Lee, Junghee -
dc.contributor.author Han, Daehyeon -
dc.contributor.author Shin, Minso -
dc.contributor.author Im, Jungho -
dc.contributor.author Lee, Junghye -
dc.contributor.author Quackenbush, Lindi J. -
dc.date.accessioned 2023-12-21T17:41:01Z -
dc.date.available 2023-12-21T17:41:01Z -
dc.date.created 2020-06-29 -
dc.date.issued 2020-04 -
dc.description.abstract This study compares different types of spectral domain transformations for convolutional neural network (CNN)-based land cover classification. A novel approach was proposed, which transforms one-dimensional (1-D) spectral vectors into two-dimensional (2-D) features: polygon graph images (CNN-Polygon) and 2-D matrices (CNN-Matrix). The motivations of this study are that (1) the shape of the converted 2-D images is more intuitive for human eyes to interpret when compared to 1-D spectral input, and (2) CNNs are highly specialized for 2-D image data and may be able to similarly utilize this information for land cover classification. Four seasonal Landsat 8 images over three study areas (Lake Tapps, Washington, USA; Concord, New Hampshire, USA; and Gwangju, Korea) were used to evaluate the proposed approach for nine land cover classes against several other methods: random forest (RF), support vector machine (SVM), 1-D CNN, and patch-based CNN. Oversampling and undersampling were conducted to examine the effect of sample size on model performance. The CNN-Polygon outperformed the other methods, with overall accuracies of about 93-95% for both Concord and Lake Tapps and 80-84% for Gwangju. The CNN-Polygon performed particularly well when the training sample size was small (fewer than 200 samples per class), while the CNN-Matrix achieved similar or higher performance as sample sizes grew. The input variables contributing to the models were analyzed through sensitivity analysis based on occlusion maps and accuracy decreases. Our results showed that a more visually intuitive representation of input features for CNN-based classification models yielded higher performance, especially when the training sample size was small. This implies that the proposed graph-based CNNs would be useful for land cover classification where reference data are limited. -
dc.identifier.bibliographicCitation REMOTE SENSING, v.12, no.7 -
dc.identifier.doi 10.3390/rs12071097 -
dc.identifier.issn 2072-4292 -
dc.identifier.scopusid 2-s2.0-85084261331 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/33030 -
dc.identifier.url https://www.mdpi.com/2072-4292/12/7/1097 -
dc.identifier.wosid 000537709600047 -
dc.language English -
dc.publisher MDPI -
dc.title Different Spectral Domain Transformation for Land Cover Classification Using Convolutional Neural Networks with Multi-Temporal Satellite Imagery -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Remote Sensing -
dc.relation.journalResearchArea Remote Sensing -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor spectral curve transformation -
dc.subject.keywordAuthor convolutional neural network -
dc.subject.keywordAuthor sensitivity analysis -
dc.subject.keywordAuthor land cover classification -
dc.subject.keywordPlus SUPPORT VECTOR MACHINE -
dc.subject.keywordPlus RANDOM FOREST -
dc.subject.keywordPlus STATISTICAL COMPARISONS -
dc.subject.keywordPlus CLASSIFIERS -
dc.subject.keywordPlus SEGMENTATION -
dc.subject.keywordPlus MAP -
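
The abstract above describes converting a 1-D spectral vector into 2-D CNN inputs in two forms: a 2-D matrix (CNN-Matrix) and a polygon graph image (CNN-Polygon). The Python sketch below is purely illustrative and is not the authors' published code: it assumes the matrix form is a dates-by-bands reshape of the multi-temporal spectrum, and it renders the polygon form as a radar-style plot of normalized band values rasterized to a small grayscale grid; the paper's exact construction (band ordering, image size, rendering style) may differ.

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # headless rendering; no display needed
    import matplotlib.pyplot as plt

    def spectrum_to_matrix(spectrum, n_dates, n_bands):
        # CNN-Matrix (assumed form): reshape the multi-temporal spectral
        # vector into a (dates x bands) 2-D array for a 2-D CNN input.
        return np.asarray(spectrum, dtype=float).reshape(n_dates, n_bands)

    def spectrum_to_polygon_image(spectrum, size=64):
        # CNN-Polygon (assumed form): draw the band values as radii of a
        # closed polygon (radar-plot style) and rasterize it to a
        # grayscale grid.
        values = np.asarray(spectrum, dtype=float)
        radii = values / (values.max() + 1e-9)  # normalize radii to [0, 1]
        angles = np.linspace(0.0, 2.0 * np.pi, len(radii), endpoint=False)
        fig = plt.figure(figsize=(1, 1), dpi=size)
        ax = fig.add_axes([0, 0, 1, 1])
        ax.axis("off")
        ax.set_xlim(-1.05, 1.05)
        ax.set_ylim(-1.05, 1.05)
        ax.fill(radii * np.cos(angles), radii * np.sin(angles), color="black")
        fig.canvas.draw()
        img = np.asarray(fig.canvas.buffer_rgba())[..., 0]  # single channel
        plt.close(fig)
        return img  # (size, size) uint8 image

    # Illustrative usage: 4 seasonal Landsat 8 scenes x 7 reflective bands.
    # The 28-element layout is an assumption, not taken from the paper.
    rng = np.random.default_rng(0)
    spectrum = rng.random(28)
    matrix_input = spectrum_to_matrix(spectrum, n_dates=4, n_bands=7)  # (4, 7)
    polygon_input = spectrum_to_polygon_image(spectrum, size=64)       # (64, 64)

Either representation can then be fed to a small 2-D CNN; per the abstract, the more visually intuitive polygon form helped most when training samples were scarce (fewer than about 200 per class).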
