Related Researcher

Kim, Taehwan (김태환)


Full metadata record

DC Field Value Language
dc.citation.conferencePlace CC -
dc.citation.conferencePlace Shanghai -
dc.citation.endPage 6164 -
dc.citation.startPage 6160 -
dc.citation.title IEEE International Conference on Acoustics, Speech and Signal Processing -
dc.contributor.author Kim, Taehwan -
dc.contributor.author Wang, Weiran -
dc.contributor.author Tang, Hao -
dc.contributor.author Livescu, Karen -
dc.date.accessioned 2023-12-19T21:08:22Z -
dc.date.available 2023-12-19T21:08:22Z -
dc.date.created 2021-09-01 -
dc.date.issued 2016-03 -
dc.description.abstract We study the problem of recognition of fingerspelled letter sequences in American Sign Language in a signer-independent setting. Fingerspelled sequences are both challenging and important to recognize, as they are used for many content words such as proper nouns and technical terms. Previous work has shown that it is possible to achieve almost 90% accuracies on fingerspelling recognition in a signer-dependent setting. However, the more realistic signer-independent setting presents challenges due to significant variations among signers, coupled with the dearth of available training data. We investigate this problem with approaches inspired by automatic speech recognition. We start with the best-performing approaches from prior work, based on tandem models and segmental conditional random fields (SCRFs), with features based on deep neural network (DNN) classifiers of letters and phonological features. Using DNN adaptation, we find that it is possible to bridge a large part of the gap between signer-dependent and signer-independent performance. Using only about 115 transcribed words for adaptation from the target signer, we obtain letter accuracies of up to 82.7% with frame-level adaptation labels and 69.7% with only word labels. -
dc.identifier.bibliographicCitation IEEE International Conference on Acoustics, Speech and Signal Processing, pp.6160 - 6164 -
dc.identifier.doi 10.1109/ICASSP.2016.7472861 -
dc.identifier.issn 1520-6149 -
dc.identifier.scopusid 2-s2.0-84973333869 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/53838 -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title Signer-independent fingerspelling recognition with deep neural network adaptation -
dc.type Conference Paper -
dc.date.conferenceDate 2016-03-20 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.