Related Researcher

Kim, Taehwan (김태환)

Full metadata record

dc.citation.conferencePlace: US
dc.citation.conferencePlace: Miami, FL
dc.citation.endPage: 124
dc.citation.startPage: 119
dc.citation.title: Automatic Speech Recognition and Understanding/Spoken Language Technology
dc.contributor.author: Kim, Taehwan
dc.contributor.author: Livescu, Karen
dc.contributor.author: Shakhnarovich, Gregory
dc.date.accessioned: 2023-12-20T01:37:52Z
dc.date.available: 2023-12-20T01:37:52Z
dc.date.created: 2021-09-01
dc.date.issued: 2012-12
dc.description.abstract: We study the recognition of fingerspelling sequences in American Sign Language from video using tandem-style models, in which the outputs of multilayer perceptron (MLP) classifiers are used as observations in a hidden Markov model (HMM)-based recognizer. We compare a baseline HMM-based recognizer, a tandem recognizer using MLP letter classifiers, and a tandem recognizer using MLP classifiers of phonological features. We present experiments on a database of fingerspelling videos. We find that the tandem approaches outperform an HMM-based baseline, and that phonological feature-based tandem models outperform letter-based tandem models.
dc.identifier.bibliographicCitation: Automatic Speech Recognition and Understanding/Spoken Language Technology, pp. 119-124
dc.identifier.doi: 10.1109/SLT.2012.6424208
dc.identifier.scopusid: 2-s2.0-84874284869
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/53841
dc.language: English
dc.publisher: IEEE
dc.title: American sign language fingerspelling recognition with phonological feature-based tandem models
dc.type: Conference Paper
dc.date.conferenceDate: 2012-12-02
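
The abstract above describes a tandem-style pipeline in which frame-level MLP posteriors serve as the observations of an HMM-based recognizer. The sketch below is only a rough illustration of that idea, not the authors' implementation: it assumes scikit-learn and hmmlearn as stand-in libraries, and all data shapes, dimensions, and hyperparameters are synthetic placeholders.

    # Illustrative tandem-style pipeline (assumption: scikit-learn + hmmlearn as stand-ins).
    # Synthetic arrays stand in for per-frame video features and frame-level letter labels.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    n_frames, n_dims, n_letters = 2000, 40, 26            # placeholder sizes
    frame_feats = rng.normal(size=(n_frames, n_dims))      # stand-in image features
    frame_labels = rng.integers(0, n_letters, n_frames)    # stand-in letter labels

    # Stage 1: frame-level MLP classifier over letters.
    mlp = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200, random_state=0)
    mlp.fit(frame_feats, frame_labels)

    # Log posteriors become the "tandem" observation features.
    tandem_obs = np.log(mlp.predict_proba(frame_feats) + 1e-8)

    # Stage 2: HMM over the tandem observations. A real recognizer would train
    # one model per letter (or per phonological-feature state); a single HMM is
    # fit here only to show the interface.
    recognizer = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    recognizer.fit(tandem_obs)
    print("log-likelihood:", recognizer.score(tandem_obs))

In the paper's phonological feature-based variant, the MLP targets would be phonological feature classes rather than letters, with the rest of the tandem setup unchanged.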

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.