
Lexicon-free fingerspelling recognition from video: Data, models, and signer adaptation

Author(s)
Kim, Taehwan; Keane, Jonathan; Wang, Weiran; Tang, Hao; Riggle, Jason; Shakhnarovich, Gregory; Brentari, Diane; Livescu, Karen
Issued Date
2017-11
DOI
10.1016/j.csl.2017.05.009
URI
https://scholarworks.unist.ac.kr/handle/201301/53795
Fulltext
https://www.sciencedirect.com/science/article/pii/S0885230816302868?via%3Dihub
Citation
COMPUTER SPEECH AND LANGUAGE, v.46, pp.209-232
Abstract
We study the problem of recognizing video sequences of fingerspelled letters in American Sign Language (ASL). Fingerspelling comprises a significant but relatively understudied part of ASL. Recognizing fingerspelling is challenging for a number of reasons: it involves quick, small motions that are often highly coarticulated; it exhibits significant variation between signers; and little continuous fingerspelling data has been collected. In this work we collect and annotate a new data set of continuous fingerspelling videos, compare several types of recognizers, and explore the problem of signer variation. Our best-performing models are segmental (semi-Markov) conditional random fields using deep neural network-based features. In the signer-dependent setting, our recognizers achieve letter accuracies of up to about 92%. The multi-signer setting is much more challenging, but with neural network adaptation we achieve letter accuracies of up to 83%.
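
The abstract's best-performing model is a segmental (semi-Markov) CRF over DNN-based features. The sketch below is a rough, hypothetical illustration of the segmental decoding idea only, not the authors' implementation: the function name, the mean-pooled segment score, and the per-segment penalty are all assumptions made for the example. It runs semi-Markov Viterbi search over per-frame label scores (e.g., DNN log-posteriors), jointly choosing segment boundaries and letter labels.

```python
# Hypothetical sketch of segmental (semi-Markov) Viterbi decoding over
# per-frame DNN scores. Illustrative only; not the paper's model or code.
import numpy as np

def semi_markov_viterbi(frame_scores, max_seg_len, seg_penalty=0.0):
    """Find a best segmentation/labeling of a frame sequence.

    frame_scores: (T, L) array of per-frame label scores (e.g., log-posteriors).
    max_seg_len:  longest segment (in frames) a single letter may span.
    seg_penalty:  additive cost per segment (assumed; controls segment count).
    Returns a list of (start, end, label) segments, end-exclusive.
    """
    T, L = frame_scores.shape
    # Cumulative sums let us score any segment [s, t) in O(1) per label.
    cum = np.vstack([np.zeros(L), np.cumsum(frame_scores, axis=0)])
    best = np.full(T + 1, -np.inf)   # best[t]: best score over frames [0, t)
    best[0] = 0.0
    back = [None] * (T + 1)          # back[t]: (start, label) of last segment
    for t in range(1, T + 1):
        for s in range(max(0, t - max_seg_len), t):
            seg = (cum[t] - cum[s]) / (t - s)  # mean frame score per label
            lbl = int(np.argmax(seg))
            score = best[s] + seg[lbl] - seg_penalty
            if score > best[t]:
                best[t] = score
                back[t] = (s, lbl)
    # Trace the best segmentation back from the final frame.
    segs, t = [], T
    while t > 0:
        s, lbl = back[t]
        segs.append((s, t, lbl))
        t = s
    return segs[::-1]

# Toy usage: 10 frames, 3 labels, random log-probabilities.
rng = np.random.default_rng(0)
print(semi_markov_viterbi(np.log(rng.dirichlet(np.ones(3), size=10)), max_seg_len=5))
```

The cumulative-sum trick keeps each segment evaluation O(1), so the search costs O(T x max_seg_len x L) overall; a full segmental CRF would replace the mean-pooled score with learned segment-level feature functions and letter-transition weights.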
Publisher
ACADEMIC PRESS LTD - ELSEVIER SCIENCE LTD
ISSN
0885-2308
Keyword (Author)
American Sign Language; Fingerspelling recognition; Segmental model; Deep neural network; Adaptation
Keyword
LANGUAGE RECOGNITION; ASL

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.