Graduation Year
2005
Document Type
Thesis
Degree
M.S.C.S.
Degree Granting Department
Computer Science
Major Professor
Sudeep Sarkar, Ph.D.
Committee Member
Barbara Loeding, Ph.D.
Committee Member
Rangachar Kasturi, Ph.D.
Keywords
Sign language, Gestures, Space of relational distributions, Learning, Principal component analysis
Abstract
The common practice in sign language recognition is to first construct individual sign models, in terms of discrete state transitions, mostly represented using Hidden Markov Models, from manually isolated sign samples, and then to use them to recognize signs in continuous sentences. In this thesis we use a continuous state space model, where the states are based on purely image-based features, without the use of special gloves. We also present an unsupervised approach to both extract and learn models for continuous basic units of signs, which we term signemes, from continuous sentences. Given a set of sentences with a common sign, we can automatically learn the model for the part of the sign, or signeme, that is least affected by coarticulation effects. We test our idea using the publicly available Boston SignStream Dataset by building signeme models of 18 signs, and we evaluate the quality of each model by how well it localizes the sign in a new sentence. We also present the concept of smooth continuous curve-based models formed using functional splines and curve registration, and illustrate this idea using 16 signs.
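The core signeme intuition, finding the stretch of each sentence that is most consistent across sentences sharing a sign, can be illustrated with a small sketch. This is not the thesis method (which learns models in a continuous state space over relational-distribution features); the function name extract_common_window, the brute-force fixed-width window search, and the synthetic data below are all illustrative assumptions.

# Illustrative sketch only, not the thesis algorithm. Each "sentence" is
# assumed to be a sequence of fixed-length image feature vectors. We search
# for a fixed-width window in each sentence whose frames are mutually
# closest across sentences, mimicking the idea of extracting the common
# signeme from sentences that share a sign.

import numpy as np

def extract_common_window(sentences, width):
    """sentences: list of (T_i, d) feature arrays; width: window length.
    Returns one start index per sentence for the most mutually similar windows."""
    starts = [0] * len(sentences)
    for _ in range(10):  # a few coordinate-descent refinement passes
        for i, seq in enumerate(sentences):
            # Mean of the other sentences' current windows.
            others = np.mean(
                [sentences[j][starts[j]:starts[j] + width]
                 for j in range(len(sentences)) if j != i], axis=0)
            # Slide a window over sentence i and keep the closest position.
            costs = [np.linalg.norm(seq[s:s + width] - others)
                     for s in range(len(seq) - width + 1)]
            starts[i] = int(np.argmin(costs))
    return starts

# Toy usage: three synthetic sentences with a shared 5-frame pattern planted
# at offsets 3, 7, and 2; the search should recover offsets near [3, 7, 2].
rng = np.random.default_rng(0)
sign = rng.normal(size=(5, 4))
sentences = []
for offset in (3, 7, 2):
    s = rng.normal(size=(20, 4))
    s[offset:offset + 5] = sign + 0.05 * rng.normal(size=(5, 4))
    sentences.append(s)
print(extract_common_window(sentences, width=5))

The sketch sidesteps coarticulation entirely; the thesis handles it by learning only the portion of the sign least affected by neighboring signs, rather than a fixed-width window.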
Scholar Commons Citation
Nayak, Sunita, "A Vision-Based Approach For Unsupervised Modeling Of Signs Embedded In Continuous Sentences" (2005). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/788