Wednesday, February 27, 2008
American Sign Language Recognition in Game Development for Deaf Children

Summary:
Brashear et al. use the Georgia Tech Gesture Toolkit (GT2k) to create CopyCat, an American Sign Language game for deaf children. The game teaches language skills by having children sign sentences to interact with the game environment.
A Wizard of Oz study was used to gather data and to guide the interface design. The setup consisted of a desk, chair, and mouse, along with a pink glove worn by the signer. The participants, children aged 9 to 11, pushed a button and then signed a phrase; data was collected through the glove and an IEEE 1394 video camera.
The hand is segmented from the video image by its bright color: the pixel data is converted to an HSV color space, and a color histogram is used to binarize the image and locate the hand. Accelerometers also track hand movement along the x, y, and z axes.
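The paper derives its threshold from an HSV histogram of glove pixels; the sketch below shows the general color-keying idea with fixed bounds instead. This is a minimal illustration in OpenCV, not the authors' code, and the pink hue range and morphology settings are assumptions.

import cv2
import numpy as np

# Illustrative HSV bounds for a bright pink glove (OpenCV hue is 0-179).
# The paper's histogram-derived thresholds would replace these.
LOWER_PINK = np.array([150, 80, 120])
UPPER_PINK = np.array([175, 255, 255])

def segment_hand(frame_bgr):
    """Binarize a video frame by glove color and return the hand region."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_PINK, UPPER_PINK)
    # Remove speckle noise with a morphological open.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Treat the largest connected blob as the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(hand)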
The data from five children was analyzed for user-dependent and user-independent models. The user-dependent models were validated with a 90/10 training/testing split, yielding word accuracy in the low 90s and sentence accuracy around 70%. The standard deviation of the sentence accuracy is quite high, at approximately 12%.
The user-independent models performed worse, with an average word accuracy of 86.6% and an average sentence accuracy of 50.64%.
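One way to see why sentence accuracy trails word accuracy so sharply: a sentence is counted correct only when every sign in it is recognized, so if word errors were roughly independent, the probabilities multiply. A back-of-the-envelope check (the three-to-five-sign sentence length is my assumption, not a figure from the paper):

word_acc = 0.92  # roughly the reported user-dependent word accuracy
for n in range(3, 6):
    # P(all n signs correct) under an independence assumption
    print(f"{n}-sign sentence: {word_acc ** n:.2f}")
# A four-sign sentence comes out near 0.72, in line with the ~70%
# sentence accuracy reported for the user-dependent models.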
Discussion:
I like the authors' use of a Wizard of Oz study to collect real-world data from children. The system's sentence-level performance (in essence, GT2k's performance) was quite low, which suggests that segmenting continuous signing is the toolkit's largest weakness. I'm also worried about the 90/10 split for the user-dependent models: that is a very high ratio of training to testing data, and it may be inflating the reported accuracy.
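To make that concern concrete: a k-fold protocol tests every sample exactly once and averages over folds, so one lucky 90/10 draw cannot dominate the result. A minimal sketch with scikit-learn; train_fn and eval_fn are hypothetical hooks standing in for the recognizer's training and scoring steps, which the paper does not expose:

import numpy as np
from sklearn.model_selection import KFold

def cross_validate(samples, labels, train_fn, eval_fn, k=5):
    """Average accuracy over k folds instead of a single 90/10 split.

    samples and labels are numpy arrays; train_fn and eval_fn are
    placeholders for the recognizer's train and test procedures.
    """
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in kf.split(samples):
        model = train_fn(samples[train_idx], labels[train_idx])
        scores.append(eval_fn(model, samples[test_idx], labels[test_idx]))
    # Report spread as well as the mean, given the 12% deviation above.
    return float(np.mean(scores)), float(np.std(scores))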
Labels: gesture, glove, hand gesture, HMM, sign language, user study, vision