Wednesday, February 27, 2008

A Method for Recognizing a Sequence of Sign Language Words Represented in a Japanese Sign Language Sentence

Summary:

Sagawa and Takeuchi created a Japanese Sign Language recognition system that uses "rule-based matching" and segments gestures based on hand velocity and direction changes.

Segmentation boundaries are detected when changes in the hand's direction vector and velocity exceed certain thresholds. The system also has to determine which hand (or both) is being used for a gesture, and this too is decided by the direction and velocity change thresholds.
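The paper's exact thresholds and formulas aren't reproduced here, but the basic idea is easy to sketch. Assuming each frame gives a 3D hand position sampled at a fixed rate, a rough illustration of velocity/direction-change segmentation might look like the following (the function name and threshold values are my own guesses, not taken from the paper):

    import numpy as np

    def segment_trajectory(positions, dt, vel_thresh=0.05, angle_thresh=np.radians(60)):
        """Split a hand trajectory into segments at frames where the hand
        nearly stops or its direction vector turns sharply.
        positions: (N, 3) array of hand positions; dt: sampling interval in seconds.
        Thresholds are illustrative, not values from Sagawa and Takeuchi."""
        velocities = np.diff(positions, axis=0) / dt            # per-frame velocity vectors
        speeds = np.linalg.norm(velocities, axis=1)
        dirs = velocities / np.maximum(speeds[:, None], 1e-9)   # unit direction vectors
        boundaries = [0]
        for i in range(1, len(speeds)):
            # angle between consecutive direction vectors
            cos_angle = np.clip(np.dot(dirs[i - 1], dirs[i]), -1.0, 1.0)
            angle = np.arccos(cos_angle)
            # cut a segment when velocity drops or direction changes sharply
            if speeds[i] < vel_thresh or angle > angle_thresh:
                boundaries.append(i)
        boundaries.append(len(positions) - 1)
        return list(zip(boundaries[:-1], boundaries[1:]))

Each returned (start, end) pair is a candidate gesture segment that would then be matched against the sign vocabulary.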

The system achieved 86.6% accuracy for signed words, and 58% accuracy for signed sentences.


Discussion:

There's not much to discuss with this paper. The "nugget" of research is the use of direction and velocity changes to segment the gestures. I became a bit more interested in this paper once I learned it was published a year before Sezgin's, but not by much.
