Summary:
Hernandez-Rebollar et al. present a two-part paper: a glove (the AcceleGlove) and a test platform for the glove that classifies hand postures with decision trees.
The AcceleGlove contains five accelerometers, one placed at the middle joint of each finger. Each accelerometer measures an x and a y angle, yielding a total of 10 sensor readings every 10 milliseconds. The raw data matrix of x and y values is transformed into three features: Xg, Yg, and yi. Xg (global x) captures finger orientation, roll, and spread; Yg (global y) captures how bent the fingers are. The third feature, yi, classifies the hand into one of three coarse orientations: closed, horizontal, or vertical. It is actually just the index finger's y-component (the index finger is a reliable proxy for this measurement except in the ASL letters 'F' and 'D').
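As a minimal sketch of that 10-value to 3-value reduction, assuming the global features are simple sums of the per-finger readings and that the orientation thresholds are made up for illustration (the paper's exact formulas may differ):

```python
# Hedged sketch of the feature reduction described above. The sums and
# thresholds are assumptions, not the paper's actual formulas.

def reduce_features(x, y):
    """x, y: lists of 5 per-finger accelerometer readings (thumb..pinky)."""
    Xg = sum(x)   # global x: finger orientation, roll, and spread
    Yg = sum(y)   # global y: overall finger flexion ("bentness")
    yi = y[1]     # index finger's y-component, used as the orientation proxy
    return Xg, Yg, yi

def hand_orientation(yi, t_low=-0.5, t_high=0.5):
    """Map yi onto the three coarse classes (thresholds are illustrative)."""
    if yi <= t_low:
        return "closed"
    if yi >= t_high:
        return "vertical"
    return "horizontal"
```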
To classify a posture, the decision tree first splits the letters into vertical, horizontal, and closed groups. Each group is then split further into rolled, flat, and pinky-up subgroups, and within those subgroups the individual letters are distinguished.
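Since the paper only sketches this partitioning, here is a toy version of what such a hand-built tree might look like; every threshold and letter assignment is fabricated for illustration, and the paper's actual tree covers the full alphabet.

```python
# Toy sketch of the tree structure described above: a coarse orientation
# split first, then rolled / flat / pinky-up branches, then individual
# letters. All numbers and letter placements here are invented.

def classify_posture(Xg, Yg, yi):
    if yi >= 0.5:                        # vertical postures
        if Xg > 1.5:                     # fingers rolled inward
            return "C" if Yg > 0 else "O"
        if Yg < -1.0:                    # pinky raised
            return "I" if Xg < 0 else "Y"
        return "B" if Xg < 0.5 else "U"  # flat hand
    if yi > -0.5:                        # horizontal postures
        return "G" if Xg > 0 else "H"
    return "A" if Yg > 0 else "S"        # closed postures
```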
They report a 100% recognition rate for 21 of the letter postures; the worst-recognized letter comes in at 78% accuracy.
Discussion:
I like this paper for 2 main reasons:
- There are no HMMs
- They did not use a CyberGlove
I'm curious how well the glove they designed would work with gestures (motions) rather than static postures. The glove polls each accelerometer sequentially, which could be a problem for very quick gestures. This issue is probably minor, but it might introduce slightly more error than sampling all channels at once.
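To put a rough number on that, here's a back-of-the-envelope sketch of the skew a sequential poll introduces within one 10 ms frame; the even 1 ms per-channel spacing is my assumption, not a figure from the paper.

```python
# If the 10 channels are read one after another within a 10 ms frame,
# the last channel is sampled ~9 ms later than the first, so a fast
# gesture gets "smeared" across the frame.

FRAME_MS = 10.0
N_CHANNELS = 10
step = FRAME_MS / N_CHANNELS

offsets = [i * step for i in range(N_CHANNELS)]
print(offsets)   # [0.0, 1.0, 2.0, ..., 9.0] ms within a single frame
```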
I'm also curious how they designed their decision tree. The intuition behind the partitioning is not made clear, except for the top-level split into closed/horizontal/vertical.
3 comments:
Yeah, it was nice to see a non-HMM paper, but I agree that a more sophisticated algorithm would have been good to see. That would also have eliminated the need for dimensionality reduction (and hence the information lost) and might ultimately have yielded better classification rates.
I was surprised to see a different technique used, and even more surprised that it involved decision trees. It was nice that the paper showed decent results for decision trees, even on the kind of data I don't think they were designed for.
I personally think they need somewhat finer-grained features for more accuracy, especially for distinguishing U, V, and R. The decision tree was not explained well, as you mention. I wonder whether they used something like ID5 or C4.5 to train it, or whether the nodes were partitioned manually.