Thursday, January 24, 2008

An Introduction to Hidden Markov Models

Summary:

Rabiner and Juang's paper on Hidden Markov Models (HMMs) introduces the models, defines the three main problems associated with HMMs, and provides examples of how HMMs can be used.

HMMs are time-dependent models that consist of observations and hidden states. As an example, the authors discuss coin-flip models in which coins with different head/tail probabilities (the states) are chosen by probabilistic transitions that determine which coin is flipped next. One person continuously flips the coins and records the data, while another person only receives the outcomes of the flips, i.e., O = O1, ..., OT. Which coin is being flipped at each step is hidden from the observer.
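To make the coin-flip example concrete, here is a minimal simulation sketch (my own illustration, not from the paper) with two hypothetical coins and made-up transition and heads probabilities; only the H/T outcomes would be visible to the observer:

```python
import random

# Hypothetical two-coin HMM: the hidden state is which coin is currently being flipped.
TRANSITIONS = {"fair": {"fair": 0.7, "biased": 0.3},
               "biased": {"fair": 0.4, "biased": 0.6}}
HEADS_PROB = {"fair": 0.5, "biased": 0.9}  # emission: P(heads | coin)

def flip_sequence(T, state="fair"):
    """Generate T observations; the state path stays hidden from the observer."""
    observations = []
    for _ in range(T):
        observations.append("H" if random.random() < HEADS_PROB[state] else "T")
        # Move to the next coin according to the transition distribution.
        r, cum = random.random(), 0.0
        for next_state, p in TRANSITIONS[state].items():
            cum += p
            if r < cum:
                state = next_state
                break
    return observations

print("".join(flip_sequence(20)))
```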

Rabiner and Juang define three main elements of HMMs as:

1) HMMs have a finite number of states, N
2) A "new" state is entered at time, t, depending on a given transition probability distribution.
3) Observable output is made after each transition, and this output depends on the current state.

The formal notation for an HMM is:

T = the length of the observation sequence (i.e., how many observations are seen)
N = the number of states
M = the number of observation symbols (if observations are discrete)
Q = the states {q1, q2, ... , qN}
V = the observations {v1, v2, ... , vM}
A = the state transition probability distribution {aij}, aij = P(qj at t + 1 | qi at t), i.e., the probability of moving to qj given that we were in qi at the previous timestep.
B = the observation symbol probability distribution in state j, {bj(k)}, bj(k) = P(vk at t | qj at t)
pi = the initial state distribution {pi_i}, pi_i = P(qi at t = 1)
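Collecting this notation into concrete data structures, here is a short sketch with entirely made-up numbers for the two-coin example (N = 2 states, M = 2 symbols, H and T):

```python
import numpy as np

# Hypothetical model with N = 2 states (coins) and M = 2 observation symbols (H, T).
A = np.array([[0.7, 0.3],    # a_ij = P(q_j at t+1 | q_i at t)
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],    # b_j(k) = P(v_k at t | q_j at t)
              [0.9, 0.1]])
pi = np.array([0.6, 0.4])    # pi_i = P(q_i at t = 1)

# Every row is a probability distribution and must sum to one.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)
```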

The three problems for HMMs are:

1) Given an observation sequence O = O1, ..., OT, and the model, how do we compute P(O | model), the probability of the observation sequence? (evaluation)
2) Given an observation sequence and the model, how do we choose a state sequence that best explains the observations? (decoding)
3) How do we adjust the model parameters A, B, and pi to maximize P(O | model)? (training)
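Problem 1 (evaluation) is typically solved with the forward algorithm; here is a minimal sketch of my own, using the same made-up coin-model numbers as above rather than anything taken from the paper:

```python
import numpy as np

def forward_likelihood(A, B, pi, obs):
    """Return P(O | model) via the forward algorithm; obs is a list of symbol indices."""
    alpha = pi * B[:, obs[0]]            # alpha_1(i) = pi_i * b_i(O_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # alpha_t+1(j) = [sum_i alpha_t(i) * a_ij] * b_j(O_t+1)
    return alpha.sum()                   # P(O | model) = sum_i alpha_T(i)

A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transitions between the two coins (made up)
B = np.array([[0.5, 0.5], [0.9, 0.1]])   # P(H), P(T) for each coin (made up)
pi = np.array([0.6, 0.4])                # initial coin distribution (made up)
print(forward_likelihood(A, B, pi, [0, 0, 1, 0]))  # observations H, H, T, H
```

The recursion costs O(N^2 * T), which is why evaluating even fairly long sequences stays cheap.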

Solutions to these problems are presented in the paper, but mathematical symbols are difficult to represent in the blog, and many of the images used are illegible. Instead, I'll jump to the authors' discussion of uses and issues.

One issue with HMMs is underflow, since the forward and backward variables, α_t(i) and β_t(i), approach zero very quickly (they are products of probabilities between 0.0 and 1.0). Another issue is how to actually build HMMs, i.e., what are the transitions and states?
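A standard workaround for the underflow problem is to rescale α_t at every step and accumulate the scale factors in log space (or to work entirely in log probabilities); here is a rough sketch of the scaled forward recursion, assuming the same hypothetical numpy parameterization as above:

```python
import numpy as np

def scaled_forward_log_likelihood(A, B, pi, obs):
    """Forward algorithm with per-step rescaling; returns log P(O | model)."""
    alpha = pi * B[:, obs[0]]
    log_likelihood = np.log(alpha.sum())     # record the scale factor before normalizing
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_likelihood += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_likelihood
```

Because the α values are renormalized at each step, nothing drifts toward machine zero no matter how long the observation sequence is.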

HMMs are good for modeling sequential information where the current state depends only on the previous state (or previous two states). These models, such as those used for isolated word recognition, are easy to build and not too computationally intensive. People usually do not insert random sounds into the middle of a word, so the probability distributions for these models are easy to estimate.
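For isolated word recognition, the usual setup (as I understand it, not a detail spelled out here) is one HMM per vocabulary word, with the recognizer picking the model that gives the observation sequence the highest likelihood; a tiny sketch reusing the scaled_forward_log_likelihood function above:

```python
def recognize(word_models, obs):
    """word_models maps word -> (A, B, pi); return the word whose HMM scores obs highest."""
    return max(word_models,
               key=lambda word: scaled_forward_log_likelihood(*word_models[word], obs))
```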


Discussion:

Overall, the HMM paper is a good overview of HMMs. I really don't have much to say about this paper, except that I wish I had page 14 and that the figures were readable.

As far as HMMs for hand gestures go, I have always shied away from using HMMs because I feel that the power you get from them is offset by heavy constraints and a large overhead in implementation issues and computation time. The class could theoretically model some types of sign gestures with HMMs, but I guess we'll see what data the class gets and whether any sorts of probability distributions present themselves.


1 comment:

Paul Taele said...

I think you put it best when you said this paper was a good "overview" on HMMs. That's how I would best describe this paper, because it gave me a pretty good idea of HMMs in general. I wish the paper gave more specifics on what HMMs can't do, as mentioned by Brandon and Josh in Paul #1 and Pankaj's blog post.