
Hidden Markov Model test in python

asked 2012-01-23 10:49:29 -0500


Hello! Over the winter holidays I bought a Kinect (Microsoft/PrimeSense), and now I want to do some gesture recognition with a hidden Markov model in Python (my best language). I've found GHMM, which seems to be quite a good library, but it is no longer maintained.

I've read that Sage also includes GHMM, so I'm going to try asking here.

I've found the C# code below in [1], and I want to rewrite this simple test in Python with GHMM.

--------------- C# code

int[][] sequences = new int[][]
{
   new int[] { 0,1,1,1,1,0,1,1,1,1 },
   new int[] { 0,1,1,1,0,1,1,1,1,1 },
   new int[] { 0,1,1,1,1,1,1,1,1,1 },
   new int[] { 0,1,1,1,1,1         },
   new int[] { 0,1,1,1,1,1,1       },
   new int[] { 0,1,1,1,1,1,1,1,1,1 },
   new int[] { 0,1,1,1,1,1,1,1,1,1 },
};
// For us, it is obvious that the system is outputting sequences
// that always start with a zero and have one or more ones at the end.
// But let's try to fit a Hidden Markov Model to predict those sequences.


// Creates a new Hidden Markov Model with 3 states for
//  an output alphabet of two characters (zero and one)
HiddenMarkovModel hmm = new HiddenMarkovModel(2, 3);

// Try to fit the model to the data until the difference in
//  the average likelihood changes only by as little as 0.0001
hmm.Learn(sequences, 0.0001);

// Calculate the probability that the given
//  sequences originated from the model
double l1 = hmm.Evaluate(new int[] { 0,1 });      // 0.9999
double l2 = hmm.Evaluate(new int[] { 0,1,1,1 });  // 0.9166

double l3 = hmm.Evaluate(new int[] { 1,1 });      // 0.0000
double l4 = hmm.Evaluate(new int[] { 1,0,0,0 });  // 0.0000

double l5 = hmm.Evaluate(new int[] { 0,1,0,1,1,1,1,1,1 }); // 0.0342
double l6 = hmm.Evaluate(new int[] { 0,1,1,1,1,1,1,0,1 }); // 0.0342
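What `Evaluate` computes is the probability of an observation sequence under the model, via the forward algorithm. As a rough sketch of that computation in plain Python (the model parameters below are hand-picked for illustration, not what `Learn` would actually estimate):

```python
def forward(A, B, pi, obs):
    """Return P(obs | model) via the forward algorithm.

    A[i][j]: transition probability from state i to state j
    B[i][k]: probability of emitting symbol k in state i
    pi[i]:   initial probability of state i
    """
    n = len(pi)
    # initialisation: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # induction over the remaining observations
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    # termination: sum over all states
    return sum(alpha)

# A toy 2-state model: start in state 0 (which always emits 0),
# then move to state 1 (which always emits 1) and stay there.
A = [[0.0, 1.0],
     [0.0, 1.0]]
B = [[1.0, 0.0],
     [0.0, 1.0]]
pi = [1.0, 0.0]

print(forward(A, B, pi, [0, 1, 1, 1]))  # 1.0 under this toy model
print(forward(A, B, pi, [1, 1]))        # 0.0 -- cannot start with a 1
```

This is why sequences starting with a 1 get (near-)zero scores above: the learned model puts essentially no initial probability mass on a 1-emitting state.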

My first try in Python with the GHMM library:

--------------- Python code

from ghmm import *

training2 = [
   [ 0,1,1,1,1,1,1 ],
   [ 0,1,1,1 ],
   [ 0,1,1,1,1 ],
   [ 0,1 ],
   [ 0,1,1 ] ]

sigma = IntegerRange(0,2)
# Initial guesses -- Baum-Welch will re-estimate these.  Entries that
# start at exactly 0 stay 0 under Baum-Welch, so avoid all-zero rows,
# and break the symmetry between states so EM has something to move.
A = [[0.4, 0.3, 0.3], [0.3, 0.4, 0.3], [0.3, 0.3, 0.4]]
B = [[0.6, 0.4], [0.4, 0.6], [0.5, 0.5]]
pi = [0.4, 0.3, 0.3]

m = HMMFromMatrices(sigma, DiscreteDistribution(sigma), A, B, pi)
print m

train = SequenceSet(sigma, training2)
print train

m.baumWelch(train)
print m

tests = [
    [0,1],
    [0,1,1,1],
    [1,1],
    [1,0,0,0],
    [0,1,0,1,1,1,1,1,1],
    [0,1,1,1,1,1,1,0,1]
]

for t in tests:
    s = SequenceSet(sigma, [t])
    #print '-------- loglik', m.loglikelihood(s)
    print '-------- forward ...'
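While debugging the GHMM version, it can help to have an independent reference implementation. Below is a minimal Baum-Welch sketch in NumPy (my own illustrative implementation, not GHMM's or Accord.NET's API), trained on the sequences from the C# example. Note the random, strictly positive initialisation: parameters that start at exactly zero are never moved by the re-estimation formulas.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass for one observation sequence."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    c = np.zeros(T)                      # per-step scaling factors
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

def baum_welch(sequences, N, M, n_iter=10, seed=0):
    """Estimate (A, B, pi) for N states and M symbols by EM."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, N)); A /= A.sum(1, keepdims=True)
    B = rng.random((N, M)); B /= B.sum(1, keepdims=True)
    pi = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        A_num = np.zeros((N, N)); A_den = np.zeros(N)
        B_num = np.zeros((N, M)); B_den = np.zeros(N)
        pi_new = np.zeros(N)
        for obs in sequences:
            alpha, beta, c = forward_backward(A, B, pi, obs)
            gamma = alpha * beta          # per-step state posteriors
            gamma /= gamma.sum(1, keepdims=True)
            pi_new += gamma[0]
            for t in range(len(obs) - 1):
                xi = (alpha[t][:, None] * A *
                      B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
                A_num += xi
                A_den += gamma[t]
            for t, o in enumerate(obs):
                B_num[:, o] += gamma[t]
                B_den += gamma[t]
        A = A_num / A_den[:, None]
        B = B_num / B_den[:, None]
        pi = pi_new / pi_new.sum()
    return A, B, pi

def log_likelihood(A, B, pi, obs):
    _, _, c = forward_backward(A, B, pi, obs)
    return np.log(c).sum()

sequences = [
    [0,1,1,1,1,0,1,1,1,1],
    [0,1,1,1,0,1,1,1,1,1],
    [0,1,1,1,1,1,1,1,1,1],
    [0,1,1,1,1,1],
    [0,1,1,1,1,1,1],
    [0,1,1,1,1,1,1,1,1,1],
    [0,1,1,1,1,1,1,1,1,1],
]
A, B, pi = baum_welch(sequences, N=3, M=2)

# Log-likelihoods should be much higher for sequences that start
# with a 0 and continue with 1s, matching the C# results above.
for seq in [[0, 1], [0, 1, 1, 1], [1, 1], [1, 0, 0, 0]]:
    print(seq, log_likelihood(A, B, pi, seq))
```

Comparing these log-likelihoods with GHMM's `loglikelihood` output on the same test sequences should reveal whether the problem is in the model setup or in how the library is being called.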

1 answer


answered 2012-01-24 01:17:23 -0500


We used to include ghmm, but William rewrote the HMM code a while ago so that we don't depend on ghmm anymore. Please see http://www.sagemath.org/doc/reference/sage/stats/hmm/hmm.html

I also put up a simple example here: http://sage.cs.drake.edu/home/pub/96/




Last updated: Jan 24 '12