
# Hidden Markov Models in C#

Hidden Markov Models (HMM) are stochastic methods to model temporal and sequence data. They are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.

This code has also been incorporated into the Accord.NET Framework, which includes the latest version of this code plus many other statistics and machine learning tools.

## Contents

1. Introduction
2. Definition
3. Algorithms
4. Using the code
5. Remarks
6. Acknowledgements
7. References

## Introduction

Hidden Markov Models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. Indeed, one of the most comprehensive explanations on the topic was published in “A Tutorial On Hidden Markov Models And Selected Applications in Speech Recognition”, by Lawrence R. Rabiner in 1989. In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.

A discrete-time dynamical system assumed to be governed by a Markov chain emits a sequence of observable outputs. Under the Markov assumption, the current output is also assumed to depend only on the current state of the system. Such states are often not known to the observer, who can only see the output values.
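
In the usual notation, writing $s_t$ for the hidden state and $o_t$ for the output at time $t$, these two assumptions can be stated as:

$P(s_t \mid s_{t-1}, s_{t-2}, \ldots, s_1) = P(s_t \mid s_{t-1}) \qquad\text{and}\qquad P(o_t \mid s_t, s_{t-1}, \ldots, s_1, o_{t-1}, \ldots, o_1) = P(o_t \mid s_t)$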

Hidden Markov Models attempt to model such systems and allow, among other things, (1) inferring the most likely sequence of states that produced a given output sequence, (2) inferring which will be the most likely next state (and thus predicting the next output), and (3) calculating the probability that a given sequence of outputs originated from the system (which allows hidden Markov models to be used for sequence classification).

The “hidden” in Hidden Markov Models comes from the fact that the observer does not know which state the system may be in, but has only a probabilistic insight into where it should be.

## Definition

Hidden Markov Models can be seen as finite state machines where, for each sequence unit observation, there is a state transition and, for each state, there is an output symbol emission.

### Notation

Traditionally, HMMs have been defined by the following quintuple:

$\lambda = (N, M, A, B, \pi)$

where

• N is the number of states for the model
• M is the number of distinct observation symbols per state, i.e. the discrete alphabet size.
• A is the N×N state transition probability distribution, given in the form of a matrix $A = \{a_{ij}\}$
• B is the N×M observation symbol probability distribution, given in the form of a matrix $B = \{b_j(k)\}$
• π is the initial state distribution vector $\pi = \{\pi_i\}$

Note that, if we leave out the structure parameters M and N, we have the more often used compact notation

$\lambda = (A, B, \pi)$
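
Since each row of A and B, and the vector π, are probability distributions, the parameters must also satisfy the usual normalization constraints:

$\sum_{j=1}^{N} a_{ij} = 1, \qquad \sum_{k=1}^{M} b_j(k) = 1, \qquad \sum_{i=1}^{N} \pi_i = 1$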

### Canonical problems

There are three canonical problems associated with hidden Markov models, which I'll quote from Wikipedia:

1. Given the parameters of the model, compute the probability of a particular output sequence. This requires summation over all possible state sequences, but can be done efficiently using the Forward algorithm, which is a form of dynamic programming.
2. Given the parameters of the model and a particular output sequence, find the state sequence that is most likely to have generated that output sequence. This requires finding a maximum over all possible state sequences, but can similarly be solved efficiently by the Viterbi algorithm.
3. Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. In other words, derive the maximum likelihood estimate of the parameters of the HMM given a dataset of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum-Welch algorithm or the Baldi-Chauvin algorithm. The Baum-Welch algorithm is an example of a forward-backward algorithm, and is a special case of the Expectation-maximization algorithm.

The solutions to these problems are exactly what makes Hidden Markov Models useful. The ability to learn from data (using the solution to problem 3) and then make predictions (solution to problem 2) and classify sequences (solution to problem 1) is nothing but applied machine learning. From this perspective, HMMs can be seen simply as supervised sequence classifiers and sequence predictors with some other useful and interesting properties.
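
In terms of the class presented in this article, each canonical problem maps directly onto one method. The sketch below assumes a HiddenMarkovModel instance named hmm and a set of training sequences named sequences, as in the “Using the code” section later on:

// Problem 3 (learning): estimate the parameters (A, B, pi) from training data,
// using the three-parameter Learn method listed later in this article
double averageLogLikelihood = hmm.Learn(sequences, 0, 0.01);

// Problem 1 (evaluation): how likely is a new sequence under the trained model?
double likelihood = hmm.Evaluate(new int[] { 0, 1, 1, 1 }, false);

// Problem 2 (decoding): which hidden state path most likely produced the sequence?
double pathProbability;
int[] stateSequence = hmm.Decode(new int[] { 0, 1, 1, 1 }, out pathProbability);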

### Choosing the structure

Choosing the structure for a hidden Markov model is not always obvious. The number of states depends on the application and on the interpretation one is willing to give to the hidden states. Some domain knowledge is required to build a suitable model and also to choose the initial parameters that an HMM can take. There is also some trial and error involved, and there are sometimes complex tradeoffs that have to be made between model complexity and difficulty of learning, just as is the case with most machine learning techniques.

Additional information can be found on http://www.cse.unsw.edu.au/~waleed/phd/tr9806/node12.html.

## Algorithms

The solutions to the three canonical problems are the algorithms that make HMMs useful. Each of the three problems is described in the three subsections below.

### Evaluation

The first canonical problem is the evaluation of the probability of a particular output sequence. It can be computed efficiently using either the Viterbi or the Forward algorithm, both of which are forms of dynamic programming.

The Viterbi algorithm originally computes the most likely sequence of states that generated a given sequence of observations. In doing so, it is also able to return the probability of traversing this particular sequence of states. So to obtain Viterbi probabilities, please refer to the Decoding problem described below.

The Forward algorithm, unlike the Viterbi algorithm, does not find a particular sequence of states; instead, it computes the probability that the model produced the sequence of observations, summed over every possible sequence of states. In both algorithms, a matrix is used to store computations about the possible state sequence paths that the model can assume. The forward algorithm also plays a key role in the Learning problem, and is thus implemented as a separate method.
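
In the usual notation, the forward variable $\alpha_t(i)$ is the joint probability of observing the partial sequence $o_1, \ldots, o_t$ and being in state $i$ at time $t$. It is computed recursively as:

$\alpha_1(i) = \pi_i \, b_i(o_1), \qquad \alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \right] b_j(o_{t+1})$

so that $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$. The implementation below additionally rescales each column of $\alpha$ by the coefficient $c_t = \sum_i \alpha_t(i)$ to avoid numerical underflow; the log-likelihood is then recovered as $\log P(O \mid \lambda) = \sum_t \log c_t$, which is exactly what the Evaluate method accumulates.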

/// <summary>
///   Calculates the probability that this model has generated the given sequence.
/// </summary>
/// <remarks>
///   Evaluation problem. Given the HMM M = (A, B, pi) and the observation
///   sequence O = {o1, o2, ..., oK}, calculate the probability that model
///   M has generated sequence O. This can be computed efficiently using
///   either the Viterbi or the Forward algorithm.
/// </remarks>
/// <param name="observations">
///   A sequence of observations.
/// </param>
/// <param name="logarithm">
///   True to return the log-likelihood, false to return
///   the likelihood. Default is false.
/// </param>
/// <returns>
///   The probability that the given sequence has been generated by this model.
/// </returns>
public double Evaluate(int[] observations, bool logarithm)
{
    if (observations == null)
        throw new ArgumentNullException("observations");

    if (observations.Length == 0)
        return 0.0;

    // Forward algorithm
    double likelihood = 0;
    double[] coefficients;

    // Compute forward probabilities
    forward(observations, out coefficients);

    for (int i = 0; i < coefficients.Length; i++)
        likelihood += Math.Log(coefficients[i]);

    // Return the sequence probability
    return logarithm ? likelihood : Math.Exp(likelihood);
}

/// <summary>
///   Baum-Welch forward pass (with scaling)
/// </summary>
/// <remarks>
///   Reference: http://courses.media.mit.edu/2010fall/mas622j/ProblemSets/ps4/tutorial.pdf
/// </remarks>
private double[,] forward(int[] observations, out double[] c)
{
    int T = observations.Length;
    double[] pi = Probabilities;
    double[,] A = Transitions;

    double[,] fwd = new double[T, States];
    c = new double[T];

    // 1. Initialization
    for (int i = 0; i < States; i++)
        c[0] += fwd[0, i] = pi[i] * B[i, observations[0]];

    if (c[0] != 0) // Scaling
    {
        for (int i = 0; i < States; i++)
            fwd[0, i] = fwd[0, i] / c[0];
    }

    // 2. Induction
    for (int t = 1; t < T; t++)
    {
        for (int i = 0; i < States; i++)
        {
            double p = B[i, observations[t]];

            double sum = 0.0;
            for (int j = 0; j < States; j++)
                sum += fwd[t - 1, j] * A[j, i];
            fwd[t, i] = sum * p;

            c[t] += fwd[t, i]; // scaling coefficient
        }

        if (c[t] != 0) // Scaling
        {
            for (int i = 0; i < States; i++)
                fwd[t, i] = fwd[t, i] / c[t];
        }
    }

    return fwd;
}
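
The learning algorithm presented later also needs a backward pass, which it calls as backward(observations[i], scaling) but which is not listed in this article. A minimal sketch that mirrors the forward pass above and reuses its scaling coefficients could look like the following; this is my own reconstruction, not necessarily the original implementation:

/// <summary>
///   Baum-Welch backward pass (with scaling)
/// </summary>
private double[,] backward(int[] observations, double[] c)
{
    int T = observations.Length;
    double[,] A = Transitions;

    double[,] bwd = new double[T, States];

    // 1. Initialization: scale the last column with the same coefficients
    //    used by the forward pass (Rabiner's scaling scheme)
    for (int i = 0; i < States; i++)
        bwd[T - 1, i] = 1.0 / c[T - 1];

    // 2. Induction (backwards in time)
    for (int t = T - 2; t >= 0; t--)
    {
        for (int i = 0; i < States; i++)
        {
            double sum = 0.0;
            for (int j = 0; j < States; j++)
                sum += A[i, j] * B[j, observations[t + 1]] * bwd[t + 1, j];

            bwd[t, i] = sum / c[t];
        }
    }

    return bwd;
}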


### Decoding

The second canonical problem is the discovery of the most likely sequence of states that generated a given output sequence. This can be computed efficiently using the Viterbi algorithm. A backtracking step is used to recover the maximum probability path travelled by the algorithm. The probability of travelling this particular sequence of states is also computed in the process.
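
The implementation below works with negative logarithms of the probabilities, so the usual maximization turns into a minimization of accumulated weights. In the standard notation, the quantity stored in the matrix a of the code is:

$\delta_1(i) = -\log \pi_i - \log b_i(o_1), \qquad \delta_t(j) = \min_{i} \left[ \delta_{t-1}(i) - \log a_{ij} \right] - \log b_j(o_t)$

The most likely final state is the one minimizing $\delta_T(i)$, the path is recovered by backtracking through the stored argument-of-minimum indices, and the reported probability is $\exp(-\min_i \delta_T(i))$.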

/// <summary>
///   Calculates the most likely sequence of hidden states
///   that produced the given observation sequence.
/// </summary>
/// <remarks>
///   Decoding problem. Given the HMM M = (A, B, pi) and the observation sequence
///   O = {o1, o2, ..., oK}, calculate the most likely sequence of hidden states Si
///   that produced this observation sequence O. This can be computed efficiently
///   using the Viterbi algorithm.
/// </remarks>
/// <param name="observations">A sequence of observations.</param>
/// <param name="probability">The state optimized probability.</param>
/// <returns>The sequence of states that most likely produced the sequence.</returns>
public int[] Decode(int[] observations, out double probability)
{
    // Viterbi algorithm.

    int T = observations.Length;
    int states = States;
    int minState;
    double minWeight;
    double weight;

    int[,] s = new int[states, T];
    double[,] a = new double[states, T];

    // Base
    for (int i = 0; i < states; i++)
    {
        a[i, 0] = (-1.0 * System.Math.Log(pi[i])) - System.Math.Log(B[i, observations[0]]);
    }

    // Induction
    for (int t = 1; t < T; t++)
    {
        for (int j = 0; j < states; j++)
        {
            minState = 0;
            minWeight = a[0, t - 1] - System.Math.Log(A[0, j]);

            for (int i = 1; i < states; i++)
            {
                weight = a[i, t - 1] - System.Math.Log(A[i, j]);

                if (weight < minWeight)
                {
                    minState = i;
                    minWeight = weight;
                }
            }

            a[j, t] = minWeight - System.Math.Log(B[j, observations[t]]);
            s[j, t] = minState;
        }
    }

    // Find minimum value for time T-1
    minState = 0;
    minWeight = a[0, T - 1];

    for (int i = 1; i < states; i++)
    {
        if (a[i, T - 1] < minWeight)
        {
            minState = i;
            minWeight = a[i, T - 1];
        }
    }

    int[] path = new int[T];
    path[T - 1] = minState;

    for (int t = T - 2; t >= 0; t--)
        path[t] = s[path[t + 1], t + 1];

    probability = System.Math.Exp(-minWeight);
    return path;
}


### Learning

The third and last problem is the problem of learning the most likely parameters that best model a system, given a set of sequences originating from this system. Most implementations I've seen do not consider the problem of learning from a set of sequences, but only from a single sequence at a time. The algorithm below, however, is fully suitable for learning from a set of sequences, and also uses scaling, which is another thing I have not seen in other implementations.

The source code follows the original algorithm by Rabiner (1989). There are, however, some known issues with the algorithms detailed in Rabiner's paper. More information about those issues is available in a later section of this article entitled “Remarks”.
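
For reference, the quantities computed below follow Rabiner's notation: $\gamma_t(i)$ is the probability of being in state $i$ at time $t$ given the whole observation sequence, and $\xi_t(i,j)$ (called epsilon in the code) is the probability of being in state $i$ at time $t$ and in state $j$ at time $t+1$:

$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{k} \alpha_t(k)\,\beta_t(k)}, \qquad \xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{k}\sum_{l} \alpha_t(k)\, a_{kl}\, b_l(o_{t+1})\, \beta_{t+1}(l)}$

The re-estimated parameters are then obtained as expected-count ratios, with the sums extended over all training sequences in the multi-sequence case:

$\bar{\pi}_i = \gamma_1(i), \qquad \bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \bar{b}_j(k) = \frac{\sum_{t \,:\, o_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$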

/// <summary>
///   Runs the Baum-Welch learning algorithm for hidden Markov models.
/// </summary>
/// <remarks>
///   Learning problem. Given some training observation sequences O = {o1, o2, ..., oK}
///   and general structure of HMM (numbers of hidden and visible states), determine
///   HMM parameters M = (A, B, pi) that best fit training data.
/// </remarks>
/// <param name="observations">
///   An array of observation sequences to be used to train the model.
/// </param>
/// <param name="iterations">
///   The maximum number of iterations to be performed by the learning algorithm. If
///   specified as zero, the algorithm will learn until convergence of the model average
///   likelihood respecting the desired limit.
/// </param>
/// <param name="tolerance">
///   The likelihood convergence limit L between two iterations of the algorithm. The
///   algorithm will stop when the change in the likelihood for two consecutive iterations
///   has not changed by more than L percent of the likelihood. If left as zero, the
///   algorithm will ignore this parameter and iterate over a number of fixed iterations
///   specified by the previous parameter.
/// </param>
/// <returns>
///   The average log-likelihood for the observations after the model has been trained.
/// </returns>
public double Learn(int[][] observations, int iterations, double tolerance)
{
    if (iterations == 0 && tolerance == 0)
        throw new ArgumentException("Iterations and limit cannot be both zero.");

    // Baum-Welch algorithm.

    // The Baum-Welch algorithm is a particular case of a generalized expectation-maximization
    // (GEM) algorithm. It can compute maximum likelihood estimates and posterior mode estimates
    // for the parameters (transition and emission probabilities) of an HMM, when given only
    // emissions as training data.

    // The algorithm has two steps:
    //  - Calculating the forward probability and the backward probability for each HMM state;
    //  - On the basis of this, determining the frequency of the transition-emission pair values
    //    and dividing it by the probability of the entire string. This amounts to calculating
    //    the expected count of the particular transition-emission pair. Each time a particular
    //    transition is found, the value of the quotient of the transition divided by the probability
    //    of the entire string goes up, and this value can then be made the new value of the transition.

    int N = observations.Length;
    int currentIteration = 1;
    bool stop = false;

    double[] pi = Probabilities;
    double[,] A = Transitions;

    // Initialization
    double[][,,] epsilon = new double[N][,,]; // also referred as ksi or psi
    double[][,] gamma = new double[N][,];

    for (int i = 0; i < N; i++)
    {
        int T = observations[i].Length;
        epsilon[i] = new double[T, States, States];
        gamma[i] = new double[T, States];
    }

    // Calculate initial model log-likelihood
    double oldLikelihood = Double.MinValue;
    double newLikelihood = 0;

    do // Until convergence or max iterations is reached
    {
        // For each sequence in the observations input
        for (int i = 0; i < N; i++)
        {
            var sequence = observations[i];
            int T = sequence.Length;
            double[] scaling;

            // 1st step - Calculating the forward probability and the
            //            backward probability for each HMM state.
            double[,] fwd = forward(observations[i], out scaling);
            double[,] bwd = backward(observations[i], scaling);

            // 2nd step - Determining the frequency of the transition-emission pair values
            //            and dividing it by the probability of the entire string.

            // Calculate gamma values for next computations
            for (int t = 0; t < T; t++)
            {
                double s = 0;

                for (int k = 0; k < States; k++)
                    s += gamma[i][t, k] = fwd[t, k] * bwd[t, k];

                if (s != 0) // Scaling
                {
                    for (int k = 0; k < States; k++)
                        gamma[i][t, k] /= s;
                }
            }

            // Calculate epsilon values for next computations
            for (int t = 0; t < T - 1; t++)
            {
                double s = 0;

                for (int k = 0; k < States; k++)
                    for (int l = 0; l < States; l++)
                        s += epsilon[i][t, k, l] = fwd[t, k] * A[k, l] * bwd[t + 1, l] * B[l, sequence[t + 1]];

                if (s != 0) // Scaling
                {
                    for (int k = 0; k < States; k++)
                        for (int l = 0; l < States; l++)
                            epsilon[i][t, k, l] /= s;
                }
            }

            // Compute log-likelihood for the given sequence
            for (int t = 0; t < scaling.Length; t++)
                newLikelihood += Math.Log(scaling[t]);
        }

        // Average the likelihood for all sequences
        newLikelihood /= observations.Length;

        // Check if the model has converged or we should stop
        if (checkConvergence(oldLikelihood, newLikelihood,
            currentIteration, iterations, tolerance))
        {
            stop = true;
        }
        else
        {
            // 3. Continue with parameter re-estimation
            currentIteration++;
            oldLikelihood = newLikelihood;
            newLikelihood = 0.0;

            // 3.1 Re-estimation of initial state probabilities
            for (int k = 0; k < States; k++)
            {
                double sum = 0;
                for (int i = 0; i < N; i++)
                    sum += gamma[i][0, k];
                pi[k] = sum / N;
            }

            // 3.2 Re-estimation of transition probabilities
            for (int i = 0; i < States; i++)
            {
                for (int j = 0; j < States; j++)
                {
                    double den = 0, num = 0;

                    for (int k = 0; k < N; k++)
                    {
                        int T = observations[k].Length;

                        for (int l = 0; l < T - 1; l++)
                            num += epsilon[k][l, i, j];

                        for (int l = 0; l < T - 1; l++)
                            den += gamma[k][l, i];
                    }

                    A[i, j] = (den != 0) ? num / den : 0.0;
                }
            }

            // 3.3 Re-estimation of emission probabilities
            for (int i = 0; i < States; i++)
            {
                for (int j = 0; j < Symbols; j++)
                {
                    double den = 0, num = 0;

                    for (int k = 0; k < N; k++)
                    {
                        int T = observations[k].Length;

                        for (int l = 0; l < T; l++)
                        {
                            if (observations[k][l] == j)
                                num += gamma[k][l, i];
                        }

                        for (int l = 0; l < T; l++)
                            den += gamma[k][l, i];
                    }

                    // avoid locking a parameter in zero.
                    B[i, j] = (num == 0) ? 1e-10 : num / den;
                }
            }
        }

    } while (!stop);

    // Returns the model average log-likelihood
    return newLikelihood;
}
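
The private helper checkConvergence is not listed above. Judging from the parameter descriptions, it stops the loop either when the maximum number of iterations is reached or when the relative change in the average log-likelihood falls below the tolerance. A possible reconstruction (an assumption on my part, not the original code) is:

private static bool checkConvergence(double oldLikelihood, double newLikelihood,
    int currentIteration, int maxIterations, double tolerance)
{
    if (tolerance > 0)
    {
        // Stopping criterion: relative change in the average log-likelihood
        if (Math.Abs(oldLikelihood - newLikelihood) <= tolerance * Math.Abs(oldLikelihood))
            return true;

        // A maximum number of iterations, if given, is still respected
        if (maxIterations > 0 && currentIteration >= maxIterations)
            return true;
    }
    else
    {
        // No tolerance given: run for the fixed number of iterations
        if (currentIteration >= maxIterations)
            return true;
    }

    // Stop if the likelihood has become invalid
    return Double.IsNaN(newLikelihood) || Double.IsInfinity(newLikelihood);
}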


## Using the code

Let's suppose we have gathered some sequences from a system we wish to model. The sequences are expressed as integer arrays such as:

int[][] sequences = new int[][]
{
    new int[] { 0,1,1,1,1,1,1 },
    new int[] { 0,1,1,1 },
    new int[] { 0,1,1,1,1 },
    new int[] { 0,1 },
    new int[] { 0,1,1 },
};


For us, it is easy to see that the system is outputting sequences that always start with a zero and end with one or more ones. But let's try to fit a Hidden Markov Model to predict those sequences.

// Creates a new Hidden Markov Model with 2 states for
//  an output alphabet of two characters (zero and one)
HiddenMarkovModel hmm = new HiddenMarkovModel(2, 2);

// Try to fit the model to the data until the difference in
//  the average likelihood changes only by as little as 0.01
hmm.Learn(sequences, 0.01);


Once the model is trained, let's test it to see if it recognizes some sequences:

// Calculate the probability that the given
//  sequences originated from the model
double l1 = hmm.Evaluate(new int[] { 0, 1 });       // l1 = 0.9999
double l2 = hmm.Evaluate(new int[] { 0, 1, 1, 1 }); // l2 = 0.9999

double l3 = hmm.Evaluate(new int[] { 1, 1 });       // l3 = 0.0000
double l4 = hmm.Evaluate(new int[] { 1, 0, 0, 0 }); // l4 = 0.0000


Of course the model performs well, as this is a rather simple example. A more useful test case would consist of allowing for some errors in the input sequences, in the hope that the model will become more tolerant to measurement errors.

int[][] sequences = new int[][]
{
    new int[] { 0,1,1,1,1,0,1,1,1,1 },
    new int[] { 0,1,1,1,0,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1         },
    new int[] { 0,1,1,1,1,1,1       },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
};

// Creates a new Hidden Markov Model with 3 states for
//  an output alphabet of two characters (zero and one)
HiddenMarkovModel hmm = new HiddenMarkovModel(2, 3);

// Try to fit the model to the data until the difference in
//  the average likelihood changes only by as little as 0.0001
hmm.Learn(sequences, 0.0001);

// Calculate the probability that the given
//  sequences originated from the model
double l1 = hmm.Evaluate(new int[] { 0,1 });      // 0.9999
double l2 = hmm.Evaluate(new int[] { 0,1,1,1 });  // 0.9166

double l3 = hmm.Evaluate(new int[] { 1,1 });      // 0.0000
double l4 = hmm.Evaluate(new int[] { 1,0,0,0 });  // 0.0000

double l5 = hmm.Evaluate(new int[] { 0,1,0,1,1,1,1,1,1 }); // 0.0342
double l6 = hmm.Evaluate(new int[] { 0,1,1,1,1,1,1,0,1 }); // 0.0342


We can see that, despite being very low, the likelihood values for the sequences containing a simulated measurement error are still greater than the likelihoods for the sequences which do not follow the sequence structure at all.

In a subsequent article, we will see that those low values for the likelihoods will not be a problem because HMMs are often used in sets to form sequence classifiers. When used in such configurations, what really matters is which HMM returns the highest probability among others in the set.
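
For instance, given one trained model per class, classification amounts to evaluating the same sequence under every model and picking the class whose model assigns it the highest likelihood. A minimal sketch, assuming two hypothetical models hmm1 and hmm2 trained on different classes of sequences:

// Evaluate the same sequence under each model in the set
int[] sequence = new int[] { 0, 1, 1, 1 };

double p1 = hmm1.Evaluate(sequence, false);
double p2 = hmm2.Evaluate(sequence, false);

// The sequence is assigned to the class whose model explains it best
int predictedClass = (p1 > p2) ? 1 : 2;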

## Remarks

A practical issue in the use of Hidden Markov Models to model long sequences is the numerical scaling of conditional probabilities. The probability of observing a long sequence given most models is extremely small, and the use of these extremely small numbers in computations often leads to numerical instability, making application of HMMs to genome length sequences quite challenging.

There are two common approaches to dealing with small conditional probabilities. One approach is to rescale the conditional probabilities using carefully designed scaling factors, and the other approach is to work with the logarithms of the conditional probabilities. For more information on using logarithms please see the work entitled “Numerically Stable Hidden Markov Model Implementation”, by Tobias P. Mann.
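
The log-domain approach relies on the fact that a sum of probabilities can be computed without ever leaving the log domain, using $\log(x + y) = \log x + \log\!\left(1 + e^{\log y - \log x}\right)$. A small helper illustrating the idea (a sketch for illustration only, not part of the code above):

// Computes log(exp(a) + exp(b)) while staying in the log domain
private static double LogSum(double a, double b)
{
    if (Double.IsNegativeInfinity(a)) return b;
    if (Double.IsNegativeInfinity(b)) return a;

    // Factor out the larger term to avoid overflow and underflow
    return (a > b)
        ? a + Math.Log(1.0 + Math.Exp(b - a))
        : b + Math.Log(1.0 + Math.Exp(a - b));
}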

### Known issues

The code in this article is based on the tutorial by Rabiner. There are, however, some problems with the scaling and other algorithms described there. An erratum covering those issues is available on the website “An Erratum for ‘A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition’”, maintained by Ali Rahimi. I have not yet verified whether the implementation presented here suffers from the same mistakes explained there. This code has worked well in many situations, but I cannot guarantee its correctness. Please use it at your own risk.

## Acknowledgements

Thanks to Guilherme C. Pedroso for the help with the Baum-Welch generalization for multiple input sequences. He has also co-written a very interesting article using hidden Markov models for gesture recognition, entitled “Automatic Recognition of Finger Spelling for LIBRAS based on a Two-Layer Architecture”, published in the 25th Symposium On Applied Computing (ACM SAC 2010).
