
A Markov chain is just a stochastic matrix:

```
sage: m = matrix([[0.1, 0.2, 0.7], [0.5, 0.5, 0], [0.2, 0.2, 0.6]])
sage: m
[0.100000000000000 0.200000000000000 0.700000000000000]
[0.500000000000000 0.500000000000000 0.000000000000000]
[0.200000000000000 0.200000000000000 0.600000000000000]
```
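In plain Python terms, "stochastic" just means nonnegative entries with each row summing to 1. A quick sanity check of the matrix above (a plain-Python sketch, no Sage needed):

```python
m = [[0.1, 0.2, 0.7],
     [0.5, 0.5, 0.0],
     [0.2, 0.2, 0.6]]

# Row-stochastic: nonnegative entries, each row summing to 1
# (up to floating-point rounding).
assert all(x >= 0 for row in m for x in row)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in m)
```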

As this one has positive entries, it is ergodic. Now, given a starting probability vector `P0`, you can iterate `m` on it:

```
sage: P0 = vector([0.2, 0.3, 0.5]) ; P0
(0.200000000000000, 0.300000000000000, 0.500000000000000)
sage: P = P0 * m^123 ; P
(0.259740259740260, 0.285714285714286, 0.454545454545454)
```
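The same iteration can be sketched in plain Python, without Sage; `step` is an illustrative helper computing the row-vector product `P * m`:

```python
# Plain-Python sketch of the same iteration.
# The matrix and starting vector mirror the Sage example above.
m = [[0.1, 0.2, 0.7],
     [0.5, 0.5, 0.0],
     [0.2, 0.2, 0.6]]
P0 = [0.2, 0.3, 0.5]

def step(P, m):
    """One Markov step: returns the row vector P * m."""
    n = len(P)
    return [sum(P[i] * m[i][j] for i in range(n)) for j in range(n)]

P = P0
for _ in range(123):
    P = step(P, m)
print(P)  # close to (0.2597..., 0.2857..., 0.4545...)
```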

Now, if you want to sample according to this probability vector, you can make 3 boxes partitioning the interval `[0,1]`, whose lengths correspond to the entries of your probability vector, pick a uniformly random point in `[0,1]`, and check in which box it falls:

```
sage: def sample(P):
....:     s = 0
....:     a = random()
....:     for box in range(len(P)):
....:         s += P[box]
....:         if a <= s:
....:             return box
sage: sample(P)
1
sage: sample(P)
2
sage: sample(P)
0
sage: sample(P)
0
sage: sample(P)
1
sage: sample(P)
2
```
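The same box-picking idea can be written in plain Python; precomputing the running sums and using `bisect` turns each draw into a binary search instead of a linear scan. The names `sample_bisect` and `cumulative` are illustrative, not Sage API:

```python
import random
from bisect import bisect_left

def sample_bisect(P):
    """Draw an index i with probability P[i], by binary search on the CDF."""
    cumulative = []
    s = 0.0
    for p in P:
        s += p
        cumulative.append(s)
    i = bisect_left(cumulative, random.random())
    return min(i, len(P) - 1)  # guard against rounding in the last box

# Empirical check: frequencies should approach P over many draws.
random.seed(0)
P = [20/77, 2/7, 5/11]
draws = 50000
counts = [0] * len(P)
for _ in range(draws):
    counts[sample_bisect(P)] += 1
freqs = [c / draws for c in counts]
```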

Looking for the steady-state probability, you can approach it by iterating the matrix `m` a lot:

```
sage: vector([1,0,0]) * m^1000
(0.259740259740259, 0.285714285714285, 0.454545454545454)
```

The convergence (and its speed) is guaranteed by the Perron-Frobenius theorem.
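You can see that rate in plain Python: the subdominant eigenvalue of this matrix is 0.3 (Sage's eigenvector computation for this matrix lists the eigenvalues 1.0, -0.1 and 0.3), so the distance to the steady state should shrink by roughly a factor 0.3 per step. `step` and `dist` are illustrative helpers:

```python
# Measure how fast P0 * m^n approaches the steady state (plain Python).
m = [[0.1, 0.2, 0.7], [0.5, 0.5, 0.0], [0.2, 0.2, 0.6]]
pi = [20/77, 2/7, 5/11]  # exact steady state of this matrix

def step(P):
    n = len(P)
    return [sum(P[i] * m[i][j] for i in range(n)) for j in range(n)]

def dist(P):
    """Max-norm distance to the steady state."""
    return max(abs(P[k] - pi[k]) for k in range(len(P)))

P = [1.0, 0.0, 0.0]
ratios = []
prev = dist(P)
for _ in range(10):
    P = step(P)
    cur = dist(P)
    ratios.append(cur / prev)
    prev = cur
# ratios settle near 0.3, the modulus of the subdominant eigenvalue
```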

If you are not sure how big the exponent should be to reach the steady state, you can let Sage discover a fixed point by itself:

```
sage: def steady_state_iteration(m):
....:     # just in case a vector like (1,0,0,0) falls in the contracting space
....:     a = random()
....:     V = vector([a, 1-a] + [0 for i in range(len(m[0])-2)])
....:     while True:
....:         W = V * m
....:         W = W/sum(W)
....:         if V == W:
....:             return V
....:         else:
....:             V = W
sage: steady_state_iteration(m)
(0.25974025974, 0.285714285714, 0.454545454545)
```
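The exact test `V == W` terminates here because the iteration eventually hits a floating-point fixed point, but on other matrices the last bits can keep oscillating forever. A plain-Python variant that stops at a small tolerance avoids that risk (`steady_state_iteration_tol` is an illustrative name, not Sage API):

```python
import random

def steady_state_iteration_tol(m, tol=1e-13):
    """Power iteration on a row-stochastic matrix (list of lists),
    stopping when successive distributions differ by at most tol."""
    n = len(m)
    a = random.random()
    V = [a, 1.0 - a] + [0.0] * (n - 2)
    while True:
        W = [sum(V[i] * m[i][j] for i in range(n)) for j in range(n)]
        total = sum(W)          # renormalize to tame rounding drift
        W = [w / total for w in W]
        if max(abs(V[k] - W[k]) for k in range(n)) <= tol:
            return W
        V = W

m = [[0.1, 0.2, 0.7], [0.5, 0.5, 0.0], [0.2, 0.2, 0.6]]
pi_hat = steady_state_iteration_tol(m)
print(pi_hat)  # close to (20/77, 2/7, 5/11)
```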

You can also ask Sage for the dominating eigenvector. But you will notice that Sage does not accept to compute eigenvectors over the `Real Field`, because no efficient and reliable method was written for it:

```
sage: m.left_eigenvectors()
NotImplementedError: eigenspaces cannot be computed reliably for inexact rings such as Real Field with 53 bits of precision,
consult numerical or symbolic matrix classes for other options
```

A first possibility is to work over an exact field like the rationals, and look for the (normalized) eigenvector whose eigenvalue is `1`:

```
sage: def steady_state_eigenvector(m):
....:     for eig in m.left_eigenvectors():
....:         if eig[0] == 1:
....:             v = eig[1][0]
....:             return v/sum(v)
sage: m = matrix([[1/10,2/10,7/10],[5/10,5/10,0],[2/10,2/10,6/10]]) ; m
[1/10  1/5 7/10]
[ 1/2  1/2    0]
[ 1/5  1/5  3/5]
sage: steady_state_eigenvector(m)
(20/77, 2/7, 5/11)
sage: steady_state_eigenvector(m).change_ring(RR)
(0.259740259740260, 0.285714285714286, 0.454545454545455)
```

So you can check that it is quite close to the numerical approach above.
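If Sage is not at hand, the same exact computation can be sketched in plain Python with `fractions.Fraction`, solving the balance equations `pi * m = pi` together with `sum(pi) = 1` by Gaussian elimination; `exact_steady_state` is an illustrative helper, not a standard function:

```python
from fractions import Fraction as F

def exact_steady_state(m):
    """Solve pi * m = pi with sum(pi) = 1, exactly over the rationals."""
    n = len(m)
    # Balance equations for the first n-1 states (the last one is redundant):
    #   sum_i pi_i * (m[i][j] - [i == j]) = 0
    A = [[m[i][j] - (F(1) if i == j else F(0)) for i in range(n)]
         for j in range(n - 1)]
    A.append([F(1)] * n)            # normalization: sum(pi) = 1
    b = [F(0)] * (n - 1) + [F(1)]
    # Gauss-Jordan elimination with nonzero pivoting.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] = b[r] - f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

m = [[F(1, 10), F(2, 10), F(7, 10)],
     [F(5, 10), F(5, 10), F(0)],
     [F(2, 10), F(2, 10), F(6, 10)]]
pi = exact_steady_state(m)
print(pi)  # [Fraction(20, 77), Fraction(2, 7), Fraction(5, 11)]
```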

Another possibility (suggested by @ppurka in the comments) is to work in the `Real Double Field`:

```
sage: m = m.change_ring(RDF)
sage: m.left_eigenvectors()
[(1.0, [(0.435503556309, 0.47905391194, 0.762131223541)], 1),
(-0.1, [(0.707106781187, 3.40584039133e-16, -0.707106781187)], 1),
(0.3, [(-0.348742916231, -0.464990554975, 0.813733471207)], 1)]
```

But here, you get no output:

```
sage: steady_state_eigenvector(m)
sage:
```

This is due to the fact that the `1.0` given in the answer is not precisely equal to `1`:

```
sage: m.left_eigenvectors()[0][0] == 1
False
```

Hence, you have to look for an approximate `1` instead:

```
sage: def steady_state_eigenvector_approximate(m, tol=2^(-50)):
....:     for eig in m.left_eigenvectors():
....:         if abs(eig[0] - 1) <= tol:
....:             v = eig[1][0]
....:             return v/sum(v)
sage: steady_state_eigenvector_approximate(m)
(0.25974025974, 0.285714285714, 0.454545454545)
```
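For a purely numerical cross-check outside Sage, assuming NumPy is available: left eigenvectors of `m` are right eigenvectors of its transpose, so `numpy.linalg.eig(m.T)` recovers the same stationary distribution once you pick the eigenvalue closest to 1 and normalize:

```python
import numpy as np

m = np.array([[0.1, 0.2, 0.7],
              [0.5, 0.5, 0.0],
              [0.2, 0.2, 0.6]])

# Left eigenvectors of m are right eigenvectors of m.T.
eigvals, eigvecs = np.linalg.eig(m.T)
# Pick the eigenvalue closest to 1 (it is only approximately 1.0 in floats).
k = np.argmin(np.abs(eigvals - 1))
v = np.real(eigvecs[:, k])
pi = v / v.sum()   # normalizing also fixes the arbitrary eigenvector sign
print(pi)          # approximately [0.2597, 0.2857, 0.4545]
```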

Copyright Sage, 2010. Some rights reserved under creative commons license. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.