
ergodic markov chain - steady state

asked 2013-06-11 22:57:59 +0200


Hello everyone,

Is there any way to define an ergodic Markov chain, or even its steady-state probability, and to select one state according to that probability at each iteration?

Best, Aissan


Comments

Question looks too general, ask Google about "coupling from the past".

Bétréma ( 2013-06-12 11:39:54 +0200 )

2 Answers


answered 2013-06-13 10:24:40 +0200

tmonteil

updated 2013-06-13 11:11:07 +0200

Iterating the matrix a lot is how you find the steady state; there is no iteration involved in picking a "random" element "in" the steady state: it is just a matter of picking a uniformly random number in [0,1] and looking where it falls according to the steady-state probabilities, so you will not lose much time there even for quite big matrices.

It should also be noticed that, since the matrix is positive, iterating converges quite fast to the dominating eigenvector (thanks to the Perron-Frobenius theorem there is no numerical precision issue, since the corresponding eigendirection is attractive); this roughly corresponds to the ergodicity you mentioned. Hence, we are lucky to be looking for the dominating eigenvector. In our example:

sage: m^32
[0.259740259740260 0.285714285714286 0.454545454545454]
[0.259740259740260 0.285714285714286 0.454545454545454]
[0.259740259740260 0.285714285714286 0.454545454545455]

So 32 iterations are enough here to find the steady state.
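
If you want to check that the iteration has indeed stabilized, a quick sanity check (a sketch, not part of the original answer) is to compare two successive powers; the difference should be at the level of rounding errors:

sage: max(abs(x) for x in (m^32 - m^64).list())   # should be around 1e-15 or smaller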

Notice also that ATLAS can handle quite big matrices. If you want to take advantage of ATLAS, you have to use a matrix with entries in RDF as well (for the same reason as for the .left_eigenvectors() method: some specialized software included in Sage only deals with the double-precision floating-point numbers handled directly by the processor, not the ones emulated by MPFR as in RR). You can get those functionalities by simply doing:

sage: m = m.change_ring(RDF)
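
As a rough illustration (the size 500 and the power 64 are arbitrary choices, not part of the original answer), here is one way to build a larger random stochastic matrix directly over RDF and iterate it:

sage: n = 500
sage: a = random_matrix(RDF, n, n).apply_map(abs)                # nonnegative entries
sage: p = matrix(RDF, [(r / sum(r)).list() for r in a.rows()])   # normalize each row
sage: p_inf = p^64                                               # every row approximates the steady state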

Thanks to @ppurka for mentioning that the .left_eigenvectors() method works with matrices over RDF; I will update my answer accordingly.


answered 2013-06-12 11:49:11 +0200

tmonteil

updated 2013-06-13 11:08:31 +0200

A Markov chain is just a stochastic matrix:

sage: m = matrix([[0.1, 0.2, 0.7], [0.5, 0.5, 0], [0.2, 0.2, 0.6]])
sage: m
[0.100000000000000 0.200000000000000 0.700000000000000]
[0.500000000000000 0.500000000000000 0.000000000000000]
[0.200000000000000 0.200000000000000 0.600000000000000]

As this one has positive entries, it is ergodic. Now, given a starting probability vector P0, you can iterate m on it:

sage: P0 = vector([0.2, 0.3, 0.5]) ; P0
(0.200000000000000, 0.300000000000000, 0.500000000000000)

sage: P = P0 * m^123 ; P               
(0.259740259740260, 0.285714285714286, 0.454545454545454)

Now, if you want to sample according to this probability vector, you can divide the interval [0,1] into 3 boxes whose lengths correspond to the entries of your probability vector, pick a uniformly random point in [0,1], and check in which box it falls:

sage: def sample(P):
....:     s = 0
....:     a = random()
....:     for box in range(len(P)):
....:         s += P[box]
....:         if a <= s:
....:             return box
sage: sample(P)
1
sage: sample(P)
2
sage: sample(P)
0
sage: sample(P)
0
sage: sample(P)
1
sage: sample(P)
2
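
As a quick sanity check (a sketch, not part of the original answer), you can draw many samples and compare the empirical frequencies with P:

sage: counts = [0, 0, 0]
sage: for i in range(10000):
....:     counts[sample(P)] += 1
sage: [c / 10000.0 for c in counts]   # should be close to (0.26, 0.29, 0.45)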

If you are looking for the steady-state probability, you can approximate it by iterating the matrix m a lot:

sage: vector([1,0,0]) * m^1000
(0.259740259740259, 0.285714285714285, 0.454545454545454)

The convergence (quality and speed) is guaranteed by the Perron-Frobenius theorem.
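
More precisely, the speed of convergence is governed by the second largest eigenvalue modulus. A quick way to look at it (a sketch, switching to RDF for the numerics, not part of the original answer):

sage: sorted(abs(ev) for ev in m.change_ring(RDF).eigenvalues())   # roughly [0.1, 0.3, 1.0] here

So the error after k iterations decays roughly like 0.3^k, which explains why a few dozen iterations already give full double precision.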

If you are not sure how big the exponent should be to reach the steady state, you can let Sage discover a fixed point by itself:

sage: def steady_state_iteration(m):
....:     # just in case vector like (1,0,0,0) falls in the contracting space.
....:     a = random()
....:     V = vector([a, 1-a] + [0 for i in range(len(m[0])-2)])
....:     while True:
....:         W = V * m
....:         W = W/sum(W)
....:         if V == W:
....:             return V
....:         else:
....:             V = W

sage: steady_state_iteration(m)
(0.25974025974, 0.285714285714, 0.454545454545)

You can also ask Sage for the dominating eigenvector. But you will notice that Sage refuses to compute eigenvectors over the Real Field, because no efficient and reliable algorithm is implemented there:

sage: m.left_eigenvectors()
NotImplementedError: eigenspaces cannot be computed reliably for inexact rings such as Real Field with 53 bits of precision,
consult numerical or symbolic matrix classes for other options

A first possibility is to work over an exact field like the rationals, and look for the (normalized) eigenvector whose eigenvalue is 1:

sage: def steady_state_eigenvector(m):
....:     for eig in m.left_eigenvectors():
....:         if eig[0] == 1:
....:             v = eig[1][0]
....:             return v/sum(v)

sage: m = matrix([[1/10,2/10,7/10],[5/10,5/10,0],[2/10,2/10,6/10]]) ; m
[1/10  1/5 7/10]
[ 1/2  1/2    0]
[ 1/5  1/5  3/5]

sage: steady_state_eigenvector(m)
(20/77, 2/7, 5/11)
sage: steady_state_eigenvector(m).change_ring(RR)
(0.259740259740260, 0.285714285714286, 0.454545454545455)

So, you can check that it is quite close to the numerical result above.

Another possibility (suggested by @ppurka in the comments) is to work over RDF, where the .left_eigenvectors() method is available.
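
A minimal sketch of how that could look (the tolerance 1e-10 is an arbitrary choice, not part of the original answer):

sage: mdf = m.change_ring(RDF)
sage: for ev, vecs, mult in mdf.left_eigenvectors():
....:     if abs(ev - 1) < 1e-10:   # the steady state is the left eigenvector for eigenvalue 1
....:         print(vecs[0] / sum(vecs[0]))

This should agree with the exact rational result above.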


Comments

"Iterating a lot" is the obvious way for picking a "random" element "in" the steady state, problems arise with the specification of "a lot" :-) And in real life, matrices which describe Markov chains are rather large.

Bétréma ( 2013-06-13 09:12:51 +0200 )

It is not that Sage does not *like* to compute eigenvalues in floating-point fields; it is just that it cannot guarantee the required precision in RR and be fast at the same time. You can specify everything to belong to RDF instead, and Sage will happily compute the eigenvectors for you:

sage: matrix(RDF, [[1.0, 1.5],[2, 4]]).left_eigenvectors()
[(0.208712152522, [(-0.929866873266, 0.367896178294)], 1),
 (4.79128784748, [(-0.466583903687, -0.884476941938)], 1)]

Also, ticket 13660 is relevant in this context.

ppurka ( 2013-06-13 09:35:22 +0200 )

Not enough space in this comment box; see my comment below.

tmonteil ( 2013-06-13 10:15:14 +0200 )
