
Iterating the matrix many times is used to find the steady state; there is no iteration involved in picking a "random" element "in" the steady state. It is just a matter of picking a random number in `[0,1]` and looking where it falls according to the steady state, so you will not lose much time there even for quite big matrices.
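In plain Python, that sampling step can be sketched as follows (a minimal sketch: the function name is made up, and the distribution values are the steady state from the example below):

```python
import random

# Steady-state distribution (rounded values from the example below).
pi = [0.2597, 0.2857, 0.4545]

def sample_state(pi):
    """Pick a state according to the steady-state distribution: draw a
    uniform number in [0, 1) and find which interval of the cumulative
    distribution it falls into."""
    u = random.random()
    cumulative = 0.0
    for state, p in enumerate(pi):
        cumulative += p
        if u < cumulative:
            return state
    return len(pi) - 1  # guard against floating-point rounding

print(sample_state(pi))
```

Each draw costs a single pass over the distribution, independently of how many matrix iterations were needed to compute it.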

It should also be noticed that, since the matrix is positive, iterating converges quite fast to the dominating eigenvector (thanks to the Perron-Frobenius theorem there is no numerical precision issue, since the corresponding eigenline is attractive); this roughly corresponds to the ergodic hypothesis you mentioned. Hence, we are lucky to be looking for the dominating eigenvector. In our example:

```
sage: m^32
[0.259740259740260 0.285714285714286 0.454545454545454]
[0.259740259740260 0.285714285714286 0.454545454545454]
[0.259740259740260 0.285714285714286 0.454545454545455]
```

So 32 iterations are enough here to find the steady state.
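Outside Sage, the same convergence can be illustrated in plain Python (the 3x3 positive stochastic matrix below is made up for the sketch; it is not the matrix from the question):

```python
# Repeatedly multiplying a positive stochastic matrix by itself drives
# every row to the same vector: the steady state.
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

m = [[0.1, 0.4, 0.5],
     [0.3, 0.3, 0.4],
     [0.3, 0.2, 0.5]]

p = m
for _ in range(31):  # p equals m^32 after the loop
    p = matmul(p, m)

# All rows now agree to many decimal places; each row is the steady state.
for row in p:
    print(["%.12f" % x for x in row])
```

The rows coincide because the subdominant eigenvalues have modulus strictly less than 1, so their contribution shrinks geometrically with each iteration.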

Notice also that ATLAS can handle quite big matrices. If you want to take advantage of ATLAS, you also have to use a matrix with entries in `RDF` (for the same reason as for the `.left_eigenvectors()` method: some specialized software included in Sage only deals with the double-precision floating-point numbers that are handled directly by the processor, not the ones emulated by MPFR as in `RR`). You can get those functionalities by simply doing:

```
sage: m = m.change_ring(RDF)
```
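For comparison, the same double-precision left-eigenvector computation can be sketched with NumPy, which, like Sage's `RDF` matrices, calls optimized LAPACK routines (the matrix below is again made up for the sketch):

```python
import numpy as np

# Made-up positive stochastic matrix for the sketch.
m = np.array([[0.1, 0.4, 0.5],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])

# Left eigenvectors of m are right eigenvectors of m.T.
eigvals, eigvecs = np.linalg.eig(m.T)

# Pick the eigenvector whose eigenvalue is closest to 1 (the Perron
# root of a stochastic matrix) and normalize it to sum to 1: this is
# the steady state.
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()
print(pi)
```

This is all done in processor-native double precision, which is exactly why restricting to `RDF` entries lets Sage dispatch the work to the fast compiled routines.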

Thanks @ppurka for mentioning that the `.left_eigenvectors()` method works with matrices in `RDF`; I will update my answer accordingly.

