# The performance of the "len(str(n))" and "a^b % p" methods in Sage

Hi.

Using the same Jupyter notebook, I've found that identical code runs much faster in SageMath than in plain Python.

The code is as follows.

```
import timeit

def modular():
    return 3**10000000 % (2**127 - 1)

def countlength():
    return len(str(3**10000000))

print(timeit.timeit(modular, number=1))
print(timeit.timeit(countlength, number=1))
```

In SageMath, both functions run in under a second. In Python, on the other hand, "modular" is about 5 times slower, and "countlength" never returns an answer at all.
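As an aside, if only the digit count is needed, plain Python can get it from a logarithm instead of converting the huge number to a string at all. This is a sketch with a small stand-in exponent; note that for exponents where the fractional part of `e * log10(3)` lands extremely close to an integer, floating-point rounding could make the floor off by one:

```python
import math

e = 1000  # small stand-in for the question's exponent 10**7

# digits of 3**e = floor(e * log10(3)) + 1, no giant string needed
digits_via_log = math.floor(e * math.log10(3)) + 1
digits_via_str = len(str(3**e))  # the question's approach

assert digits_via_log == digits_via_str
print(digits_via_log)  # 478
```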

It makes me curious: what's the magic behind SageMath's speed?
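Part of the answer for the modular case: Sage's integers are backed by GMP, and plain Python can close much of the gap by using the built-in three-argument `pow`, which performs true modular exponentiation instead of building the full power first. A sketch, with the exponent reduced so the naive version finishes quickly:

```python
import timeit

M = 2**127 - 1      # the modulus from the question
e = 1000000         # reduced from 10**7 so the naive version stays quick

def naive():
    # builds the full ~477000-digit power, then reduces it once
    return 3**e % M

def modexp():
    # pow(base, exp, mod) reduces modulo M at every squaring step
    return pow(3, e, M)

assert naive() == modexp()
print(timeit.timeit(naive, number=1))
print(timeit.timeit(modexp, number=1))
```

The second timing should be dramatically smaller, because the intermediate values never grow beyond 127 bits.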

I'm currently working on algorithm and code optimization, so any advice from more experienced users would be appreciated.

Thanks.

It should be more efficient to compute the first function as

`Mod(3,2**127-1)**10000000`

which relies on modular exponentiation.

@MaxAlekseyev: I've tried it myself, and indeed it is. Thank you for the useful tip.

In Sage, even faster is:

`R = Integers(2**127 - 1); R(3)**10000000`

Sage's modular arithmetic is faster than Python's.

John - it's the same as in my comment above.

No, here's what I mean: `%timeit R(3)**10000000` gives a faster time than `%timeit Mod(3,2**127-1)**10000000`. With `R` defined as I did and with `b=2**127-1`, `%timeit Mod(3, b, R)**10000000` is comparable. So if you're going to do many computations, either pass the `parent` argument to `Mod` or compute inside the ring `R`.