Compiled .spyx code is slower than interpreted .sage code
I have written two files, the first named "speedtest.sage" and the second "speedtest.spyx".
speedtest.sage:
def speedtest(n):
    f = 1
    for i in range(0, n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321
speedtest.spyx:
def speedtest_compiled(n):
    f = 1
    for i in range(0, n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321
The two files are identical except for the name of the function. I get this output on my notebook:
sage: load speedtest.sage
sage: load speedtest.spyx
Compiling speedtest.spyx...
sage: time speedtest(10^4)
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: time speedtest(10^5)
CPU times: user 0.14 s, sys: 0.02 s, total: 0.16 s
Wall time: 0.16 s
sage: time speedtest(10^6)
CPU times: user 1.39 s, sys: 0.10 s, total: 1.49 s
Wall time: 1.50 s
sage: time speedtest_compiled(10^4)
CPU times: user 0.08 s, sys: 0.00 s, total: 0.08 s
Wall time: 0.08 s
sage: time speedtest_compiled(10^5)
CPU times: user 7.87 s, sys: 0.00 s, total: 7.87 s
Wall time: 7.87 s
The interpreted Python code of speedtest() behaves completely normally, as you can see: its running time grows roughly linearly with n (0.02 s → 0.16 s → 1.50 s for each tenfold increase). BUT the compiled Cython code behaves strangely. Not only is it extremely slow, its running time is not even close to linear in n: going from 10^4 to 10^5 iterations increased it by a factor of about 100, and I could not evaluate speedtest_compiled(10^6) at all because it took too long. What is the explanation for this, and how do I compile correctly to get the 10-1000x speedup I have read about?
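
To make the nonlinearity easy to check, here is a minimal sketch of how one could measure the growth factor between two input sizes (plain Python; growth_factor is just a helper name I made up, and the snippet assumes speedtest and speedtest_compiled are already loaded in the current session, as in the transcript above):

import time

def growth_factor(func, n_small, n_large):
    # Time func(n_small) and func(n_large) once each and return the
    # ratio of the two running times.
    t0 = time.perf_counter()
    func(n_small)
    t1 = time.perf_counter()
    func(n_large)
    t2 = time.perf_counter()
    return (t2 - t1) / (t1 - t0)

# For a linear-time function this should be close to 10:
print(growth_factor(speedtest, 10**4, 10**5))
print(growth_factor(speedtest_compiled, 10**4, 10**5))

From the timings above I would expect a result of roughly 8 for the first call but roughly 100 for the second.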