
Mustafa's profile - activity

2020-12-30 13:07:18 +0200 received badge  Popular Question (source)
2020-12-30 13:07:18 +0200 received badge  Notable Question (source)
2020-12-27 15:00:53 +0200 received badge  Notable Question (source)
2019-01-13 09:58:49 +0200 received badge  Famous Question (source)
2018-06-16 21:49:30 +0200 received badge  Notable Question (source)
2018-02-03 17:51:10 +0200 received badge  Popular Question (source)
2016-06-07 11:02:08 +0200 received badge  Notable Question (source)
2014-01-09 14:01:43 +0200 received badge  Popular Question (source)
2013-10-23 22:04:46 +0200 received badge  Popular Question (source)
2013-06-20 09:10:16 +0200 received badge  Taxonomist
2011-01-11 10:41:45 +0200 marked best answer Time-consuming of compiled files

First: while compiling something will often give a slight speedup when the loop overhead itself is the problem, the real speedup mostly comes not just from compiling functions but from marking them up with type information ("cdef int a") and so on.

The reason interpreted languages like Python are slower than compiled languages isn't the interpreter itself; that's a common misconception. (If it were, then removing the interpreter loop and executing the bytecode directly would be fast, and it isn't: people have tried.) They are slow mostly because of dynamic dispatch: when you give Python "a+b", it has to check the types of a and b, look up an appropriate add method, and so on, all of which takes many cycles, while C compiles the same expression to (say) one flop and a copy. So the real gain comes from specifying types, but of course this comes at a cost: C doesn't have a type corresponding to an arbitrary-length integer, for example.
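
To make that concrete, here is a minimal sketch of my own (the function names and the toy loop are invented for illustration), showing the same loop in a .spyx file with and without type markup; only the second escapes the per-iteration dispatch:

# untyped: compiled by Cython, but every + and * still goes through
# Python's object dispatch on each iteration
def dot_untyped(x, n):
    s = 0
    for i in range(n):
        s = s + x*i
    return s

# typed: the same loop compiles down to plain C arithmetic
def dot_typed(long x, long n):
    cdef long i, s
    s = 0
    for i in range(n):
        s = s + x*i
    return s

Note the trade-off mentioned above: with cdef long the arithmetic silently overflows once the values exceed the range of a C long.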

As for your particular functions, you can understand the behaviour if you actually look at what the functions return (I added a "return f" to the end of each function, which is something I think you probably should have done first):


sage: time speedtest(10**2)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
648037159632282329931500412681871807401802252780713670001839379
sage: time speedtest_compiled(10**2)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
1232110414200870712562346350972327740484463519610977249336410934754319533890407441902694104728894548432498279447853775120001L

Two things are going on. First, the operations bind differently than in a .sage file: Python's "%" binds tighter than "^", so f=17*f^2 % 123.. parses as f=(17*f)^(2 % 123..), and since 2 % N is just 2 for this huge N, no mod of f is taken at all. Second, the "^" is being interpreted as a bitwise XOR operation (as in C), not as exponentiation, because .spyx files don't go through the Sage preparser. (A quick plain-Python check follows the generated code below.) To see what code is actually being generated, I looked in .sage/temp, and it showed that it was interpreting the loop as


 *     f = 1
 *     for i in range(0,n):
 *         f= 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321             # <<<<<<<<<<<<<<
 *     return f
 * 
 */
    __pyx_t_3 = PyNumber_Multiply(__pyx_int_17, __pyx_v_f); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 9; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_3);
    __pyx_t_4 = PyNumber_Xor(__pyx_t_3, __pyx_int_2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 9; __pyx_clineno = __LINE__; goto __pyx_L1_error;}
    __Pyx_GOTREF(__pyx_t_4);
    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
    __Pyx_DECREF(__pyx_v_f);
    __pyx_v_f = __pyx_t_4;
    __pyx_t_4 = 0;
  }
  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
which isn't what we wanted.
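
You can verify both effects in plain Python, which is how a .spyx file is parsed (this little check is my addition, not part of the original session):

>>> f = 3
>>> 17*f^2 % 1000        # parsed as (17*f) ^ (2 % 1000), i.e. XOR
49
>>> (17*f) ^ 2
49
>>> (17*f**2) % 1000     # what was actually intended
153

The following version uses ** for exponentiation and parenthesizes before taking the mod: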
def speedtest_compiled_v2(n):
    f = 1   
    for i in range(0,n):
        f=(17*f**2) % 1234565432134567898765456787654323456787654345678765456787654321
    return f
It should return the right values, but it still won't be faster. The real speed gain comes in when you can do this:

#pure python
def speedtest_v3(n):
    f = 1    
    for i in range(n):
        f= (17*f*f) % 1234565
    return f

# cython
def speedtest_compiled_v3(long n):
    cdef long i,f
    f = 1    
    for i in range(n):
        f= (17*f*f) % 1234565
    return f
2011-01-11 10:36:10 +0200 commented answer Time-consuming of compiled files

Thank you very much - I made lots of mistakes (I'm a beginner in Sage/Python with not much programming experience in general, so that comment helps!). I'm experimenting with my own implementations of modern integer-factoring algorithms, so the integers usually have >>20 digits - so there's no straightforward way to speed this up with Cython, I guess?

2011-01-11 08:01:38 +0200 received badge  Editor (source)
2011-01-11 08:00:50 +0200 asked a question Time-consuming of compiled files

I have made two files, the first named "speedtest.sage", the second "speedtest.spyx"

speedtest.sage:

def speedtest(n):
    f = 1
    for i in range(0,n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321

speedtest.spyx:

def speedtest_compiled(n):
    f = 1
    for i in range(0,n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321

They are identical modulo the name of the speedtest function. I get this output on my notebook:

sage: load speedtest.sage
sage: load speedtest.spyx
Compiling speedtest.spyx...

sage: time speedtest(10^4)
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s

sage: time speedtest(10^5)
CPU times: user 0.14 s, sys: 0.02 s, total: 0.16 s
Wall time: 0.16 s

sage: time speedtest(10^6)
CPU times: user 1.39 s, sys: 0.10 s, total: 1.49 s
Wall time: 1.50 s

sage: time speedtest_compiled(10^4)
CPU times: user 0.08 s, sys: 0.00 s, total: 0.08 s
Wall time: 0.08 s

sage: time speedtest_compiled(10^5)
CPU times: user 7.87 s, sys: 0.00 s, total: 7.87 s
Wall time: 7.87 s

The interpreted Python code of speedtest() behaves absolutely normally, as you can see - BUT the compiled Cython code behaves strangely: not only is it extremely slow, its running time is not even close to linear in n (notice that I could not evaluate speedtest_compiled(10^6); it took too long). What is the explanation for this - and how do I compile correctly to gain the 10-1000x speedup I've read about?

2011-01-11 07:52:02 +0200 commented answer Efficient kernel-computing of sparse GF2 matrices

Ok, thank you, perhaps it will be implemented in coming versions.

2011-01-10 12:40:07 +0200 commented answer Efficient kernel-computing of sparse GF2 matrices

The first didn't change the speed - probably it uses the same algorithm with sparse=True?! I did not find any of those algorithms; I'm a bit surprised about that, since they should be of general interest, but I'm new to Sage, so maybe I just didn't see them. Yes, implementing them myself is always possible, but I had hoped for work already done :)

2011-01-09 22:30:58 +0200 asked a question Efficient kernel-computing of sparse GF2 matrices

To compute the kernel of a sparse GF(2) matrix A I simply do

A.kernel()

But for (highly) sparse GF(2) matrices there are (many) much faster algorithms to compute the kernel. Surely some of them are implemented in Sage, but with which command do I apply them? (The tutorial and online help couldn't answer that.)
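
For context, constructing such a matrix and calling .kernel() looks like this (a sketch of mine, with made-up sizes, not part of the original question):

sage: A = random_matrix(GF(2), 500, 500, density=0.01, sparse=True)
sage: K = A.kernel()      # same interface whether A is dense or sparse
sage: K.dimension()       # dimension of the null space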

2011-01-09 16:06:01 +0200 commented answer Converting kernel to matrix

That is it! Thank you.

2011-01-09 16:05:36 +0200 marked best answer Converting kernel to matrix

(Showing my work).

The first thing I tried was the obvious:

sage: vectors = [[1,0,1],[1,0,1],[1,1,0]]
sage: A = matrix(GF(2),vectors)
sage: B = A.kernel()
sage: matrix(B)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[...]
ValueError: Invalid matrix constructor.  Type matrix? for help

which didn't work. So I looked at B:

sage: B
Vector space of degree 3 and dimension 1 over Finite Field of size 2
Basis matrix:
[1 1 0]

and saw that it clearly knows about the matrix I think you're looking for. In Sage, functionality often lives inside objects, so if you type help(B) or type "B." and then tab, you can see the list of internal functions, and I found both .matrix and .basis_matrix. I think either should work here, so:

sage: M = B.basis_matrix()
sage: M
[1 1 0]
sage: type(M)
<type 'sage.matrix.matrix_mod2_dense.Matrix_mod2_dense'>
sage: M.nrows()
1

Is that what you were after?
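
(A quick check of my own, following the answer's pointer that .matrix should work too:)

sage: B.matrix()
[1 1 0]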

2011-01-09 16:05:36 +0200 received badge  Scholar (source)
2011-01-09 07:15:35 +0200 asked a question Converting kernel to matrix

I construct a matrix A and compute its kernel by

A = matrix(GF(2),vectors)
B = A.kernel()

Now I want to convert B to a matrix so that I can use .nrows() and other matrix methods. How do I do that?

2011-01-08 16:37:16 +0200 received badge  Student (source)
2011-01-08 11:07:17 +0200 commented answer Compiling .spyx-files

Yes, sorry, thank you Mike and DSM - it must seem as if I set out to annoy people here with this. I did correct that in my posting, but didn't realize that this could be the mistake ... To be sure: on my machine there's absolutely no speed-up of the compiled version (it's even a bit slower) - why's that?

2011-01-08 11:05:44 +0200 received badge  Supporter (source)
2011-01-08 08:11:29 +0200 asked a question Compiling .spyx-files

This is an implementation of the Pollard rho factorization algorithm in Sage (filename: exp.sage):

def pollard_rho(n):
    x = 2
    y = x
    t = 1

    while t == 1:
        x = mod(x,n)^2+1
        y = mod(mod(y,n)^2+1,n)^2+1
        t = gcd(x-y,n)

    if t < n:
        return t
    else:
        return 0

I load it with "load exp.sage" and it works.
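
As a quick usage check of my own (8051 = 83 * 97 is the textbook Pollard-rho example; with these starting values the loop should find the factor 97):

sage: load exp.sage
sage: pollard_rho(8051)   # should return 97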

But trying to compile

import sage.all

def pollard_rho(n):
    x = 2
    y = x
    t = 1

    while t == 1:
        x = sage.all.mod(x,n)**2+1
        y = sage.all.mod(sage.all.mod(y,n)**2+1,n)**2+1
        t = sage.all.gcd(x-y,n)

    if t < n:
        return t
    else:
        return 0

named "exp.spyx" produces several errors, one of them being:

 13     while t == 1:
 14       x = sage.all.mod(x,n)**2+1
 ---> 15       y = sage.all.mod(sage.mod(y,n)**2+1,n)**2+1
 16       t = sage.all.gcd(x-y,n)
 17     

 AttributeError: 'module' object has no attribute 'mod'

What is wrong here? Probably many things - I'm totally new to Sage, coming from PARI/GP, and I work in console mode - no experience with C or Python. Thank you.
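
For the record, a note of my own that matches the correction mentioned in the comment on this question: the failing line in the traceback calls sage.mod, which doesn't exist as an attribute of the sage module, while the listing above already uses the correct sage.all.mod. Fully qualified, the line runs without the AttributeError:

import sage.all

# standalone check with made-up values for y and n
y, n = 2, 91
y = sage.all.mod(sage.all.mod(y, n)**2 + 1, n)**2 + 1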