
Speeding up matrix multiplication?

asked 2011-11-04 17:46:03 +0200 by Robert Pollack, updated 2011-11-06 23:05:36 +0200

I'm currently trying to write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time-consuming) operation, performed tons of times, is simply taking the product of a large dense matrix, say $M$, with a vector $v$, both with integral entries.

More precisely, let $p$ be a (relatively small) prime (think $p=11$) and $N$ some integer (think 100). I have an $N \times N$ matrix $M$ and want to compute the product $M \cdot v$ modulo $p^N$ quickly.

I am simply using Sage's built-in matrix-by-vector multiplication, and I was surprised to see that working with matrices over ${\bf Z}/p^n{\bf Z}$ was much (i.e. 10 times) slower than working with matrices over ${\bf Z}$.

My question: is there a faster way to do this computation than using Sage's built-in matrix-times-vector multiplication over ${\bf Z}$?
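For concreteness, the bottleneck is essentially the following (a minimal sketch; the values of $p$ and $N$ and the random data are just stand-ins for the actual Hecke matrix and symbol):

sage: p, N = 11, 100                    # illustrative values from above
sage: R = Zmod(p^N)                     # the ring Z/p^N Z
sage: M = random_matrix(R, N)           # stand-in for the dense Hecke matrix
sage: v = random_vector(R, N)
sage: w = M*v                           # this product is what gets repeated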


2 Answers


answered 2011-11-12 11:22:49 +0200 by William Stein

Robert -- this could be fixed so it works exactly like you want, but that requires writing some Cython code and fairly advanced knowledge of Sage internals. One potential workaround is simply to multiply over ZZ and then reduce the answer, e.g., do something like this:

sage: M = random_matrix(ZZ,100)
sage: v = random_matrix(ZZ,100,1)
sage: timeit('(M*v) % 11^3')
625 loops, best of 3: 631 µs per loop

which beats all the following by a lot:

sage: M = random_matrix(ZZ,100)
sage: v = random_vector(Integers(11^3),100)
sage: timeit('M*v')
125 loops, best of 3: 2.55 ms per loop

sage: M = random_matrix(ZZ,100)
sage: v = random_matrix(Integers(11^3),100,1)
sage: timeit('M*v')
125 loops, best of 3: 4.83 ms per loop

sage: M = random_matrix(ZZ,100)
sage: v = random_vector(ZZ,100)
sage: timeit('(M*v) % 11^3')
125 loops, best of 3: 4.42 ms per loop
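For the iterated application in the original question, one could wrap this workaround in a small helper (just a sketch, not a built-in Sage function) that reduces after each step so the entries never grow beyond the modulus:

sage: def apply_mod(M, v, modulus):
....:     # multiply over ZZ, then reduce so entries stay below `modulus`
....:     return (M*v) % modulus
sage: p, n = 11, 3
sage: M = random_matrix(ZZ, 100) % p^n
sage: v = random_matrix(ZZ, 100, 1) % p^n
sage: for i in range(10):               # iterate the operator, reducing each step
....:     v = apply_mod(M, v, p^n)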

Incidentally, "write code to compute with overconvergent modular symbols" is something Ben Lundell and I at UW have been wanting to do in the near future too. Maybe we can coordinate? Email us.


answered 2011-11-06 15:24:33 +0200 by parzan

If numerical approximations are good enough for you, you can speed things up:

sage: m = random_matrix(ZZ, 100, 100, x=0, y=10^100)
sage: v = random_vector(ZZ, 100, x=0, y=10^100)
sage: timeit('m*v')
125 loops, best of 3: 7.37 ms per loop

Converting to RDF (double-precision floating point) is much faster:

sage: mr = matrix(RDF,m)
sage: vr = vector(RDF,v)
sage: timeit('mr*vr')
625 loops, best of 3: 367 µs per loop

If you post the code that produces your bottleneck, people may be able to offer more specific optimization ideas.


Comments

Thanks, parzan. Unfortunately, I need an exact computation. More precisely, I'm working modulo p^N where p^N is around 10^100. I'll edit my question accordingly. As for supplying code, the computation really boils down to the single line M*v applied again and again, where M is the large matrix and v the vector.

Robert Pollack (2011-11-06 22:51:23 +0200)

It seems that multiplying matrices over GF(p) is a bit faster, at least if p is small. Can you work over several different small primes and extract the information you need from that?

John Palmieri (2011-11-07 01:31:29 +0200)
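A rough sketch of that multi-prime idea (the primes below are illustrative, and it only applies if what you ultimately need can be recovered from reductions modulo several distinct small primes):

sage: primes = [101, 103, 107]                      # illustrative small primes
sage: M = random_matrix(ZZ, 100); v = random_matrix(ZZ, 100, 1)
sage: prods = [matrix(GF(q), M) * matrix(GF(q), v) for q in primes]
sage: # recover each entry modulo prod(primes) via the Chinese Remainder Theorem
sage: w = [crt([ZZ(P[i,0]) for P in prods], primes) for i in range(100)]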

I'll give it a try! Thanks.

Robert Pollack (2011-11-07 09:34:05 +0200)

