
Symbolic calculations in parallel inside Sage?

asked 2012-09-08 05:32:08 +0100

owari

Hello all,

It appears to me that Python has a number of tools for working with multiple cores/nodes in parallel: multiprocessing, MPI for Python, Parallel Python, Celery, Scoop, and so on (if I am right!). However, right now I am living inside Sage and see the Python world from its perspective. I found a question asked on this forum two years ago in which W. Stein explains that "The Sage roadmap, such as it is, doesn't address parallel computing" and "Check out the multiprocessing module in Python but watch out -- sometimes you really have to understand how certain things in Sage works to safely use it".

I am almost new to Sage and don't really know what is going on inside it. I have a large number (I mean a really great number) of functions to which some heavy symbolic calculations (integrations and so on) must be applied, but the calculation on each function is independent of the others. Since Sage has both Python and Maxima together in one place, I was wondering whether I can handle these independent symbolic tasks using the parallel facilities of the Python world.

Is this possible, or would you suggest SymPy instead? I already have code inside Sage, so I would prefer to keep living inside Sage, but I am not sure whether it is as compatible with Python as SymPy might be. I am also not sure whether SymPy (which has not yet reached its first stable release) would be as strong at these calculations as Maxima is inside Sage.

Any guidance would be appreciated.

Best Regards


1 Answer


answered 2012-09-08 06:46:08 +0100

Volker Braun

If your problem is trivially parallel (independent tasks) then you can just use the @parallel decorator:

sage: @parallel
....: def f(x):
....:     return x^2
....: 
sage: list(f(range(10)))
[(((0,), {}), 0), (((1,), {}), 1), (((2,), {}), 4), (((3,), {}), 9), (((4,), {}), 16), (((5,), {}), 25), (((6,), {}), 36), (((7,), {}), 49), (((8,), {}), 64), (((9,), {}), 81)]

This will use all cores of your computer to work in parallel. Of course you can use any of the other Python tools you mentioned to distribute your work tasks.
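As an aside, here is a minimal sketch (not from the original answer) of the same idea using Python's multiprocessing module from inside Sage. The integrands are made up for illustration, and it assumes the worker function and its symbolic inputs pickle cleanly, which is usually the case for simple expressions but is worth testing on your real data:

from multiprocessing import Pool

x = var('x')
tasks = [x^k * exp(-x) for k in range(8)]   # independent symbolic integrands (made up)

def work(expr):
    # each task is one independent, heavy symbolic integration
    return integrate(expr, x, 0, oo)

if __name__ == '__main__':
    pool = Pool(processes=4)        # adjust to the number of cores you want to use
    results = pool.map(work, tasks)
    pool.close()
    pool.join()
    print(results)

This is meant to be run as a .sage script; behaviour from the interactive shell can vary between platforms.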


Comments

Thanks. If I understand correctly, the @parallel decorator only allows a regular function to be called with a list of inputs (instead of the argument it was originally defined for), whose values are then computed in parallel, and that is not quite what I want. However, your remark that "you can use any of the other Python tools you mentioned to distribute your work tasks" is what I had hoped to hear! Is there any comparison of all those parallel processing methods in Python? Which one is easiest to adopt (with minor modification of the code) while still giving enough control (but not so much control that only experts can use it) over which calculations are sent to which CPU (on a PC or a cluster)? And which method is the most efficient overall? Is there any document about this? Thanks again.

owari ( 2012-09-08 10:42:02 +0100 )
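On the point about inputs, a minimal sketch (not from the original thread; the expressions and bounds are made up): a function decorated with @parallel can take several arguments, and you call it with a list of tuples, each tuple being unpacked into one call that runs in parallel with the others:

sage: @parallel(ncpus=4)
....: def g(expr, a, b):
....:     return integrate(expr, x, a, b)    # one heavy task per input tuple
....: 
sage: inputs = [(x^2, 0, 1), (sin(x), 0, pi), (exp(-x), 0, oo)]
sage: sorted(value for (args, value) in g(inputs))   # results may arrive in any order
[1/3, 1, 2]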

When you are dealing with trivial parallelism, you will end up using map/reduce. Having to control which CPU core / cluster node works on a given task is the exact opposite of what these frameworks try to achieve. Maybe you should ask your actual question instead of requesting an abstract treatise on the pros and cons of the different approaches.

Volker Braun ( 2012-09-08 14:39:01 +0100 )
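As a concrete, made-up illustration of that map/reduce pattern with @parallel: the expensive, independent integrations are the "map" step, and the cheap serial combination at the end is the "reduce" step:

sage: @parallel
....: def piece(expr):
....:     return integrate(expr, x, 0, 1)    # "map": independent heavy tasks
....: 
sage: sum(value for (args, value) in piece([x^k for k in range(20)]))   # "reduce": cheap serial combine
55835135/15519504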

OK, suppose I have an iteration loop in which, during each iteration, a number of functions are updated via expressions containing several symbolic differentiations and integrations. These updates are not coupled, so within each step of the loop there are plenty of functions standing in a queue to be updated one after another, each considerably time consuming, whereas each of them could be updated on its own CPU core / cluster node so that the whole updating step would take roughly the time of a single function update. That is, if the number of functions to be updated inside the loop is small, such parallelization will be useless, but if there are many, a parallel algorithm will be a big advantage.

owari ( 2012-09-08 17:05:48 +0100 )
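For the situation described above, a minimal sketch (the function names and the update rule are invented, not taken from the linked code) of keeping the outer iteration serial while farming out the independent per-function updates of each step with @parallel:

sage: @parallel(ncpus=4)
....: def update(name, expr):
....:     # hypothetical heavy update: one symbolic integration per function
....:     return (name, integrate(expr, x))
....: 
sage: funcs = {'f1': x^2, 'f2': x*sin(x), 'f3': exp(x)}
sage: for step in range(3):                              # outer loop stays serial
....:     for (args, result) in update(list(funcs.items())):
....:         name, new_expr = result                    # inner updates ran in parallel
....:         funcs[name] = new_expr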

I don't think it would be easy to also parallelize each heavy symbolic integration itself, sitting inside some series within each function update during each step of the iteration loop, but if it is possible it could reduce the running time of the code very considerably! [My real code, albeit in its simplest and smallest version with 12 functions being iterated over, is the one I linked to above in my question, if the details help.]

owari ( 2012-09-08 17:16:34 +0100 )

Your outer loop just isn't parallelizable: each iteration step depends on the previous one. You can only hope to do some steps of the inner loop in parallel, and the maximum gain is limited by the fixed number of tasks in the inner loop.

Volker Braun ( 2012-09-09 07:07:38 +0100 )
