Is parallel computation with mpi4py still supported?
I am working with Sage code that solves many independent instances of a problem and is therefore fully parallelizable. I would now like to port it to a multi-node cluster environment.
My current idea is to use a main process that manages a queue of problem instances, from which worker processes fetch instances, solve them, and save their results to the file system. In a multi-node environment, I understand that some form of message passing is needed to communicate the problem instances to the workers (which rules out managing the task queue with the @parallel decorator). I have found that the Python package mpi4py provides Python bindings for the Message Passing Interface. It also implements a very convenient MPIPoolExecutor class, which manages exactly such a task queue.
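Roughly, what I have in mind is the following sketch; solve_instance and the instance range are placeholders for my actual computation:

# sketch of the intended setup: a pool of MPI workers fed from a task queue
from mpi4py.futures import MPIPoolExecutor

def solve_instance(instance):
    # placeholder for solving one independent problem instance
    # (the real code would also save its result to the file system)
    return instance**2

if __name__ == "__main__":
    instances = range(100)  # placeholder for the real problem queue
    with MPIPoolExecutor() as executor:
        for result in executor.map(solve_instance, instances):
            print(result)

The executor hands instances out to worker processes as they become free, which is exactly the task-queue behaviour I am after.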
The current Sage (9.3) documentation includes a thematic tutorial on mpi4py which mentions that mpi4py is supported in Sage by means of an optional package. However, I cannot find it in the list of optional packages for Sage 9.3, and on my current 9.2 install the method optional_packages() does not list it either (nor the other required package, openmpi).
Is mpi4py still supported in a current Sage version? Would it be much effort to build it for the current version, as was done for openmpi in Sage Trac ticket 8537? Or are there other recommendations for task distribution with Sage in a multi-node environment?
The optional mpi4py package was deleted because it did not build with Python 3 at the time. That was long ago, and mpi4py now works with Python 3. Could you install openmpi using your system's package manager and then run
sage -pip install mpi4py
to install mpi4py? I don't know whether the optional packages did any other magic, so you should check how far you get with that.

Also, to use Sage functionality in your Python script you should do
import sage.all
first. Then the above instructions seem to work for me.

Thank you! It did work! I wrote this as an answer, but couldn't accept it, and all the credit should of course go to you :)
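For anyone landing here later: a quick sanity check could look as follows. The file name and the factor call are just examples, and the mpiexec invocation below may need adjusting to your MPI installation and cluster.

# check.py -- sanity check that mpi4py and Sage load together under MPI
import sage.all  # load Sage first, as suggested above
from mpi4py import MPI
from sage.all import factor

comm = MPI.COMM_WORLD
print("rank", comm.Get_rank(), "of", comm.Get_size(), ":", factor(2**64 + 1))

Running it with Sage's Python under MPI, e.g. mpiexec -n 4 sage -python check.py, should print one factorization per rank. Scripts built around MPIPoolExecutor can be launched similarly with mpiexec -n 4 sage -python -m mpi4py.futures script.py, depending on how your MPI handles process spawning.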