# Parallelizing Polyhedral volume computation

For an application, I need to compute volumes of convex polytopes of dimension about 300. The polytopes of interest have a restricted form:

$$P:=\{(x_1,x_2,\ldots,x_n) : 0\leq x_i\leq x_{i+1}\leq L,\ l\leq x_{i+1}-x_i\leq u\}$$ where $L,u,l$ are known positive constants. (The gap constraint is written as $l\leq x_{i+1}-x_i\leq u$, the direction consistent with $x_i\leq x_{i+1}$ and positive $l$.)
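For experimentation with small $n$, the H-representation of $P$ can be written down directly. A minimal sketch in plain Python, assuming the gap constraint reads $l \leq x_{i+1} - x_i \leq u$ (the direction consistent with the ordering constraint), and using the row convention of Sage's `Polyhedron(ieqs=...)`, where a row `[b, a_1, ..., a_n]` encodes $b + a\cdot x \geq 0$; the helper name `p_ieqs` is hypothetical:

```python
def p_ieqs(n, L, l, u):
    """Inequality rows [b, a_1, ..., a_n] encoding b + a.x >= 0,
    in the format accepted by Sage's Polyhedron(ieqs=...)."""
    rows = [[0] + [1] + [0] * (n - 1)]       # x_1 >= 0
    rows.append([L] + [0] * (n - 1) + [-1])  # x_n <= L
    for i in range(n - 1):                   # l <= x_{i+1} - x_i <= u
        a = [0] * n
        a[i], a[i + 1] = -1, 1
        rows.append([-l] + a)                # x_{i+1} - x_i >= l
        b = [0] * n
        b[i], b[i + 1] = 1, -1
        rows.append([u] + b)                 # x_{i+1} - x_i <= u
    return rows
```

In a Sage session one could then try, e.g., `P = Polyhedron(ieqs=p_ieqs(9, 10, 1, 3)); P.volume()` to reproduce the $n=9$ timing.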

On my laptop, the polytope volume computation routine takes about 3 hours for $n=9$, and freezes for larger $n$, making it useless for most interesting cases of my application.

Is there an easy way to parallelize the volume routine?

Perhaps by a straightforward use of the [multiprocessing] module when running on a cluster?

A more advanced use case, which also arises in my application, lies in computing integrals over the polytope: $\int_{P} f(\mathbf{x})\, d\mathbf{x}$, where $f$ is a symmetric function of its arguments. I would be greatly interested in techniques that extend to this case as well.
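As an aside, the chain structure of $P$ admits a change of variables to the gaps $d_i = x_{i+1} - x_i$ (Jacobian 1), assuming the gap constraint reads $l \leq x_{i+1} - x_i \leq u$: integrating out $x_1$ gives $\operatorname{vol}(P) = (u-l)^{n-1}\,\mathbb{E}\bigl[\max(0,\, L - \sum_i d_i)\bigr]$ with $d_i \sim U[l,u]$, which a plain Monte Carlo sketch can estimate even for $n \approx 300$. The helper `mc_volume` below is hypothetical, not part of any library:

```python
import random

def mc_volume(n, L, l, u, samples=100_000, seed=0):
    """Monte Carlo estimate of vol(P) via the gap variables
    d_i = x_{i+1} - x_i ~ U[l, u] (unit Jacobian):
    vol(P) = (u - l)^(n-1) * E[max(0, L - sum_i d_i)],
    since x_1 ranges over [0, L - sum_i d_i] when that interval is nonempty."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        s = sum(rng.uniform(l, u) for _ in range(n - 1))
        acc += max(0.0, L - s)
    return (u - l) ** (n - 1) * acc / samples
```

The same sampling scheme extends to $\int_P f$: draw the gaps, then draw $x_1$ uniformly on $[0, L - \sum_i d_i]$, reconstruct $\mathbf{x}$ by partial sums, and average $f(\mathbf{x})$ with the appropriate weight. Monte Carlo is of course only an estimate, not the exact volume the question asks for.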



In version 7.6.beta6, there is a computational engine, LattE integrale, that is interfaced with Sage; see #18232 and related tickets. It handles your second use case out of the box (technically, once the optional latte_int package is installed). For the volume computation, it adds an alternative to the polytope triangulation method, namely a cone-decomposition-based algorithm; see the LattE documentation for further details. Out of curiosity, can you put the data somewhere? (SageMathCloud?)

( 2017-03-08 22:36:47 +0200 )

Does LattE integrale scale to 300 dimensions?

( 2017-03-09 17:33:59 +0200 )

I was asking for the data because it would be easy for me to give it a try :) But I cannot help you much further than that.

( 2017-03-09 18:37:06 +0200 )


In Sage 9.2+ (and probably for a while before that), you can use the optional package pynormaliz to do parallel volume computations. If you have Sage compiled with

MAKE='make -j64'


on a computer with 64 processors, it should use that many processors (I hope, at least; it worked for me with 8).

The following used 8 threads for me:

sage: P = polytopes.cyclic_polytope(10,20, backend='normaliz')   # optional - pynormaliz
sage: P.volume()  # optional - pynormaliz
904076817914638457425434331054080000000
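Normaliz is parallelized with OpenMP, so the thread count can typically be capped with the standard `OMP_NUM_THREADS` environment variable, set before the library is loaded. A sketch; whether Normaliz honors it depends on how your copy was built:

```python
import os

# Cap the OpenMP thread pool before PyNormaliz/Normaliz is imported.
# OMP_NUM_THREADS is the standard OpenMP control variable; it is not
# specific to Normaliz, and a non-OpenMP build would ignore it.
os.environ.setdefault("OMP_NUM_THREADS", "8")

# In a Sage session one would then construct the polytope as before:
# P = polytopes.cyclic_polytope(10, 20, backend='normaliz')  # optional - pynormaliz
# P.volume()
```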
