ASKSAGE: Sage Q&A Forum

**multiprocessing makes elements of a custom monoid incompatible**
https://ask.sagemath.org/question/63243/multiprocessing-makes-elements-of-a-custom-monoid-incompatible/

**UPD.** In fact, this issue appears with any multiprocessing of monoid elements; it is not specific to the `@parallel` decorator.
---
Let's consider the example of `FiniteMonoidFromMultiplicationTable` defined in https://ask.sagemath.org/question/32064/
I define this monoid and a list of three of its elements, and compute their product as follows:
    F = FiniteMonoidFromMultiplicationTable([[0, 0, 0], [0, 1, 1], [0, 1, 2]])
    E1 = [ F(0), F(1), F(2) ]
    print('product1:', prod(E1))
It prints `product1: 0` - so far so good.
---
Now let's define a parallel `id` function that simply returns its argument, and apply it (in parallel fashion) to `E1` to obtain the list `E2`:
    @parallel
    def id(x):
        return x

    E2 = [ e for _, e in id(E1) ]
    print('product2:', prod(E2))
However, `prod(E2)` here results in the error
> TypeError: unsupported operand parent(s) for *: 'Finite monoid on 3 elements given by its multiplication table' and 'Finite monoid on 3 elements given by its multiplication table'
which is rather weird. While `E2[0].parent() == E2[1].parent()` evaluates to `True` here, `E2[0].parent() is E2[1].parent()` evaluates to `False`, and that is probably the cause of the error above.
So, my questions are: why do the elements of `E2` have different parents, and what needs to be changed to avoid the described issue?
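For what it's worth, the `==`-but-not-`is` behavior is exactly what a round trip through pickling produces for a value-based class; here is a minimal pure-Python sketch of that mechanism (an illustration with a made-up `TableParent` class, not Sage code):

```python
import pickle

class TableParent:
    """Stand-in for a parent object that compares by value."""
    def __init__(self, table):
        self.table = tuple(map(tuple, table))
    def __eq__(self, other):
        return isinstance(other, TableParent) and self.table == other.table
    def __hash__(self):
        return hash(self.table)

p = TableParent([[0, 0, 0], [0, 1, 1], [0, 1, 2]])
q = pickle.loads(pickle.dumps(p))  # what a worker process sends back

print(p == q)  # True: equal by value
print(p is q)  # False: a distinct copy, so identity-based parent checks fail
```

In Sage, parents deriving from `UniqueRepresentation` are deduplicated on unpickling, which is one reason built-in parents survive multiprocessing while ad-hoc ones may not.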
PS. Just in case, here is [the full code at Sagecell](https://sagecell.sagemath.org/?z=eJyNVE2L2zAQvRv8H4a9rAzGJDkuG9hLDD0sFFroIQ1CG09SgSy5srJNKP3vHUn-zKY0xgRLGr335s1MDtbU0IojFq2zp707WSwaYVE7kHVjrIPPYZUmhyFyLxwejZXYFi3W8mjNqWn78C_DzvTKCI4Ka8Ljv6xoGrT9tU3c_hZ30yRN9kq0LZRSS4evRhtZlQT3elJONkqSBmn0V_GmkEWF2VOaAD0VHoBzf41z1qI65OB8WH_uH79d8LAN63g8Hka44gqjS_qyHhNkWRHlsSzzinv2xpqKsu0unnO4TLktkg86SGATHdtz8S7UCXfbS_cxw-QWGxu13AB7iDqgDj6B0fD7D3ROt3CU76jh7QKSFvXMwJj7Q3EwthaOKdRTTfO0eF-6vdGxnsb27lTCiakuvyZnpXYsHI0nVFSkei_geR2jnuGa9aZXQ-OEvpjSzjSST-Q98n9axe7oqHy8NTxThfmVMRx_Dn1i3A-0N3jdpcGoCWQbVzEUhK7g-wfGWYuuI2y3nnNr_C-3Ni7gDUA9QpyxbvbYfAanOHFYq-g91fWTpmEYYe5w_7qa8T-GZXlnbGj5fFL6NCmJ6J7x324XOfh3l4P_XNI7fq52fpA2SwLbQskWxFiyZfhdZbBLk8b6Ln3shnb59JiHAWabZQahzC-kVSiFKk18krJi5z6zLqOzD9usAgUCzRLwHKn5fayH-cCyGllIRPYX9T-s9g==&lang=sage&interacts=eJyLjgUAARUAuQ==) illustrating the issue.

*Asked by Max Alekseyev, Thu, 14 Jul 2022*

---

**How to use cached_function and parallel together**
https://ask.sagemath.org/question/55498/how-to-use-cached_function-and-parallel-together/

I have a function `f(a,b,c)` whose output values I'd like to save, so I'm currently using the `@cached_function` decorator to do so:
    @cached_function
    def f(a,b,c):
        m = g(a+b)
        n = h(c)
        out = (expensive calculation involving m and n)
        return out

    @cached_function
    def g(d):
        out = (expensive calculation involving d)
        return out

    @cached_function
    def h(c):
        out = (expensive calculation involving c)
        return out
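As a rough pure-Python analog of this setup (an assumption for illustration: it uses `multiprocessing` and `functools.lru_cache` rather than Sage's `@parallel` and `@cached_function`, with toy stand-ins for the expensive steps), one tuple per call can carry all three parameters:

```python
from functools import lru_cache
from multiprocessing import Pool

@lru_cache(maxsize=None)
def g(d):
    return d * d   # stand-in for an expensive calculation

@lru_cache(maxsize=None)
def h(c):
    return c + 1   # stand-in for an expensive calculation

def f(triple):
    a, b, c = triple           # one tuple per call carries all parameters
    return g(a + b) + h(c)

if __name__ == '__main__':
    triples = [(1, 2, 3), (4, 5, 6), (1, 2, 3)]
    with Pool(2) as pool:
        print(pool.map(f, triples))  # [13, 88, 13]
```

Note that each forked worker keeps its own `lru_cache`, so `g` and `h` may still be recomputed on the same input in different processes; a genuinely shared cache needs something like `multiprocessing.Manager`.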
I would now like to calculate the values of this function for a number of triples `(a,b,c) = (a1,b1,c1), (a2,b2,c2), ..., (aN,bN,cN)`, and since there are many such triples I'd like to calculate them in parallel. From some Googling it seems that I should use the `@parallel` decorator, but
1. How do I use the parallel decorator to calculate values of a function which has more than one input parameter?
2. Is it possible for the calculations of these individual values of `f` to share memory somehow? The calculation of my function `f` depends on other expensive functions `g` and `h`, so I have also used the `@cached_function` decorator on them; it would be preferable for `g` and `h` not to be run on the same input twice if multiple processes are used to compute `f`.

*Asked by Bakerbakura, Fri, 29 Jan 2021*

---

**Parallel computation for different functions**
https://ask.sagemath.org/question/50013/parallel-computation-for-different-functions/

For a single function with a list of inputs, the `@parallel` decorator can be used for parallel computation. I am wondering whether it is possible to do parallel computation for different functions.
A simple example is calculating the difference f - g of two independent functions f and g. How can I ask SageMath to compute the values of f and g simultaneously and then calculate the difference?
My time measurements suggest that when calculating the difference f - g, SageMath actually computes f and g one after the other and then takes the difference.
I can think of a naive approach using the `@parallel` decorator: create a wrapper function whose input is the name of the function I want to compute, then pass a list of function names to get a generator of outputs. This may work if the final result doesn't depend on the order of the outputs, but it does not work if the order matters, for example when taking the difference.
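For the specific f - g case, one hedged sketch outside of `@parallel` (an assumption: it uses plain `concurrent.futures` rather than anything Sage-specific, with toy stand-ins for f and g) is to submit each function as its own task and let the futures keep track of which result is which:

```python
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return x * x      # stand-in for the first expensive function

def g(x):
    return x + 100    # stand-in for the second expensive function

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as ex:
        fut_f = ex.submit(f, 7)   # both tasks run concurrently
        fut_g = ex.submit(g, 7)
        # .result() blocks, and each future is tied to its own function,
        # so the order never gets mixed up even if g finishes first
        diff = fut_f.result() - fut_g.result()
    print(diff)  # 49 - 107 = -58
```

A full flow chart generalizes to a DAG of futures: submit each node as soon as the futures for its inputs have resolved.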
In general, suppose I have a flow chart for the computation; in other words, I already know which jobs can be parallelized and how the information flows to the next stage. What is the best way to implement such a flow chart?

*Asked by dzhy444, Fri, 21 Feb 2020*

---

**Several matrix multiplications over binary fields**
https://ask.sagemath.org/question/38251/several-matrix-multiplications-over-binary-fields/

Hi! I need to compute the product of several matrices with entries in a binary field. Since the products do not depend on each other, the computations can easily be distributed among the processors. However, the timings of the sequential code and the multi-process code are almost the same!
Here is some code for a simple test that represents what I want to achieve:
    from sage.all import *
    import multiprocessing as mp
    from multiprocessing.pool import ThreadPool, cpu_count
    import os
    import time

    def matrix_mult(A, B):
        start = time.time()
        A*B
        return time.time() - start

    def matrix_mult_proc(A, B, n):
        avg = 0.0
        for i in range(n):
            start = time.time()
            A*B
            avg += (time.time() - start)
            # print('time: {}'.format(time.time() - start))
        print('Average per call proc {}: {}'.format(os.getpid(), avg / n))

    @parallel
    def matrix_mult_parallel(A, B):
        start = time.time()
        A*B
        return time.time() - start

    iters = 40
    n = 100
    Fq = GF(2**16, 'X')
    MS = MatrixSpace(Fq, n)
    A = MS.random_element()

    start = time.time()
    tms = map(lambda i: matrix_mult(A, A), range(iters))
    print('\n*** sequential:\nAverage per call: {}\nTotal: {}'.format(sum(tms) / iters, time.time() - start))

    nthreads = cpu_count()
    pool = ThreadPool(nthreads)
    start = time.time()
    tms = pool.map(lambda i: matrix_mult(A, A), range(iters))
    print('\n*** multithread {} threads:\nAverage per call: {}\nTotal: {}'.format(nthreads, sum(tms) / iters, time.time() - start))

    nprocs = cpu_count()
    procs = []
    start = time.time()
    print('\n*** multiproc {} procs:'.format(nprocs))
    for i in range(nprocs):
        p = mp.Process(target=matrix_mult_proc, args=(A, A, iters//nprocs))
        procs.append(p)
        p.start()
    for p in procs:
        p.join()
    print('Total: {}'.format(time.time() - start))

    args = [(A, A)]*iters
    start = time.time()
    tms = map(lambda e: e[1], list(matrix_mult_parallel(args)))
    print('\n*** sage_parallel\nAverage per call: {}\nTotal {}'.format(sum(tms) / iters, time.time() - start))
Results are as follows:

For sequential:

    Average time of a matrix multiplication: 0.279646992683
    Total time for 40 multiplications: 11.1862668991

For ThreadPool with 4 threads maximum:

    Average time of a matrix multiplication: 0.280531394482
    Total time for 40 multiplications: 11.2248089314

For 4 processes on a 4-core computer (2 physical):

    Average time of a matrix multiplication: 1.13726825714
    Total time for 40 multiplications: 11.7641329765

With Sage's @parallel decorator:

    Average time of a matrix multiplication: 1.1256641984
    Total time for 40 multiplications: 11.7641329765
I don't understand why the multiplications seem to take an amount of time proportional to the number of processes. I see the same behavior on an 8-core machine.
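One plausible contributor, offered here as an assumption rather than a diagnosis, is serialization: every argument and result of a worker call has to be pickled across the process boundary, and for large matrices that cost can rival the multiplication itself. A rough NumPy probe of the effect (a real dense float matrix stands in for the GF(2^16) matrices):

```python
import pickle
import time

import numpy as np

A = np.random.rand(512, 512)

t0 = time.time()
blob = pickle.dumps(A)       # shipping A to a worker...
B = pickle.loads(blob)       # ...and receiving it on the other side
ship = time.time() - t0

t0 = time.time()
C = A @ A                    # the actual work
mult = time.time() - t0

print('round-trip pickle: {:.4f}s, multiply: {:.4f}s'.format(ship, mult))
```

(The flat `ThreadPool` timing likely has a different cause: CPython's GIL serializes pure-Python work, and whether Sage releases the GIL during this matrix multiplication is not something this sketch settles.)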
Hope someone can explain. Thanks in advance.

*Asked by egonzalez, Fri, 14 Jul 2017*

---

**Caching with @parallel**
https://ask.sagemath.org/question/37497/caching-with-parallel/

So I have a function which I want to run in parallel, as I'll need to run it a few million times. The thing is, I know that quite a few of those runs will involve duplicate information, so I want to speed up the process even more by skipping the ones I don't need. Here's what I have so far:
    @parallel(ncpus=7)
    def parallel_function(initial, allElts, e1, e2):
        untested = set([initial])
        verified = set([])
        potentials = getPotentials(e1, e2)
        while untested:
            curTest = untested.pop()
            verified.add(curTest)
            for e in allElts:
                newElt = someProp(potentials, curTest, e)
                if newElt not in verified:
                    untested.add(newElt)
        return verified

    def testElts(allElts, potentials):
        initial = allElts[0]
        returnInfo = {}
        for tup in parallel_function([(initial, allElts, e1, e2) for e1 in allElts for e2 in allElts]):
            returnInfo[tup[0][0][2]] = tup[1]
        return returnInfo
***NOTE*** The above is an example. I've tried to simplify it to give a better feel for the functions I'm working with, so consider it more pseudocode than real code in case I missed something.
We're assuming that allElts has at least 1 million elements, so this function will take a while. Notice that parallel_function is fairly expensive itself, but given any two elements, the getPotentials function might return similar results. So basically I want to skip my while loop if I've already tested these "potentials" before.
I've tried looking into how caching works in parallel environments, but there doesn't seem to be much. Would caching in this case be possible with `@cached_function`? If not, can I use a dictionary that stores the values and access the information that way? Since parallel processing creates forks, I assume that means the dictionary is not necessarily accessible by every process. Or is that not accurate?
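On the last point: yes, after a fork each worker gets its own copy of the parent's memory, so an ordinary dictionary (or a per-process function cache) is not shared. One workaround, sketched here as an assumption with plain `multiprocessing` rather than Sage's `@parallel` machinery, is a `multiprocessing.Manager` dictionary visible to all workers:

```python
from multiprocessing import Manager, Pool

def expensive(key):
    return key * key  # stand-in for the costly "potentials" computation

def worker(args):
    key, cache = args
    if key in cache:          # a hit here is visible across all processes
        return cache[key]
    val = expensive(key)
    cache[key] = val
    return val

if __name__ == '__main__':
    with Manager() as mgr:
        cache = mgr.dict()    # proxy object shared by every worker
        with Pool(2) as pool:
            out = pool.map(worker, [(k % 3, cache) for k in range(6)])
    print(out)  # [0, 1, 4, 0, 1, 4]
```

The proxy adds inter-process communication overhead on every lookup, so it only pays off when the cached computation is genuinely expensive.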
Any help on this would be beneficial. Thanks.

*Asked by aram.dermenjian, Wed, 03 May 2017*

---

**How to efficiently calculate a sum of arrays with numpy and the @parallel decorator?**
https://ask.sagemath.org/question/36437/how-to-efficiently-calculate-a-sum-of-arrays-with-numpy-and-parallel-decorator/

Hello!
I have an algorithm that processes a huge array in chunks. Each processing operation results in a matrix of size N*N, and I need to calculate the sum of these matrices. For simplicity, assume the processing function does almost nothing and requires no input; it just returns a constant array. In that case a working example looks like this:
    import datetime
    import numpy as np

    N = 1024 * 2
    K = 256

    def f():
        return np.ones((N, N), dtype=np.complex128)

    buffer = np.zeros((N, N), dtype=np.complex128)
    start_time = datetime.datetime.now()
    for i in range(K):
        buffer += f()
    print 'Elapsed time:', (datetime.datetime.now() - start_time)
Execution takes about 5 seconds on my PC. Now, as the function f becomes more complex, I would like to run it in parallel, so I modify the code as follows:
    import datetime
    import numpy as np

    N = 1024 * 2
    K = 256

    @parallel
    def f(_):
        return np.ones((N, N), dtype=np.complex128)

    buffer = np.zeros((N, N), dtype=np.complex128)
    start_time = datetime.datetime.now()
    for o in f(range(K)):
        buffer += o[1]
    print 'Elapsed time:', (datetime.datetime.now() - start_time)
And now it takes about 26 seconds to calculate! What am I doing wrong, and what causes such a huge overhead? (It looks silly: if the cost of collecting the results of f() across parallel processes exceeds the cost of one iteration of f() itself, I'd be better off running f() without parallelism at all.)
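The overhead is consistent with each of the K calls pickling a full N*N complex matrix back to the parent. One hedged mitigation, sketched here with plain `multiprocessing` rather than `@parallel` (and with smaller constants, as an assumption for illustration), is to let each worker sum its own chunk locally so that only one matrix per worker crosses the process boundary:

```python
import numpy as np
from multiprocessing import Pool

N = 256   # kept small for the sketch
K = 32

def partial_sum(count):
    # accumulate locally; only the final N x N array is pickled back
    acc = np.zeros((N, N), dtype=np.complex128)
    for _ in range(count):
        acc += np.ones((N, N), dtype=np.complex128)
    return acc

if __name__ == '__main__':
    nproc = 4
    with Pool(nproc) as pool:
        total = sum(pool.map(partial_sum, [K // nproc] * nproc))
    print(total[0, 0])  # (32+0j)
```

This reduces the number of large arrays sent through the result pipe from K down to the number of workers.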
*Asked by Eugene, Thu, 02 Feb 2017*

---

**Random numbers in parallel calculations**
https://ask.sagemath.org/question/26046/random-numbers-in-parallel-calculations/

Assume I have a function that requires random numbers (noise), like:

    import numpy as np

    @parallel
    def foo(i):
        print np.random.random()
Four sequential runs yield the desired output:

    for i in range(4):
        foo(i)

    0.961718217227
    0.909042125122
    0.736138778296
    0.149902522071
But the parallel run produces only one random value:

    list(foo(range(4)))

    0.633760965726
    0.633760965726
    0.633760965726
    0.633760965726
    [(((0),{}),None),(((2),{}),None),(((1),{}),None),(((3),{}),None)]
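The four identical values arise because the fork duplicates the parent's NumPy RNG state into every child. One common workaround, sketched here with plain `multiprocessing` as an assumption (the post itself uses Sage's `@parallel`), is to reseed inside each worker:

```python
import os

import numpy as np
from multiprocessing import Pool

def foo(i):
    # derive a per-call seed so forked workers do not share RNG state;
    # seeding from (pid, task index) keeps every call distinct
    rng = np.random.default_rng([os.getpid(), i])
    return rng.random()

if __name__ == '__main__':
    with Pool(4) as pool:
        vals = pool.map(foo, range(4))
    print(sorted(vals))
```

With the per-call seeds, the four printed values are distinct (with overwhelming probability) instead of four copies of one draw.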
How do I properly generate random values from within a parallel function?

*Asked by Eugene, Fri, 06 Mar 2015*

---

**Performance issues with parallel decoration**
https://ask.sagemath.org/question/26007/performance-issues-with-parallel-decoration/

Experimenting with `@parallel` resulted in unexpected performance issues in Sage 6.4.1. Here is a very simple example:

    @parallel(p_iter='multiprocessing', ncpus=6)
    def f(n):
        return factor(n)

    t = walltime()
    r = range(1, 1000000)
    p = sorted(list(f(r)))
    print walltime(t)

    82.0724880695

    t = walltime()
    for i in range(1, 1000000):
        factor(i)
    print walltime(t)

    12.1648099422
I have 6 physical cores, yet the serial calculation runs more than 6 times faster, even though I can see 6 instances of Python running on my computer. Maybe it is pilot error, but I have the following questions:
1) Does Sage require a special way of compiling it in order to take full advantage of @parallel?
2) In this case using 'fork' is even worse, it never completes the calculation.
3) How does @parallel distribute the calculations? Since, in general, it takes significantly longer for factor() to process larger numbers, it seems that assigning the case n=1,7,13,... to core_0, n=2,8,14,... to core_1, etc., makes sense. Shuffling the original serial list given to f(n) also seems plausible. However, dividing the whole serial range to 6 intervals and assigning them to the 6 cores, respectively, would be a bad choice and for most of the time only one or two python processes would do anything. Does anyone know what scheme is used in Sage?
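On the overhead generally: when each task is as small as a single `factor(n)` call, per-task pickling and dispatch can swamp the actual work. A hedged sketch of batching with a round-robin split, using plain `multiprocessing` and a toy trial-division stand-in for `factor()` (an illustration only; it does not settle what scheme Sage's `@parallel` actually uses):

```python
from multiprocessing import Pool

def count_prime_factors(n):
    # toy stand-in for factor(): trial division, counted with multiplicity
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def factor_block(block):
    # batch many small tasks into one process call to amortize IPC cost
    return [count_prime_factors(n) for n in block]

if __name__ == '__main__':
    nums = range(2, 100000)
    # round-robin split: spreads the slow large n across all cores,
    # unlike cutting the range into consecutive intervals
    blocks = [nums[i::4] for i in range(4)]
    with Pool(4) as pool:
        per_block = pool.map(factor_block, blocks)
    print(sum(len(b) for b in per_block))  # 99998
```

With one block per core there are only 4 dispatches and 4 result pickles instead of a million.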
Thanks for any suggestions.
*Asked by ikol, Tue, 03 Mar 2015*