ASKSAGE: Sage Q&A Forum - RSS feed
https://ask.sagemath.org/questions/
Q&A Forum for Sage (en). Copyright Sage, 2010. Some rights reserved under a Creative Commons license.
Wed, 14 Jul 2021 10:13:53 +0200

Problem with precompute and append in parallel computing
https://ask.sagemath.org/question/58009/problem-with-precompute-and-append-in-parallel-computing/

Here is a parallel computation involving precompute and append:
sage: L=[]
sage: @cached_function
....: def function(i):
....:     L.append(i)
....:     print(L)
....:
sage: function.precompute(range(5),num_processes=2)
[0]
[1]
[2]
[3]
[4]
The problem is that the list `L` is still empty at the end: even though `L` is global, the `L.append(i)` calls did not take effect globally.
sage: L
[]
How to solve this problem?

Sébastien Palcoux, Wed, 14 Jul 2021 10:13:53 +0200
https://ask.sagemath.org/question/58009/

Is parallel computation with mpi4py still supported?
https://ask.sagemath.org/question/57732/is-parallel-computation-with-mpi4py-still-supported/

I am working with Sage code which calculates many independent instances of a problem and is thus fully parallelizable. Now, I would like to port it to a multi-node cluster environment.
My current idea is to use a main process which manages a problem instance queue from which worker processes fetch instances, solve them and save their results to the file system. In the multi-node environment, I understand that some form of message passing is needed to communicate the problem instances to the workers (ruling out task queue management with the `@parallel` decorator). I have found that [the Python package `mpi4py`](https://mpi4py.readthedocs.io/en/stable/) provides Python bindings for the Message Passing Interface. It also implements a very convenient [MPIPoolExecutor class](https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor) which manages such a task queue.
The current Sage (9.3) documentation includes a [thematic tutorial on mpi4py](https://doc.sagemath.org/html/en/thematic_tutorials/numerical_sage/mpi4py.html) mentioning that `mpi4py` is supported in Sage by means of an optional package. However, I do not find it in the [list of optional packages for Sage 9.3](https://doc.sagemath.org/html/en/reference/spkg/index.html), and on my current v9.2 install the method `optional_packages()` does not list it either (nor the required `openmpi` package).
Is `mpi4py` still supported in a current Sage version? Would it be much effort to try and build it for the current version like it was done for `openmpi` in [Sage Trac ticket 8537](https://trac.sagemath.org/ticket/8537)?
Or are there other recommendations for task distribution with Sage in a multi-node environment?

FabianG, Thu, 24 Jun 2021 14:40:41 +0200
https://ask.sagemath.org/question/57732/

How to use cached_function and parallel together
https://ask.sagemath.org/question/55498/how-to-use-cached_function-and-parallel-together/

I have a function `f(a,b,c)` which I'd like to save the output values of, so I'm currently using the `@cached_function` decorator to do so:
@cached_function
def f(a,b,c):
    m = g(a+b)
    n = h(c)
    out = (expensive calculation involving m and n)
    return m+n

@cached_function
def g(d):
    out = (expensive calculation involving d)
    return out

@cached_function
def h(c):
    out = (expensive calculation involving c)
    return out
I would now like to calculate the values of this function for a number of triples `(a,b,c) = (a1,b1,c1), (a2,b2,c2), ..., (aN,bN,cN)`, and since there are many such triples I'd like to calculate them in parallel. From some Googling it seems that I should use the `@parallel` decorator, but
1. How do I use the parallel decorator to calculate values of a function which has more than one input parameter?
2. Is it possible for the calculations of these individual values of `f` to share memory somehow? The calculation of my function `f` depends on other expensive functions `g` and `h`, so I have also used the `@cached_function` decorator on them; it would be preferable for `g` and `h` not to be run on the same input if multiple processes are used to perform the calculation of `f`.

Bakerbakura, Fri, 29 Jan 2021 16:49:37 +0100
https://ask.sagemath.org/question/55498/

Running a for cycle in parallel
https://ask.sagemath.org/question/56692/running-a-for-cycle-in-parallel/

I have an algorithm that I need to speed up.
I would like to parallelize computations
the main part I need to speed up is the following
    for r2 in Subsets(V2,k-4):
        check2(fuse(v,r2),k)
This gives as output three values b1, b2, b3, which are global; for any input, check2 either leaves them unchanged or increases them by 1.
I tried looking into `@parallel`, but it doesn't seem to fit my requirements.
How can I parallelize that (I can use up to 16 cores)?
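One generic approach (a sketch with the stdlib, not Sage-specific; `check2_counts` is a hypothetical stand-in for `check2`): since globals mutated in worker processes do not propagate back to the parent, have each task return its three increments and sum them in the parent.

```python
from concurrent.futures import ProcessPoolExecutor

def check2_counts(r2):
    # Hypothetical stand-in for check2(fuse(v, r2), k): return the three
    # 0-or-1 increments instead of mutating the globals b1, b2, b3.
    q1 = 1 if sum(r2) % 2 == 0 else 0
    q2 = q1
    q3 = 1
    return q1, q2, q3

if __name__ == "__main__":
    subsets = [(1, 2), (2, 3), (3, 5)]  # stand-in for Subsets(V2, k-4)
    b1 = b2 = b3 = 0
    with ProcessPoolExecutor(max_workers=16) as executor:
        # The parent, not the workers, accumulates the totals.
        for q1, q2, q3 in executor.map(check2_counts, subsets):
            b1 += q1
            b2 += q2
            b3 += q3
    print(b1, b2, b3)
```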
This is the whole code (minus some parts, since I can't publish it all):
P2=[]
def P_2(q,n,S): # creates the set P2
I2R=[]
I2S=[]
I3=[]
I3R=[]
I6=[]
I8=[]
def IG(k): # creates a set of index vectors
def coll(x,y,z): # checks if 3 given tuples are collinear
    A=[x,y,z]
    M = matrix(A)
    if M.determinant()==0:
        return 'true'
def cone(v): # defines a function to be used in conic()
def conic(x1,x2,x3,x4,x5,x6): # checks if 6 given tuples are on a conic
    A=[cone(x1),cone(x2),cone(x3),cone(x4),cone(x5),cone(x6)]
    M = matrix(A)
    if M.determinant()==0:
        return 'true'
def cube(v): # defines a function to be used in SIcubic()
def dxcube(v): # defines a function to be used in SIcubic()
def dycube(v): # defines a function to be used in SIcubic()
def dzcube(v): # defines a function to be used in SIcubic()
def SIcubic(x1,x2,x3,x4,x5,x6,x7,x8): # checks if 8 given tuples are on a special cubic
    v1='false'
    v=[x1,x2,x3,x4,x5,x6,x7,x8]
    for a in range(8):
        x=v[a]
        A=[]
        A.append(dxcube(x))
        A.append(dycube(x))
        A.append(dzcube(x))
        for t in range(8):
            if t!=a:
                y=v[t]
                A.append(cube(y))
        M = matrix(A)
        if M.determinant()==0:
            v1='true'
    return v1
b1=0
b2=0
b3=0
def check(s,k): # checks a vector s of length k
    global b1
    global b2
    global b3
    IG(k)
    q1=0
    q2=0
    q3=0
    for i3 in I3:
        a0=i3[0]
        a1=i3[1]
        a2=i3[2]
        if coll(s[a0],s[a1],s[a2])=='true':
            q1=1
            q2=1
            q3=1
            break
    if q2==0:
        if k>5:
            if q==0:
                for i6 in I6:
                    a0=i6[0]
                    a1=i6[1]
                    a2=i6[2]
                    a3=i6[3]
                    a4=i6[4]
                    a5=i6[5]
                    if conic(s[a0],s[a1],s[a2],s[a3],s[a4],s[a5])=='true':
                        q2=1
                        q3=1
                        break
    if q3==0:
        if k>7:
            for i8 in I8:
                a0=i8[0]
                a1=i8[1]
                a2=i8[2]
                a3=i8[3]
                a4=i8[4]
                a5=i8[5]
                a6=i8[6]
                a7=i8[7]
                if SIcubic(s[a0],s[a1],s[a2],s[a3],s[a4],s[a5],s[a6],s[a7])=='true':
                    q3=1
                    break
    b1=b1+q1
    b2=b2+q2
    b3=b3+q3
    return b1, b2, b3
b1=0
def check1(s,a): # checks a vector s and an element a
    global b1
    q1=0
    for i2 in I2S:
        a0=i2[0]
        a1=i2[1]
        if coll(s[a0],s[a1],a)=='true':
            q1=1
            break
    b1=b1+q1
    return b1
b1=0
b2=0
b3=0
def check2(s,k): # checks a vector s of length k
    global b1
    global b2
    global b3
    q1=0
    q2=0
    q3=0
    for i3 in I3R:
        a0=i3[0]
        a1=i3[1]
        a2=i3[2]
        if coll(s[a0],s[a1],s[a2])=='true':
            q1=1
            q2=1
            q3=1
            break
    if q2==0:
        if k>5:
            for i6 in I6:
                a0=i6[0]
                a1=i6[1]
                a2=i6[2]
                a3=i6[3]
                a4=i6[4]
                a5=i6[5]
                if conic(s[a0],s[a1],s[a2],s[a3],s[a4],s[a5])=='true':
                    q2=1
                    q3=1
                    break
    if q3==0:
        if k>7:
            for i8 in I8:
                a0=i8[0]
                a1=i8[1]
                a2=i8[2]
                a3=i8[3]
                a4=i8[4]
                a5=i8[5]
                a6=i8[6]
                a7=i8[7]
                if SIcubic(s[a0],s[a1],s[a2],s[a3],s[a4],s[a5],s[a6],s[a7])=='true':
                    q3=1
                    break
    b1=b1+q1
    b2=b2+q2
    b3=b3+q3
    return b1, b2, b3
t=1
def TOT(P,q,n,k): # computes a polynomial depending on q,n,k,P
def fuse(s,t): # fuses two vectors into one (see response to answer)
    sn=[]
def Count(S,q,n,k): # main function
    IG(k)
    global b1
    global b2
    global b3
    l1=0
    l2=0
    l3=0
    y=q^n
    s=len(S)
    v=[(0,0,1),(0,1,0),(1,0,0),(1,1,1)]
    V1=copy(S)
    V1.remove((1,0,0))
    V1.remove((0,1,0))
    V1.remove((0,0,1))
    V1.remove((1,1,1))
    if k<4:
        b1=0
        for s in Subsets(S,k):
            check(s,k)
        l1=b1*factorial(k)
    if k>4:
        V2=copy(V1)
        for r1 in V1:
            b1=0
            b2=0
            b3=0
            check1(v,r1)
            if b1==1:
                V2.remove(r1)
        s2=len(V2)
        l1=l1+binomial(s-4,k-4)-binomial(s2,k-4)
        l2=l2+binomial(s-4,k-4)-binomial(s2,k-4)
        l3=l3+binomial(s-4,k-4)-binomial(s2,k-4)
        b1=0
        b2=0
        b3=0
        for r2 in Subsets(V2,k-4):
            check2(fuse(v,r2),k)
        l1=l1+b1
        l2=l2+b2
        l3=l3+b3
        l1=l1*(y^2+y+1)*(y^3-y)*(y^3-y^2)*factorial(k-4)
        l2=l2*(y^2+y+1)*(y^3-y)*(y^3-y^2)*factorial(k-4)
        l3=l3*(y^2+y+1)*(y^3-y)*(y^3-y^2)*factorial(k-4)
    g1=t-l1
    g2=t-l2
    g3=t-l3
    print('There are ', g1/factorial(k) , ' unordered', k,'-tuples in general linear position')
    print('There are ', g1 , ' ordered', k ,'-tuples in general linear position')
    print('There are ', g2/factorial(k) , ' unordered', k,'-tuples in general linear and conic position')
    print('There are ', g2 , ' ordered', k ,'-tuples in general linear and conic position')
    print('There are ', g3/factorial(k) , ' unordered', k,'-tuples in general position')
    print('There are ', g3 , ' ordered', k ,'-tuples in general position')
def Active(q,n,k): # every function I need to run, reduced into one
    P2=[]
    P_2(q,n,P2)
    TOT(P2,q,n,k)
    Count(P2,q,n,k)

Alain Ngalani, Sat, 17 Apr 2021 19:21:14 +0200
https://ask.sagemath.org/question/56692/

parallel matrix rank
https://ask.sagemath.org/question/52889/parallel-matrix-rank/

I have* a (relatively) large non-square symbolic matrix $M$ which I would like to calculate the rank of. The entries of the matrix are polynomials with coefficients in a (small) finite field, and the matrix is relatively sparse, although currently it's being treated as a non-sparse matrix.
The rank could simply be calculated as `M.rank()`, but I'm not sure whether the matrix rank algorithm implemented works in parallel. Could someone clarify whether parallel calculation of matrix rank is implemented in Sage? If it is implemented but isn't used by default, how do I make use of the parallel version?
Thanks!
(*) It's more accurate to say that I'm *going to have* such a matrix; my initial version of the code was written in Mathematica, and I'm currently porting the code to Sage after too many headaches with Mathematica's handling of finite fields, which could definitely be improved upon. So unfortunately I'm not yet at the stage of running `M.rank()` and checking whether or not it runs in parallel.

Bakerbakura, Thu, 06 Aug 2020 13:08:00 +0200
https://ask.sagemath.org/question/52889/

Parallelizing symbolic matrix*vector computations for sparse matrices and vectors
https://ask.sagemath.org/question/51560/parallelizing-symbolic-matrixvector-computations-for-sparse-matrices-and-vectors/

Hi everyone,
I am currently running computations which involve a lot of operations of the type matrix*vector, where both objects are sparse and the result can be expected to be sparse as well. The matrices' base field is the symbolic ring. If I drop certain normalizations I could also work over QQ, I think.
The above operations appear sequentially. Although there are some options for parallelizing these at least in part, the biggest acceleration would be achieved if the multiplication itself could be parallelized. Is there a way to do this in Sage?
Greetings and thank you very much.
Edit: Work in progress
Below I put some code that I produced so far which is very far from a solution but produces new slow-downs that I do not understand. The strategy is to save a sparse matrix by its nonzero rows which is facilitated by the following code snippet:
def default_to_row(M):
    rows_of_M=M.rows()
    relevant=[]
    for r in range(len(rows_of_M)):
        row=rows_of_M[r]
        if dot_sparse(row,row)!=0:
            relevant.append([r,row])
    return relevant
The row*vector multiplications are facilitated by the function
# indexed_row of type [i,v] where v is a sparse vector and i is
# the row in the matrix it corresponds to.
def mult_sparse(indexed_row,v2):
    res=0
    for k in indexed_row[1].dict():
        res=res+indexed_row[1][k]*v2[k]
    return [indexed_row[0],res]
and regular vector multiplication is given by
def dot_sparse(v,w):
    res=0
    for k in v.dict():
        res=res+v[k]*w[k]
    return res
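These helpers rely only on a `.dict()` of nonzero entries and on indexing, so the pattern can be exercised without Sage; a self-contained check using a hypothetical dict-backed stand-in for a sparse vector:

```python
class SparseVec:
    """Hypothetical minimal stand-in for a Sage sparse vector."""
    def __init__(self, entries):
        self._d = dict(entries)  # index -> nonzero value
    def dict(self):
        return self._d
    def __getitem__(self, k):
        return self._d.get(k, 0)

def dot_sparse(v, w):
    # Iterate only over v's nonzero support, as in the snippet above.
    res = 0
    for k in v.dict():
        res = res + v[k] * w[k]
    return res

v = SparseVec({0: 2, 3: 5})
w = SparseVec({3: 4, 7: 1})
print(dot_sparse(v, w))  # 2*0 + 5*4 = 20
```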
Now to the actual benchmarks:
import concurrent.futures
import time
import itertools
dim=5000
M=random_matrix(QQ,nrows=dim,ncols=dim,sparse=True,density=0.0001)
M_by_rows=default_to_row(M)
v=vector(QQ,dim,list(range(dim)),sparse=True)
print(len(M_by_rows))
Using the concurrent.futures library yields the following performance:
start = time.perf_counter()
with concurrent.futures.ProcessPoolExecutor() as executor:
    start2 = time.perf_counter()
    results = executor.map(mult_sparse,M_by_rows,itertools.repeat(v))
    finish2 = time.perf_counter()
    coefficients={}
    for result in results:
        if result[1]!=0:
            coefficients[result[0]]=result[1]
    new_v=vector(QQ,v.degree(),coefficients,sparse=True)
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} seconds with Pooling.')
print(f'The executor alone took {round(finish2-start2, 2)} seconds.')
where the dimensions are adjusted so that the computation does not exceed my RAM capacity, because every spawned subprocess takes copies (at least of v). A comparison with serial approaches, both the standard one and the serial version of my particular strategy, shows that the parallel version is significantly slower (18 and 7 seconds in the above outputs) while the serial versions do not differ (0.02 seconds in both cases):
start = time.perf_counter()
new_v=M*v
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s) for the regular computation')

start = time.perf_counter()
coefficients={}
for row in M_by_rows:
    coefficients[row[0]]=mult_sparse(row,v)[1]
new_v=vector(QQ,v.degree(),coefficients,sparse=True)
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s) in a serial computation')

print(f'Check if the vectors coincide:')
test=new_v-M*v
n=dot_sparse(test,test)
if n==0:
    print(True)
else:
    print(False)

Robin_F, Tue, 26 May 2020 18:00:39 +0200
https://ask.sagemath.org/question/51560/

Python MultiProcessing module stuck for long outputs
https://ask.sagemath.org/question/51533/python-multiprocessing-module-stuck-for-long-outputs/

I am using SageMath 9.0 and I tried to do parallel computation in two ways:
1) the `@parallel` decorator built into SageMath;
2) the `multiprocessing` module of Python.
When using the `@parallel` decorator, everything works fine. When using the `multiprocessing` module for the same problem with the same input, everything works fine for short outputs, but SageMath gets stuck after the computation when the output is long. I monitored the CPU usage: it peaks at first and then returns to zero, which suggests that the computation is complete. However, the output still does not appear.
What puzzles me is that the problem depends on the length of the output, not the time of computation. For the same computation, once I add a line to manually set the output to be something short, or extract a small part of the original output, the computation no longer gets stuck at the end, and that small part agrees with the original answer.
I would like to know if there is any hidden parameter that prevents the Python `multiprocessing` module from producing long outputs in SageMath.
P.S. Following the suggestion of @tmonteil, I attached my code with an example.
def f(n):
    return 2 ^ (n ^ n)

def g(n):
    return factor(2 ^ (n ^ n))

def run(function, parameters):
    from multiprocessing import Process, Queue
    def target_function(x, queue):
        queue.put(function(*x))
    results = list()
    if __name__ == "__main__":
        queue = Queue()
        processes = [Process(target=target_function, args=(i, queue)) for i in parameters]
        for p in processes:
            p.start()
        for p in processes:
            p.join()
        results = [queue.get() for p in processes]
    return results
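A likely culprit (an assumption, but consistent with the symptom): the Python documentation warns against joining a process that uses a `Queue` before draining the queue, because a child blocks in `queue.put()` until its data is consumed; a short output fits in the pipe buffer, a long one does not, so `p.join()` deadlocks. A sketch of the fix, with `queue.get()` moved before `join()`, and with `target_function` lifted to module level (taking `function` as an argument) so it can also be pickled under the 'spawn' start method:

```python
from multiprocessing import Process, Queue

def target_function(function, x, queue):
    # Module-level so it is picklable under any start method.
    queue.put(function(*x))

def run(function, parameters):
    queue = Queue()
    processes = [Process(target=target_function, args=(function, i, queue))
                 for i in parameters]
    for p in processes:
        p.start()
    # Drain the queue BEFORE joining: a child flushing a large result
    # blocks in queue.put() until the parent consumes it.
    results = [queue.get() for _ in processes]
    for p in processes:
        p.join()
    return results
```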
On my computer, the command
run(f,[(3,),(4,),(5,),(6,)])
works fine but the command
run(f,[(7,)])
gets stuck. On the other hand, the command
run(g,[(3,),(4,),(5,),(6,),(7,),(8,),(9,),(10,)])
works fine. Note that the function g does strictly more work than f, but its outputs are much shorter.

dzhy444, Sun, 24 May 2020 15:38:40 +0200
https://ask.sagemath.org/question/51533/

getting RESetMapReduce to use multiple cores
https://ask.sagemath.org/question/50208/getting-resetmapreduce-to-use-multiple-cores/

I am reading the [documentation](https://doc.sagemath.org/html/en/reference/parallel/sage/parallel/map_reduce.html#protocol-description) on how to parallelize recursive-enumeration computations, and I cannot for the life of me get `RESetMapReduce` to use more than one core of my machine.
I am using the example immediately before the bulleted item titled **Generating series** on that page, with a minor modification:
sage: from sage.parallel.map_reduce import RESetMapReduce
sage: S = RESetMapReduce(
....:     roots=[[]],
....:     children=lambda l: [l + [0], l + [1]] if len(l) < 32 else [],
....:     map_function=lambda x: 1,
....:     reduce_function=lambda x, y: x + y,
....:     reduce_init=0)
sage: S.run()
I changed the length to 32 to make the computation heftier. If I run this and watch the process with `htop` I can see it using only one of my 8 cores to `100%` capacity, ignoring the others.
I have even tried passing the argument `max_proc=8` to the `.run()` method, to no effect.

grobber, Tue, 10 Mar 2020 04:05:58 +0100
https://ask.sagemath.org/question/50208/

Parallel computation for different functions
https://ask.sagemath.org/question/50013/parallel-computation-for-different-functions/

For a single function with a list of inputs, the `@parallel` decorator can be used to do parallel computation. I am wondering whether it is possible to do parallel computation for different functions.
A simple example is to calculate the difference f-g of two independent functions f and g. How can I ask SageMath to simultaneously compute the values of f and g, and then calculate the difference?
My time measurements suggest that when calculating the difference f-g, SageMath actually calculates f and g one after another, and then take the difference.
I can think of a naive approach using the @parallel decoration. I can create a function whose input variable is the name of the functions I want to compute simultaneously. Then I use a list of function names as input to get a generator of outputs. This may work if the final result doesn't depend on the order of the outputs, but does not work if the order matters, for example when taking the difference.
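When the order matters, futures solve exactly this (a sketch with the stdlib; `f` and `g` are hypothetical stand-ins for the expensive functions): both calls run concurrently, and `result()` reattaches each value to the right name regardless of which finishes first.

```python
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return x ** 3   # stand-in for an expensive function

def g(x):
    return 2 * x    # stand-in for another expensive function

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        fut_f = executor.submit(f, 10)  # starts immediately
        fut_g = executor.submit(g, 10)  # runs concurrently with f
        # The names fut_f/fut_g fix the order, whatever finishes first.
        diff = fut_f.result() - fut_g.result()
    print(diff)  # 1000 - 20 = 980
```

The same `submit`-then-`result` pattern generalizes to a computation flow chart: submit every job whose inputs are ready, and call `result()` exactly where a later stage needs the value.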
In general, suppose I have a flow chart for the computation; in other words, I already know which jobs can be parallelized and how the information flows to the next stage. What is then the best way to implement the flow chart?

dzhy444, Fri, 21 Feb 2020 08:37:16 +0100
https://ask.sagemath.org/question/50013/

Cython problem with map-reduce
https://ask.sagemath.org/question/46839/cython-problem-with-map-reduce/

Below is some minimal code using map-reduce:
# %attach SAGE/test.spyx
from sage.parallel.map_reduce import RESetMapReduce
cpdef test(int n):
    S = RESetMapReduce(roots=[[]],
                       children=lambda l: [l+[0], l+[1]] if len(l) <= n else [],
                       map_function=lambda x: 1,
                       reduce_function=lambda x, y: x+y,
                       reduce_init=0)
    return S.run()
whose compilation reveals a Cython problem (`Error compiling Cython file`, `AttributeError: 'ClosureScope' object has no attribute 'scope_class'`); see below:
┌────────────────────────────────────────────────────────────────────┐
│ SageMath version 8.3, Release Date: 2018-08-03 │
│ Type "notebook()" for the browser-based notebook interface. │
│ Type "help()" for help. │
└────────────────────────────────────────────────────────────────────┘
sage: %attach SAGE/test.spyx
Compiling ./SAGE/test.spyx...
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-506a10c79ce0> in <module>()
----> 1 get_ipython().magic(u'attach SAGE/test.spyx')
/opt/sagemath-8.3/local/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in magic(self, arg_s)
2158 magic_name, _, magic_arg_s = arg_s.partition(' ')
2159 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2160 return self.run_line_magic(magic_name, magic_arg_s)
2161
2162 #-------------------------------------------------------------------------
/opt/sagemath-8.3/local/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in run_line_magic(self, magic_name, line)
2079 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2080 with self.builtin_trap:
-> 2081 result = fn(*args,**kwargs)
2082 return result
2083
<decorator-gen-115> in attach(self, s)
/opt/sagemath-8.3/local/lib/python2.7/site-packages/IPython/core/magic.pyc in <lambda>(f, *a, **k)
186 # but it's overkill for just that one bit of state.
187 def magic_deco(arg):
--> 188 call = lambda f, *a, **k: f(*a, **k)
189
190 if callable(arg):
/opt/sagemath-8.3/local/lib/python2.7/site-packages/sage/repl/ipython_extension.pyc in attach(self, s)
156 sage: shell.quit()
157 """
--> 158 return self.shell.ex(load_wrap(s, attach=True))
159
160 @line_magic
/opt/sagemath-8.3/local/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in ex(self, cmd)
2421 """Execute a normal python statement in user namespace."""
2422 with self.builtin_trap:
-> 2423 exec(cmd, self.user_global_ns, self.user_ns)
2424
2425 def ev(self, expr):
<string> in <module>()
/opt/sagemath-8.3/local/lib/python2.7/site-packages/sage/repl/load.pyc in load(filename, globals, attach)
265 if attach:
266 add_attached_file(fpath)
--> 267 exec(load_cython(fpath), globals)
268 elif ext == '.f' or ext == '.f90':
269 from sage.misc.inline_fortran import fortran
/opt/sagemath-8.3/local/lib/python2.7/site-packages/sage/repl/load.pyc in load_cython(name)
65 """
66 from sage.misc.cython import cython
---> 67 mod, dir = cython(name, compile_message=True, use_cache=True)
68 import sys
69 sys.path.append(dir)
/opt/sagemath-8.3/local/lib/python2.7/site-packages/sage/misc/cython.pyc in cython(filename, verbose, compile_message, use_cache, create_local_c_file, annotate, sage_namespace, create_local_so_file)
636 You can fix your code by adding "from {} cimport {}".
637 """.format(pxd, name))
--> 638 raise RuntimeError(cython_messages.strip())
639
640 if verbose >= 0:
RuntimeError: Error compiling Cython file:
------------------------------------------------------------
...
# %attach SAGE/test.spyx
from sage.parallel.map_reduce import RESetMapReduce
cpdef test(int n):
^
------------------------------------------------------------
_home_sage_SAGE_test_spyx_0.pyx:5:6: closures inside cpdef functions not yet supported
Error compiling Cython file:
------------------------------------------------------------
...
# %attach SAGE/test.spyx
from sage.parallel.map_reduce import RESetMapReduce
cpdef test(int n):
S = RESetMapReduce(roots = [[]],children = lambda l: [l+[0], l+[1]] if len(l) <= n else [],map_function = lambda x : 1,reduce_function = lambda x,y: x+y,reduce_init = 0)
^
------------------------------------------------------------
_home_sage_SAGE_test_spyx_0.pyx:6:44: Compiler crash in CreateClosureClasses
ModuleNode.body = StatListNode(_home_sage_SAGE_test_spyx_0.pyx:3:0)
StatListNode.stats[1] = CFuncDefNode(_home_sage_SAGE_test_spyx_0.pyx:5:6,
args = [...]/1,
modifiers = [...]/0,
needs_closure = True,
overridable = 1,
visibility = u'private')
CFuncDefNode.body = StatListNode(_home_sage_SAGE_test_spyx_0.pyx:6:1,
is_terminator = True)
StatListNode.stats[0] = SingleAssignmentNode(_home_sage_SAGE_test_spyx_0.pyx:6:19)
SingleAssignmentNode.rhs = GeneralCallNode(_home_sage_SAGE_test_spyx_0.pyx:6:19,
is_temp = 1,
result_is_used = True,
use_managed_ref = True)
GeneralCallNode.keyword_args = DictNode(_home_sage_SAGE_test_spyx_0.pyx:6:20,
is_dict_literal = True,
is_temp = 1,
obj_conversion_errors = [...]/0,
reject_duplicates = True,
result_is_used = True,
use_managed_ref = True)
DictNode.key_value_pairs[1] = DictItemNode(_home_sage_SAGE_test_spyx_0.pyx:6:33,
result_is_used = True,
use_managed_ref = True)
DictItemNode.value = LambdaNode(_home_sage_SAGE_test_spyx_0.pyx:6:44,
args = [...]/1,
binding = True,
is_temp = 1,
lambda_name = 'lambda',
name = u'<lambda>',
needs_closure = True,
needs_self_code = True,
pymethdef_cname = u'__pyx_mdef_27_home_sage_SAGE_test_spyx_0_4test_lambda',
result_is_used = True,
use_managed_ref = True)
Compiler crash traceback from this point on:
File "Cython/Compiler/Visitor.py", line 180, in Cython.Compiler.Visitor.TreeVisitor._visit
return handler_method(obj)
File "/opt/sagemath-8.3/local/lib/python2.7/site-packages/Cython/Compiler/ParseTreeTransforms.py", line 2764, in visit_LambdaNode
self.create_class_from_scope(node.def_node, self.module_scope, node)
File "/opt/sagemath-8.3/local/lib/python2.7/site-packages/Cython/Compiler/ParseTreeTransforms.py", line 2713, in create_class_from_scope
func_scope.scope_class = cscope.scope_class
AttributeError: 'ClosureScope' object has no attribute 'scope_class'

Sébastien Palcoux, Thu, 06 Jun 2019 22:51:24 +0200
https://ask.sagemath.org/question/46839/

How to use a GPU for HPC?
https://ask.sagemath.org/question/46754/how-to-use-a-gpu-for-hpc/

From a Sage code parallelized with map_reduce, what should be done for the computation to use a GPU (for high-performance computing)?

Sébastien Palcoux, Sat, 01 Jun 2019 04:43:25 +0200
https://ask.sagemath.org/question/46754/

Is map_reduce working for parallel computing? How?
https://ask.sagemath.org/question/46711/is-map_reduce-working-for-parallel-computing-how/

The following computation was done on a computer with 16 CPUs.
![image description](/upfiles/15592161891060146.png)
sage: seeds = [[]]
....: succ = lambda l: [l+[0], l+[1]] if len(l) <= 22 else []
....: S = RecursivelyEnumeratedSet(seeds, succ,structure='forest', enumeration='depth')
....: map_function = lambda x: 1
....: reduce_function = lambda x,y: x+y
....: reduce_init = 0
....: %time S.map_reduce(map_function, reduce_function, reduce_init)
....:
CPU times: user 15 ms, sys: 47 ms, total: 62 ms
Wall time: 58.4 s
16777215
But it seems that the computation did not exploit the CPUs in parallel, as the following screenshot shows.
**Question**: What's wrong? How to exploit the CPUs in parallel?
![image description](/upfiles/15592157095194797.png)

Sébastien Palcoux, Thu, 30 May 2019 13:39:40 +0200
https://ask.sagemath.org/question/46711/

How to implement a parallelization?
https://ask.sagemath.org/question/46682/how-to-implement-a-parallelization/

Consider a code of the kind *tree exploration amassing fruits*: when the procedure arrives at a new node of the tree, if it is a leaf a possible fruit is collected; otherwise a computation is done to determine the children of the node. I would like to parallelize as follows: the children are allocated to all the available CPUs, each CPU has a queue, and a given child is allocated to the CPU with the smallest queue.
It seems to be a generic way to parallelize such a tree exploration.
**Question**: How to implement such a parallelization?
In addition, how to use GPU (for HPC)?
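For the CPU side, one generic sketch with the stdlib (using a single shared task queue rather than per-CPU queues, which achieves similar load balancing; the `children` function is a hypothetical stand-in for the real node computation): workers pull nodes from a queue, push the children back, and collect leaves as fruits.

```python
import queue
from multiprocessing import JoinableQueue, Process, Queue

def children(node):
    # Hypothetical stand-in: a complete binary tree of depth 3.
    return [node + [0], node + [1]] if len(node) < 3 else []

def worker(tasks, fruits):
    while True:
        node = tasks.get()
        kids = children(node)
        if not kids:
            fruits.put(node)   # a leaf: collect the fruit
        for kid in kids:
            tasks.put(kid)     # any idle worker will pick these up
        tasks.task_done()

if __name__ == "__main__":
    tasks, fruits = JoinableQueue(), Queue()
    for _ in range(4):
        Process(target=worker, args=(tasks, fruits), daemon=True).start()
    tasks.put([])              # the root
    tasks.join()               # blocks until every queued node is processed
    leaves = []
    try:
        while True:
            leaves.append(fruits.get(timeout=1))
    except queue.Empty:
        pass
    print(len(leaves))  # the depth-3 binary tree has 8 leaves
```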
The code has the following form:
cpdef function(list L1, list L2):
    cdef int i,n #...
    cdef list LL1,LL2 #...
    #...
    # core of the code
    #...
    n= #...
    for i in range(n):
        LL1= #...
        LL2= #...
        function(LL1,LL2)

Sébastien Palcoux, Mon, 27 May 2019 07:14:24 +0200
https://ask.sagemath.org/question/46682/

Fast numerical computation of eigenvalues
https://ask.sagemath.org/question/44531/fast-numerical-computation-of-eigenvalues/

I am trying to calculate the eigenvalues of a large (say, n=1000 to 10000) sparse Hermitian square matrix using Sage:
    numpy.linalg.eigvalsh(MyMatrix)

takes very long. I noticed it utilizes only a single core of my CPU.
How would one go about speeding up the calculation? Specifically, I'm looking for a solution using parallel computation, or maybe something which is "more compiled".
Thank you.

SageMathematician, Sat, 01 Dec 2018 19:10:10 +0100
https://ask.sagemath.org/question/44531/