
Robin_F's profile - activity

2023-05-14 06:58:31 +0200 received badge  Famous Question (source)
2023-04-22 01:23:24 +0200 received badge  Notable Question (source)
2023-03-30 15:15:12 +0200 received badge  Popular Question (source)
2020-06-03 16:58:31 +0200 commented question Parallelizing symbolic matrix*vector computations for sparse matrices and vectors

I will, once I have a solution to the problem. For now I will experiment with different multiprocessing tools; the first attempt is described above and was highly unsuccessful.

2020-06-03 16:57:09 +0200 received badge  Editor (source)
2020-06-03 13:29:17 +0200 commented question Parallelizing symbolic matrix*vector computations for sparse matrices and vectors

It doesn't appear to be the case that the denominators are growing, but I will check this more thoroughly; thanks a lot for this hint. Regarding the type of parallelization: I really need to do one M*v operation at a time because the next steps depend on the result. I figured that splitting the matrix M into n lists of sparse row vectors, where n is the number of cores, could potentially work. Then I can iterate over the lists in parallel, so it should take roughly 1/n times the computation time. The fortunate thing is that the matrices M do not change throughout the entire computation.
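The splitting described in this comment can be sketched in plain Python (dicts stand in for Sage's sparse vectors here; all names are illustrative, not from the original post):

```python
# Plain-Python sketch of the chunking idea: dicts {index: value}
# stand in for Sage's sparse vectors.

def row_times_v(indexed_row, v):
    # one row*vector product; cost is proportional to the row's nonzeros
    i, row = indexed_row
    return i, sum(a * v.get(k, 0) for k, a in row.items())

def m_times_v_chunked(rows, v, n):
    # split the nonzero rows into n lists (one per core); here the chunks
    # are processed serially, but each inner loop is independent and
    # could run in its own worker
    chunks = [rows[i::n] for i in range(n)]
    coeffs = {}
    for chunk in chunks:
        for i, s in (row_times_v(r, v) for r in chunk):
            if s != 0:
                coeffs[i] = s
    return coeffs

rows = [(0, {0: 1, 2: 2}), (3, {1: 5})]   # nonzero rows of a sparse M
v = {0: 3, 1: 1, 2: 1}                    # sparse vector
m_times_v_chunked(rows, v, 2)             # -> {0: 5, 3: 5}
```

Since the chunks are independent and M never changes, the outer loop over `chunks` is exactly the part that could be handed to n workers.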

2020-05-28 18:29:31 +0200 received badge  Student (source)
2020-05-28 12:30:24 +0200 commented question What is the most useful basefield for finite matrix groups?

This question about finiteness already came up here: https://ask.sagemath.org/question/38403/check-if-a-finitely-generated-matrix-group-is-finite-works-with-qq-and-not-with-cc/ (Check if a finitely generated matrix group is finite). There it is pointed out that it doesn't work over floating-point fields like CC, but the case of number fields is not really addressed.

2020-05-28 12:04:56 +0200 commented question Parallelizing symbolic matrix*vector computations for sparse matrices and vectors

As I said, it may be possible to normalize everything such that QQ works. So for now, just assume the entries to be in QQ.

2020-05-28 12:01:23 +0200 commented question What is the most useful basefield for finite matrix groups?

Thank you for the quick reply. Working over number fields actually makes it possible to enter the matrices in a somewhat useful form. The main problem is not solved, however: Sage still does not realize that this group is in fact finite.

z = QQ['z'].0
K = NumberField(z^2 - 3, 's'); K
s = K.gen()
r1 = matrix(K, [[-1/2, -1/2*s], [1/2*s, -1/2]])
r2 = matrix(K, [[-1/2, 1/2*s], [-1/2*s, -1/2]])
rotations_new = [r1, r2]
gens = rotations_new
G2 = MatrixGroup(gens)
G2.order()
2020-05-26 19:32:40 +0200 asked a question Parallelizing symbolic matrix*vector computations for sparse matrices and vectors

Hi everyone, I am currently running computations which involve many operations of the type matrix*vector, where both objects are sparse and the result can be expected to be sparse as well. The matrices' base field is the symbolic ring; if I drop certain normalizations, I think I could also work over QQ. The operations occur sequentially. Although there are some options for parallelizing these at least in part, the biggest acceleration would be achieved if the multiplication itself could be parallelized. Is there a way to do this in Sage?

Greetings and thank you very much.

Edit: Work in progress

Below I put some code that I have produced so far. It is very far from a solution and produces new slow-downs that I do not understand. The strategy is to store a sparse matrix by its nonzero rows, which is done by the following code snippet:

def default_to_row(M):
    # collect the nonzero rows of M together with their row indices
    rows_of_M = M.rows()
    relevant = []
    for r in range(len(rows_of_M)):
        row = rows_of_M[r]
        if dot_sparse(row, row) != 0:
            relevant.append([r, row])
    return relevant

The row*vector multiplications are facilitated by the function

# indexed_row is of type [i, v] where v is a sparse vector and i is
# the row in the matrix it corresponds to.
def mult_sparse(indexed_row, v2):
    res = 0
    for k in indexed_row[1].dict():
        res = res + indexed_row[1][k]*v2[k]
    return [indexed_row[0], res]

and regular vector multiplication is given by

def dot_sparse(v, w):
    res = 0
    for k in v.dict():
        res = res + v[k]*w[k]
    return res

Now to the actual benchmarks:

import concurrent.futures
import time
import itertools
dim=5000
M=random_matrix(QQ,nrows=dim,ncols=dim,sparse=True,density=0.0001)
M_by_rows=default_to_row(M)
v=vector(QQ,dim,list(range(dim)),sparse=True)
print(len(M_by_rows))

Using the concurrent.futures library yields the following performance:

start = time.perf_counter()
with concurrent.futures.ProcessPoolExecutor() as executor:
    start2 = time.perf_counter()
    results = executor.map(mult_sparse, M_by_rows, itertools.repeat(v))
    finish2 = time.perf_counter()
    coefficients={}
    for result in results:
        if result[1]!=0:
            coefficients[result[0]]=result[1]
    new_v=vector(QQ,v.degree(),coefficients,sparse=True)

finish = time.perf_counter()

print(f'Finished in {round(finish-start, 2)} seconds with Pooling.')
print(f'The executor alone took {round(finish2-start2, 2)} seconds.')

where the dimensions are adjusted so that the computation does not exceed my RAM capacity, because every spawned subprocess takes copies (at least of v). A comparison with serial approaches, both the standard one and the serial version of my particular strategy, shows that the parallel version is significantly slower (18 and 7 seconds in the outputs above), while the serial versions do not differ (0.02 seconds in both cases):

start = time.perf_counter()
new_v=M*v
finish = time.perf_counter()

print(f'Finished in {round(finish-start, 2)} second(s) for the regular computation')
start = time.perf_counter()
coefficients={}
for row in M_by_rows:
    coefficients[row[0]]=mult_sparse(row,v)[1]
new_v=vector(QQ,v.degree(),coefficients,sparse=True)    
finish = time.perf_counter()

print(f'Finished in {round(finish-start, 2)} second(s) in a serial computation')

print('Check if the vectors coincide:')
test = new_v - M*v
n = dot_sparse(test, test)
print(n == 0)
2020-05-26 19:32:40 +0200 asked a question What is the most useful basefield for finite matrix groups?

Hi everyone, I am trying to write a notebook that does some elementary computations in a finite matrix group, such as conjugacy classes and character tables. It is meant to be used in cases where the group is realized explicitly as a subgroup of GL(3,R) or GL(2,R). My problem is that if I want to implement rotations like

rotations=[]
for t in [1/3,2/3]:
    rotations.append(matrix([[cos(2*pi*t),-sin(2*pi*t)],[sin(2*pi*t),cos(2*pi*t)]]))

the base ring will be the symbolic ring SR because there is a sqrt(3) in some of the entries. If I then try to compute the characters via

gens=rotations
G2=MatrixGroup(gens)
G2.character_table()

I will get an error message. My assumption is that, because of the base ring SR, Sage does not realize that this is in fact a finite group. What would a suitable base field be to make this computation possible?

Thanks in advance.