ASKSAGE: Sage Q&A Forum - RSS feed
https://ask.sagemath.org/questions/
Q&A Forum for Sage
Copyright Sage, 2010. Some rights reserved under Creative Commons license.
Thu, 22 Oct 2020 15:55:53 +0200

Determinant of large sparse symbolic matrices
https://ask.sagemath.org/question/54004/determinant-of-large-sparse-symbolic-matrices/

I am working on a symbolic circuit simulation program for electronic circuits (SLiCAP). I have a MATLAB (MuPAD) version and a Python version available.
The key task of such a program is calculating the determinant of sparse matrices with symbolic entries. The MuPAD (MATLAB Symbolic Math Toolbox) version calculates the determinant of a sparse 52x52 matrix with one symbolic variable (the Laplace variable) in about one minute (minor expansion; exact algorithm unknown). The Python version uses Maxima and the newdet method (Gentleman-Johnson algorithm). This method is limited to 50x50 matrices, but I had to reduce the size to 30x30 because of memory paging errors reported by Lisp.
Now I would like to try SageMath with the "df" algorithm for this purpose, but it has already been running for more than 30 minutes ...
I know the Gentleman-Johnson method is included in PyNAC, but it doesn't seem to be exposed in the Sage wrapper.
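For context, by the Gentleman-Johnson method I mean minor expansion that shares sub-minors across cofactors. A rough plain-Python sketch of the idea (my own illustration, unrelated to PyNAC's actual implementation and without its sparsity shortcuts):

```python
from itertools import combinations

def det_minor_expansion(M):
    """Determinant by row-wise minor expansion with shared sub-minors
    (the idea behind Gentleman-Johnson): minors[S] holds the determinant
    of the submatrix formed by rows 0..k-1 and the column set S, so each
    minor is computed once instead of once per cofactor path."""
    n = len(M)
    minors = {frozenset(): 1}
    for k in range(n):                      # expand along row k
        new_minors = {}
        for cols in combinations(range(n), k + 1):
            sign = (-1) ** k                # cofactor sign along the last row
            s = 0
            for c in cols:                  # cols is sorted
                s += sign * M[k][c] * minors[frozenset(cols) - {c}]
                sign = -sign
            new_minors[frozenset(cols)] = s
        minors = new_minors
    return minors[frozenset(range(n))]

print(det_minor_expansion([[1, 2], [3, 4]]))   # -2
```

The entries only need `+` and `*`, so the same sketch works for polynomial entries.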
My questions:
1. Can SageMath be forced to use the Gentleman-Johnson algorithm included in PyNAC?
2. If not, can a wrapper be built so that it can be used?
3. If so, can someone help me with this?
Thanks in advance for replying!
SLiCAP, Thu, 22 Oct 2020 15:55:53 +0200
https://ask.sagemath.org/question/54004/

Coefficients of a polynomial including zero coefficients
https://ask.sagemath.org/question/52632/coefficients-of-a-polynomial-including-zero-coefficients/

I want to obtain the list of coefficients of a polynomial,
including the zero coefficients.
I tried this:
R.<x> = QQ[]
f = x^4 - x^2 + 1
A = f.coefficients()
A
This gave the answer `[1, -1, 1]`.
But I want to get `[1, 0, -1, 0, 1]`.

Laurian, Thu, 23 Jul 2020 09:14:56 +0200
https://ask.sagemath.org/question/52632/

Parallelizing symbolic matrix*vector computations for sparse matrices and vectors
https://ask.sagemath.org/question/51560/parallelizing-symbolic-matrixvector-computations-for-sparse-matrices-and-vectors/

Hi everyone,
I am currently running computations which involve a lot of operations of the type matrix*vector, where both objects are sparse and the result can be expected to be sparse as well. The matrices' base field is the symbolic ring. If I drop certain normalizations I could also work over QQ, I think.
The above operations appear sequentially. Although there are some options for parallelizing these at least in part, the biggest acceleration would be achieved if the multiplication itself could be parallelized. Is there a way to do this in Sage?
Greetings and thank you very much.
Edit: Work in progress
Below is the code I have produced so far; it is far from a solution and exhibits new slow-downs that I do not understand. The strategy is to store a sparse matrix by its nonzero rows, which is facilitated by the following code snippet:
def default_to_row(M):
    rows_of_M = M.rows()
    relevant = []
    for r in range(len(rows_of_M)):
        row = rows_of_M[r]
        if dot_sparse(row, row) != 0:
            relevant.append([r, row])
    return relevant
The row*vector multiplications are facilitated by the function
# indexed_row is of type [i, v] where v is a sparse vector and i is
# the row in the matrix it corresponds to.
def mult_sparse(indexed_row, v2):
    res = 0
    for k in indexed_row[1].dict():
        res = res + indexed_row[1][k] * v2[k]
    return [indexed_row[0], res]
and regular vector multiplication is given by
def dot_sparse(v, w):
    res = 0
    for k in v.dict():
        res = res + v[k] * w[k]
    return res
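A variant I still want to try (a plain-Python sketch with dicts standing in for Sage sparse vectors; the helper names are my own and this is untested inside Sage): hand every worker a whole chunk of rows, so the per-task pickling overhead gets amortized instead of being paid once per row.

```python
from concurrent.futures import ProcessPoolExecutor

def rows_times_vector(args):
    """Multiply a chunk of (row_index, row_dict) pairs by a vector dict;
    only nonzero results are kept."""
    rows, vd = args
    out = {}
    for i, rd in rows:
        s = sum(val * vd.get(k, 0) for k, val in rd.items())
        if s != 0:
            out[i] = s
    return out

def par_matvec(rows, vd, nproc=4):
    """Split the indexed rows into nproc round-robin chunks and farm them
    out; 'rows' is a list of (row_index, row_dict) pairs, 'vd' a
    {coordinate: value} dict for the sparse vector."""
    chunks = [rows[i::nproc] for i in range(nproc)]
    result = {}
    with ProcessPoolExecutor(max_workers=nproc) as ex:
        for part in ex.map(rows_times_vector, [(c, vd) for c in chunks]):
            result.update(part)
    return result

# e.g. rows_times_vector(([(0, {0: 1, 2: 2})], {0: 1, 2: 3}))  ->  {0: 7}
```

Each task still pickles a copy of the vector, but only nproc times rather than once per row.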
Now to the actual benchmarks:
import concurrent.futures
import time
import itertools
dim=5000
M=random_matrix(QQ,nrows=dim,ncols=dim,sparse=True,density=0.0001)
M_by_rows=default_to_row(M)
v=vector(QQ,dim,list(range(dim)),sparse=True)
print(len(M_by_rows))
Using the concurrent.futures library yields the following performance:
start = time.perf_counter()
with concurrent.futures.ProcessPoolExecutor() as executor:
    start2 = time.perf_counter()
    results = executor.map(mult_sparse, M_by_rows, itertools.repeat(v))
    finish2 = time.perf_counter()
    coefficients = {}
    for result in results:
        if result[1] != 0:
            coefficients[result[0]] = result[1]
    new_v = vector(QQ, v.degree(), coefficients, sparse=True)
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} seconds with Pooling.')
print(f'The executor alone took {round(finish2-start2, 2)} seconds.')
where the dimensions are adjusted so that the computation does not exceed my RAM capacity, because every spawned subprocess takes copies (at least of v). A comparison with serial approaches, both the standard one and the serial version of my particular strategy, shows that the parallel version is significantly slower (18 and 7 seconds in the above outputs) while the serial versions do not differ (0.02 seconds in both cases):
start = time.perf_counter()
new_v=M*v
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s) for the regular computation')
start = time.perf_counter()
coefficients={}
for row in M_by_rows:
    coefficients[row[0]] = mult_sparse(row, v)[1]
new_v=vector(QQ,v.degree(),coefficients,sparse=True)
finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s) in a serial computation')
print('Check if the vectors coincide:')
test = new_v - M*v
n = dot_sparse(test, test)
if n == 0:
    print(True)
else:
    print(False)

Robin_F, Tue, 26 May 2020 18:00:39 +0200
https://ask.sagemath.org/question/51560/

ChainComplex() runs 24 times slower than homology()
https://ask.sagemath.org/question/44101/chaincomplex-runs-24-times-slower-than-homology/

I load a list of matrices `bdrs` representing a chain complex; their dimensions are
{1, 21, 210, 1330, 5985, 20349, 54264, 116280, 203490, 293930, 352716, 352716, 293930, 203490, 116280, 54264, 20349, 5985, 1330, 210, 21, 1}, and the largest has density 3.91*10^-6. In total they take up 50 MB of disk space. Loading them finishes in 63 seconds.
When I run `chcx=ChainComplex(bdrs,base_ring=GF(2))`, it takes 7hrs20min, but `chcx.homology()` finishes in only 18min. **Why does it take so long to just store a few matrices?** At first I thought that `ChainComplex()` also does some simplifications/reductions, but `[chcx.free_module_rank(i) for i in range(0,21)]` shows the original dimensions of matrices :/.
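One workaround I am considering for the $\mathbb{Z}_2$ case, bypassing `ChainComplex()` entirely (a throwaway plain-Python sketch of bit-packed Gaussian elimination, my own code and not a Sage API): over GF(2) the homology dimensions follow from plain ranks via $\dim H_k = \operatorname{nullity}(d_k) - \operatorname{rank}(d_{k+1})$.

```python
def rank_gf2(rows):
    """Rank of a GF(2) matrix given as a list of int bitmasks (one per row).
    Classic elimination: keep one reduced row per leading-bit position."""
    pivots = {}                       # leading-bit position -> pivot row
    rank = 0
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in pivots:
                r ^= pivots[lead]     # eliminate the current leading bit
            else:
                pivots[lead] = r
                rank += 1
                break
    return rank

# Demo: a hollow triangle (3 vertices; 3 edges e0=v0v1, e1=v1v2, e2=v2v0).
# Rows of d1 are vertices, bits are edges.
d1 = [0b101, 0b011, 0b110]
r1 = rank_gf2(d1)
# dim H_1 = nullity(d1) - rank(d2) = (3 - r1) - 0
print(3 - r1)   # 1
```

This gives Betti numbers over GF(2) from plain ranks, but I would prefer a supported route.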
**Is there a faster way to compute the homology** of a chain complex (over $\mathbb{Z}$ or $\mathbb{Z}_p$)?

Leon, Sun, 28 Oct 2018 09:55:39 +0100
https://ask.sagemath.org/question/44101/

Multiplying sparse matrices
https://ask.sagemath.org/question/43979/multiplying-sparse-matrices/

Can Sage multiply sparse matrices? If so, does it require some special syntax? I've been trying to run the following code:
M = Matrix(QQ, 1000001, 1000001, {(3,2): 27, (2,98): 71})
M2 = M*M
and it hasn't been working. If M is being converted to a dense matrix for multiplication, then of course I wouldn't expect this to run. But it shouldn't be hard to do with sparse matrices, right?

PW, Thu, 18 Oct 2018 20:49:55 +0200
https://ask.sagemath.org/question/43979/

Sparse morphism versus sparse matrix
https://ask.sagemath.org/question/10543/sparse-morphism-versus-sparse-matrix/

I would like to use a VectorSpaceMorphism when the underlying domain, codomain and matrix are sparse. However, the morphism seems to convert everything to dense. The following is from Sage 5.9.
sage: V = VectorSpace(QQ, 2, sparse=True)
sage: V
Sparse vector space of dimension 2 over Rational Field
sage: A = matrix(QQ, 2, 2, [[5, 6], [7, 8]], sparse=True)
sage: type(A)
sage.matrix.matrix_rational_sparse.Matrix_rational_sparse
sage: f = V.hom(A, V)
sage: f.matrix()
[5 6]
[7 8]
sage: type(f.matrix())
sage.matrix.matrix_rational_dense.Matrix_rational_dense
V.hom doesn't take the keyword 'sparse'.
sage: g = V.hom(A, V, sparse=True)
...
TypeError: hom() got an unexpected keyword argument 'sparse'
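A partial workaround I am using in the meantime (a sketch that sidesteps the morphism object entirely): multiply by the sparse matrix directly, which I believe keeps everything sparse.

```
sage: v = V([1, 0])     # sparse, since V is a sparse vector space
sage: w = v * A         # should equal f(v) under Sage's row-vector convention
```

This avoids the dense round-trip but of course loses the morphism interface.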
Let v be a vector in V, which is sparse. Tests with pdb show that when we call f(v), it converts v to dense behind the scenes, multiplies by the dense matrix, and
converts the dense vector result back to sparse. Of course this is slow. The more fundamental issue is that, in some of my applications, A will be too large to store densely.

mmcconnell, Tue, 17 Sep 2013 01:08:35 +0200
https://ask.sagemath.org/question/10543/

Read sparse matrix in Harwell-Boeing Exchange Format
https://ask.sagemath.org/question/9879/read-sparse-matrix-in-harwell-boeing-exchange-format/

I'm currently creating some really large matrices, and hope they will end up reasonably sparse. I would like to toy with them in Sage some day. Looking for a suitable standardized storage format, I just learned about the [Harwell-Boeing Exchange Format](http://math.nist.gov/MatrixMarket/formats.html#hb). **Is there some functionality provided by Sage to read such files?**
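A fallback I am considering in case there is no HB reader (a sketch; it assumes I can just as well write the matrices in the related MatrixMarket format from the same NIST page, which SciPy, shipped with Sage, reads):

```python
# Sketch: round-trip a sparse matrix through the MatrixMarket format with
# SciPy, then convert the entries to a {(i, j): value} dict that Sage's
# matrix(..., sparse=True) constructor accepts.
import os, tempfile
import scipy.sparse
from scipy.io import mmread, mmwrite

m = scipy.sparse.coo_matrix(([2.0, 3.0], ([0, 1], [1, 0])), shape=(2, 2))
path = os.path.join(tempfile.mkdtemp(), "demo.mtx")
mmwrite(path, m)

m2 = mmread(path).tocoo()
entries = {(int(i), int(j)): float(v)
           for i, j, v in zip(m2.row, m2.col, m2.data)}
print(entries)   # both nonzero entries; iteration order may vary
# In Sage: matrix(QQ, m2.shape[0], m2.shape[1], entries, sparse=True)
```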
If not, should I rather read the file and construct a matrix from it using a standard matrix constructor, or should I create my own matrix subclass with an internal format that closely resembles the file format?

MvG, Wed, 06 Mar 2013 17:06:33 +0100
https://ask.sagemath.org/question/9879/

Out of core exact linear algebra
https://ask.sagemath.org/question/8810/out-of-core-exact-linear-algebra/

I'm trying to solve a large (17000x250000) sparse (density = 0.01) system over the rationals. Are there any routines in Sage (or outside it) that could give the reduced row echelon form, or the null space?
Everything I've looked into either runs out of memory or isn't exact.

tagitagi, Wed, 21 Mar 2012 17:21:05 +0100
https://ask.sagemath.org/question/8810/

Efficient kernel-computing of sparse GF2 matrices
https://ask.sagemath.org/question/7841/efficient-kernel-computing-of-sparse-gf2-matrices/

To compute the kernel of a sparse GF(2) matrix A, I simply do
A.kernel()
But for (highly) sparse GF(2) matrices there are (many) much faster algorithms to compute the kernel. Surely some of them are implemented in Sage, but with which command do I apply them? (The tutorial and online help couldn't answer that.)

Mustafa, Sun, 09 Jan 2011 22:30:58 +0100
https://ask.sagemath.org/question/7841/