ASKSAGE: Sage Q&A Forum - RSS feed
https://ask.sagemath.org/questions/
Q&A Forum for Sage
Copyright Sage, 2010. Some rights reserved under creative commons license.

Sun, 14 May 2023 17:03:42 +0200
Is there a way to optimize this equation-solving code to run in a reasonable timeframe?
https://ask.sagemath.org/question/68436/is-there-a-way-to-optimize-this-equation-solving-code-to-run-in-a-reasonable-timeframe/

I have the following lines in Sage:
var('A, B, C, E, α, β, γ, d')
f = lambda n : (
(12*A + 6*B + 4*C + 2*E + 4*α + 2*β + γ)^n
- (11*A + 6*B + 4*C + 2*E + 4*α + 2*β + γ)^n
- (11*A + 5*B + 4*C + 2*E + 4*α + 2*β + γ)^n
+ (10*A + 5*B + 4*C + 2*E + 4*α + 2*β + γ)^n
- (10*A + 5*B + 3*C + 2*E + 4*α + 2*β + γ)^n
+ ( 9*A + 5*B + 3*C + 2*E + 4*α + 2*β + γ)^n
+ ( 9*A + 4*B + 3*C + 2*E + 4*α + 2*β + γ)^n
- ( 8*A + 4*B + 3*C + 2*E + 4*α + 2*β + γ)^n
- ( 8*A + 4*B + 3*C + E + 4*α + 2*β + γ)^n
+ ( 7*A + 4*B + 3*C + E + 4*α + 2*β + γ)^n
+ ( 7*A + 3*B + 3*C + E + 4*α + 2*β + γ)^n
- ( 6*A + 3*B + 3*C + E + 4*α + 2*β + γ)^n
+ ( 6*A + 3*B + 2*C + E + 4*α + 2*β + γ)^n
- ( 6*A + 3*B + 2*C + E + 3*α + 2*β + γ)^n
- ( 6*A + 3*B + 2*C + E + 3*α + β + γ)^n
+ ( 6*A + 3*B + 2*C + E + 2*α + β + γ)^n
- ( 6*A + 3*B + 2*C + E + 2*α + β)^n
+ ( 6*A + 3*B + 2*C + E + α + β)^n
+ ( 6*A + 3*B + 2*C + E + α)^n
- ( 6*A + 3*B + 2*C + E)^n
+ ( 6*A + 3*B + C + E)^n
- ( 5*A + 3*B + C + E)^n
- ( 5*A + 2*B + C + E)^n
+ ( 4*A + 2*B + C + E)^n
+ ( 4*A + 2*B + C)^n
- ( 3*A + 2*B + C)^n
- ( 3*A + B + C)^n
+ ( 2*A + B + C)^n
- ( 2*A + B)^n
+ ( A + B)^n
+ A^n
)/factorial(n)
solve([f(3) == 0, f(4) == 0, f(5) == d], α, β, γ)
I want Sage to solve this system of equations for the three Greek-letter variables in terms of the five English-letter variables. However, when I ask Sage to solve this, it just sits and spins at maximum CPU usage and steadily-increasing RAM usage until I kill the process. I've waited over fifteen minutes without it completing.
Are there any ways to make this more performant?

ZL, Sun, 14 May 2023 17:03:42 +0200
https://ask.sagemath.org/question/68436/

How to determine expression size before printing?
https://ask.sagemath.org/question/66982/how-to-determine-expression-size-before-printing/

Hello,
doing:
> expression = 10^10^7*x
takes less than a second
but doing:
> len(str(expression))
takes a lot of time.
I would like to quickly check the size of the output expression and, if it is too large, throw an exception. Is there a fast way to do it?
If
> expression = 10^10^7
I could use
> expression.ndigits()
but if the expression contains pi, x, etc., I cannot come up with any solution other than len(str(..)), which is time-consuming.
Thanks :)

vesolovski, Sat, 18 Mar 2023 19:52:26 +0100
https://ask.sagemath.org/question/66982/

efficient generation of restricted divisors
https://ask.sagemath.org/question/66763/efficient-generation-of-restricted-divisors/

This question is in the same vein as my [previous one](https://ask.sagemath.org/question/59852/) on generation of unitary divisors.
This time I'm interested in generating all divisors of a given *factored* integer that are below a given bound `B`. For simplicity I will focus on just counting such divisors.
A straightforward way is to use the `divisors()` function:
import itertools
def divbelow_1(n,B):
return sum(1 for _ in itertools.takewhile(lambda d: d<=B, divisors(n)))
The downside is that this function constructs a list of *all* divisors of $n$ (there are $\tau(n)$ of them), which limits its applicability. Still, if this list fits into memory, `divbelow_1()` works quite fast. I'm interested in the case when $\tau(n)$ is large, but the number of divisors below `B` is moderate. Ideally, I'd like to have a function that has performance comparable to `divbelow_1(n,B)` when `B==n`, and that at the same time avoids generating divisors above `B`.
My attempt was:
# N = list(factor(n)), a list of pairs (prime,exponent) forming the factorization of n
def divbelow_2(N,B,i=0):
if B <= 1:
return B
if i>=len(N):
return 1
p,e = N[i]
result = 0
for k in (0..e):
result += divbelow_2(N,B,i+1)
B //= p
if B==0:
break
return result
While it looks quite efficient to me from a theoretical perspective, it loses badly to `divbelow_1()` in practice. Here is a particular example:
sage: %time divbelow_1(47987366728745697656468743117440000, 61448226992991993)
CPU times: user 4.73 s, sys: 368 ms, total: 5.1 s
Wall time: 5.1 s
10074213
sage: %time divbelow_2(list(factor(47987366728745697656468743117440000)), 61448226992991993)
CPU times: user 1min 31s, sys: 7.89 ms, total: 1min 31s
Wall time: 1min 31s
10074213
I can cope with a 2-3 fold slowdown, but not with an 18-fold one. So, my question is:
> Q: Why is `divbelow_2()` so slow, and is there room for improvement (without using `divisors()`)?
**ADDED**. I suspect the drastic slowdown is caused by the recursion overhead. Here is a non-recursive version that generates the list of all divisors below `B`:
def divbelow_7(n,B):
div = [1]
for p, e in factor(n):
ext = div
for i in range(e):
ext = [r for d in ext if (r := d * p) <= B]
div.extend(ext)
return len(div)
On the above example it even beats `divbelow_1()`:
sage: %time divbelow_7(47987366728745697656468743117440000, 61448226992991993)
CPU times: user 1.97 s, sys: 176 ms, total: 2.15 s
Wall time: 2.15 s
10074213

Max Alekseyev, Fri, 03 Mar 2023 19:13:01 +0100
https://ask.sagemath.org/question/66763/

two ways to extend a list with drastically different performance
https://ask.sagemath.org/question/66807/two-ways-to-extend-a-list-with-drastically-different-performance/

This is a spin-off from my [previous question](https://ask.sagemath.org/question/66763), where we have a function:
def divbelow_5(n,B):
div = [1]
for p, e in factor(n):
pn = 1
prev = div.copy()
for i in range(e):
pn *= p
div.extend(r for d in prev if (r := d * pn) <= B)
return len(div)
I've decided to optimize and beautify it a bit into:
def divbelow_6(n,B):
div = [1]
for p, e in factor(n):
L = len(div)
div.extend(r for i in range(L) for k in (1..e) if (r := div[i] * p^k) <= B)
return len(div)
I'd expect `divbelow_6` to have performance comparable to (if not better than) `divbelow_5`'s, but the reality puzzled me a lot:
sage: %time divbelow_5(47987366728745697656468743117440000, 61448226992991993)
CPU times: user 2.22 s, sys: 240 ms, total: 2.46 s
Wall time: 2.46 s
10074213
sage: %time divbelow_6(47987366728745697656468743117440000, 61448226992991993)
CPU times: user 1min 39s, sys: 326 ms, total: 1min 39s
Wall time: 1min 39s
10074213
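One observable difference, reproducible in plain Python: the generator in `divbelow_6` recomputes `p^k` from scratch for every base divisor, whereas `divbelow_5` carries a running prime power `pn`. A tiny sketch isolating just that effect (made-up numbers, not the actual divisor computation):

```python
import time

p, e, reps = 1_000_003, 10, 20_000

t0 = time.perf_counter()
# power rebuilt from scratch each time, as in divbelow_6's inner generator
s_pow = sum(p**k for _ in range(reps) for k in range(1, e + 1))
t_pow = time.perf_counter() - t0

t0 = time.perf_counter()
# one multiplication per step, as in divbelow_5's running product
s_run = 0
for _ in range(reps):
    pn = 1
    for k in range(e):
        pn *= p
        s_run += pn
t_run = time.perf_counter() - t0

assert s_pow == s_run
print(f"p**k each time: {t_pow:.3f}s  running product: {t_run:.3f}s")
```

This alone need not account for the whole gap, but it is the kind of per-candidate cost that is multiplied by ten million divisors here.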
Why is `divbelow_6` so much slower than `divbelow_5`?

Max Alekseyev, Sun, 05 Mar 2023 15:05:32 +0100
https://ask.sagemath.org/question/66807/

Growing span of vectors
https://ask.sagemath.org/question/66394/growing-span-of-vectors/

I'm trying to generate a basis of a vector space that satisfies certain properties. The method I'm using is kind of randomized, but fast; the bottleneck at the moment is how to maintain a structure that serves to check whether the current element is in the span or not. In code it would be something like this:
while V.dimension() < n:
w = random vector
if has_good_property(w) and w not in V:
        V += span([w])
This code recomputes the echelonized basis of V on each iteration and I feel that it could be made more efficient.
Any suggestions?
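A generic sketch of the incremental bookkeeping in plain Python over `Fraction` (not Sage's vector-space API; the names are made up for illustration): keep the accepted vectors in row-echelon form, and reduce each candidate against them once instead of re-echelonizing the whole basis every iteration.

```python
from fractions import Fraction

def try_add(echelon, w):
    """Reduce w against the stored echelon rows (increasing pivot order);
    if a nonzero residual survives, normalize and store it (w grew the
    span) and return True, else return False (w was already in the span).
    One pass costs O(dim * len(w)) instead of a full re-echelonization."""
    w = list(w)
    for pivot, row in sorted(echelon.items()):
        c = w[pivot]
        if c:
            w = [a - c * b for a, b in zip(w, row)]
    for i, wi in enumerate(w):
        if wi:
            echelon[i] = [x / wi for x in w]  # scale so the pivot entry is 1
            return True
    return False

echelon = {}  # pivot column -> normalized row
for v in ([1, 2, 0], [2, 4, 0], [0, 0, 5]):
    try_add(echelon, [Fraction(x) for x in v])
print(len(echelon))  # dimension of the span -> prints 2
```

The same pattern should carry over to Sage vectors, since it only needs field arithmetic on the entries.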
quimey, Tue, 14 Feb 2023 22:05:14 +0100
https://ask.sagemath.org/question/66394/

The performance speed of "len(str(n))" method and "a^b % p" method in Sage
https://ask.sagemath.org/question/63916/the-performance-speed-of-lenstrn-method-and-ab-p-method-in-sage/
Hi.
Using the same Jupyter notebook, I've found that identical code runs much faster under the SageMath kernel than under the Python kernel.
The code is as follows.
import timeit
def modular():
return 3**10000000 % (2**127-1)
def countlength():
return len(str(3**10000000))
print(timeit.timeit(modular, number=1))
print(timeit.timeit(countlength, number=1))
In SageMath, the running time is less than 1 second for both functions.
In Python, on the other hand, "modular" runs 5 times slower, and the "countlength" function does not even finish.
It makes me curious: what's the magic behind the speed of SageMath?
Recently I've been working on algorithm and code optimization, so any advice from a senior would be appreciated.
Thanks.
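For what it's worth, part of the `modular()` gap can be reproduced inside plain Python itself: `3**n % m` materializes the full power before reducing, while the three-argument `pow` does true modular exponentiation. A small sketch (with the exponent shrunk from the original so it finishes quickly):

```python
import time

m = 2**127 - 1
n = 1_000_000  # smaller than the original 10_000_000, just for a quick comparison

t0 = time.perf_counter()
full = 3**n % m       # builds a roughly 1.6-million-bit integer, then reduces it
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
fast = pow(3, n, m)   # reduces mod m at every step; intermediates stay small
t_fast = time.perf_counter() - t0

assert full == fast
print(f"power-then-mod: {t_full:.3f}s  three-arg pow: {t_fast:.3f}s")
```

That said, the kernels' different handling of integer literals (Sage preparses them into its own GMP-backed Integer type) is presumably the main effect being observed in the question.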
Vibap, Mon, 05 Sep 2022 19:38:04 +0200
https://ask.sagemath.org/question/63916/

The performance speed of SageMath Cell
https://ask.sagemath.org/question/62250/the-performance-speed-of-sagemath-cell/

Hi. I'm thinking of embedding SageMath Cell into some personal webpage.
Does the calculation speed of SageMath Cell depend heavily on the performance or specs of the personal computer?
If so, is there any way that I can improve the calculation speed of SageMath Cell server?
Thanks.
Vibap, Sun, 01 May 2022 18:42:55 +0200
https://ask.sagemath.org/question/62250/

efficient generation of unitary divisors
https://ask.sagemath.org/question/59852/efficient-generation-of-unitary-divisors/

A divisor $d\mid n$ is called unitary if it is coprime to its cofactor, i.e. $\gcd(d,\frac{n}{d})=1$. Since for each prime power $p^k\| n$ we either have $p^k\| d$ or $p\nmid d$, this leads to the following seemingly efficient code to generate all unitary divisors:
def unitary_divisors1(n):
return sorted( prod(p^k for p,k in f) for f in Subsets(factor(n)) )
For comparison, the naive approach by filtering the divisors of $n$ would be:
def unitary_divisors2(n):
return [d for d in divisors(n) if gcd(d,n//d)==1]
From a theoretical perspective, `unitary_divisors1(n)` is much more efficient than `unitary_divisors2(n)` for powerful/squareful numbers $n$, and should be comparable for square-free numbers $n$. However, in practice `unitary_divisors1(n)` loses badly to `unitary_divisors2(n)` when $n$ is square-free (in which case every divisor of $n$ is unitary):
sage: N = prod(nth_prime(i) for i in (1..20))
sage: %time len(unitary_divisors1(N))
CPU times: user 26.8 s, sys: 64 ms, total: 26.9 s
Wall time: 26.9 s
1048576
sage: %time len(unitary_divisors2(N))
CPU times: user 672 ms, sys: 16 ms, total: 688 ms
Wall time: 688 ms
1048576
In this example with `N` being the product of first 20 primes, `unitary_divisors1(N)` is over 40 times slower than `unitary_divisors2(N)`.
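For comparison, the subset products themselves can be generated without `Subsets` at all, by doubling a running list one prime power at a time (a plain-Python sketch over an explicit factorization list, not a drop-in replacement for `unitary_divisors1`):

```python
from math import gcd

def unitary_divisors_dp(factorization):
    """Unitary divisors from [(p, k), ...]: each prime power p^k is
    either absent or present in full, so the list doubles per prime."""
    divs = [1]
    for p, k in factorization:
        q = p**k
        divs += [d * q for d in divs]
    return sorted(divs)

# check against the naive gcd filter on n = 2^3 * 3 * 5^2 = 600
n = 600
naive = [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]
assert unitary_divisors_dp([(2, 3), (3, 1), (5, 2)]) == naive
```

This avoids both the `Subsets` object overhead and the repeated `prod` calls, which may be where `unitary_divisors1` loses its time.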
Is there a way to generate unitary divisors that works with efficiency close to `divisors(n)` when $n$ is squarefree or almost squarefree, and that takes advantage of the known structure of unitary divisors?

Max Alekseyev, Sat, 20 Nov 2021 21:48:51 +0100
https://ask.sagemath.org/question/59852/

Integer arithmetic performance
https://ask.sagemath.org/question/58722/integer-arithmetic-performance/

Hi!
I think Python/Sage is really slow with integer arithmetic. I have an example below that can illustrate this.
My question is why is this the case, and is there some way to improve it?
(My guess is this is probably due to memory allocation, i.e., for large integers the operations are not performed in place. But I tried the mutable integer type `xmpz` from the `gmpy2` library, and I don't see any improvement in performance... Also, I'm curious how integers in Sage are handled by default: does it use the GMP or FLINT library?)
---
Here is the example.
Given a list of integers, I want to compute the sum of the products of all its k-combinations. The naive solution is `sum(prod(c) for c in Combinations(vec, k))`. But the performance of this is not very good. Here is a version to do the enumeration of combinations manually.
def sum_c(k, w):
def dfs(k, n):
if k < 1:
ans[0] += pp[0]
else:
for m in range(k, n+1):
pp[k-1] = pp[k] * w[m-1]
dfs(k-1, m-1)
ans, pp = [0], [0]*k + [1]
dfs(k, len(w))
return ans[0]
from time import time
t = time()
sum_c(8, list(range(1, 31)))
print("sum_c:\t", time() - t)
from sage.all import *
t = time()
sum(prod(c) for c in Combinations(range(1, 31), 8))
print("naive:\t", time() - t)
The speed doubled with `sum_c`.
sum_c: 2.283874988555908
naive: 5.798710346221924
But with C this can be computed in less than 0.1s...
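As an aside on the algorithm itself: the quantity being computed is the elementary symmetric polynomial $e_k(w)$, which also has a simple $O(nk)$ dynamic program needing no explicit enumeration of combinations (a plain-Python sketch, separate from the `sum_c` approach above):

```python
from itertools import combinations
from math import prod

def esp(w, k):
    """e_k(w): sum of products of all k-element combinations of w,
    via the recurrence e_j(w + [x]) = e_j(w) + x * e_{j-1}(w)."""
    e = [1] + [0] * k              # e[j] for the prefix of w seen so far
    for x in w:
        for j in range(k, 0, -1):  # descend so each x is used at most once
            e[j] += e[j - 1] * x
    return e[k]

w = list(range(1, 31))
assert esp(w, 8) == sum(prod(c) for c in combinations(w, 8))
```

For n = 30, k = 8 this does a few hundred big-integer operations instead of millions, so the interpreter overhead stops mattering.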
---
**Edit.** OK I tried `Cython` and `gmpy2`, and I'm able to get a satisfactory result for my problem.
Still I wonder if there is any way to improve the performance without doing something so "manual"...
And here is my "translation" for reference.
# distutils: libraries = gmp
from gmpy2 cimport *
import numpy as np
import_gmpy2()
cdef extern from "gmp.h":
void mpz_init(mpz_t)
void mpz_init_set_si(mpz_t, long)
void mpz_set_si(mpz_t, long)
void mpz_add(mpz_t, mpz_t, mpz_t)
void mpz_mul_si(mpz_t, mpz_t, long)
cdef dfs(long k, long n, list w, mpz ans, mpz[:] pp):
cdef long m
if k < 1:
mpz_add(ans.z, ans.z, pp[0].z)
else:
for m in range(k, n+1):
mpz_mul_si(pp[k-1].z, pp[k].z, w[m-1])
dfs(k-1, m-1, w, ans, pp)
def sum_c(long k, list w):
cdef long i
cdef mpz ans = GMPy_MPZ_New(NULL)
cdef mpz[:] pp = np.array([GMPy_MPZ_New(NULL) for i in range(k+1)])
mpz_init(ans.z)
for i in range(k):
mpz_init(pp[i].z)
mpz_init_set_si(pp[k].z, 1)
dfs(k, len(w), w, ans, pp)
    return int(ans)

8d1h, Sat, 28 Aug 2021 23:35:53 +0200
https://ask.sagemath.org/question/58722/

Performance of PolynomialRing evaluation
https://ask.sagemath.org/question/57281/performance-of-polynomialring-evaluation/

Hi,
I am having a performance issue with the evaluation of a PolynomialRing element, which I find suspicious. I would like to understand if I am doing things correctly here.
The polynomial is calculated as a function of a set of matrices, whose logic is not relevant here.
In particular, the polynomial I am benchmarking has 16 variables and is 15,000 terms long, which means evaluating it takes (on average) 120,000 multiplications and 15,000 additions, for a total of 135,000 operations.
On my laptop, a single evaluation takes 4 seconds, in other words one operation every 30 µs.
This seems weird, given the speed of modern computers and especially compared to the speed of calculating the polynomial - the *representativeSetPoly* in the code. Indeed I was expecting that part to be slower than the verification.
Is there a way I can evaluate the polynomial more efficiently (the verification part in the code)? I am attaching a code snippet to reproduce the issue.
Thanks in advance!
import time
import numpy as np
from sage.all import *
from tqdm import tqdm
WORKLOAD_DIM = 16
PRIME_NUM = 109211909
workload = np.random.randint(2, size=(2, 50, WORKLOAD_DIM), dtype=np.int8)
variables = ["x" + str(counter) for counter in range(1, WORKLOAD_DIM + 1)]
P = PolynomialRing(GF(PRIME_NUM), variables, order='lex')
def calculateRowPoly(rowP):
poly = 1
for index, elem in enumerate(rowP):
rowMask = [0] * WORKLOAD_DIM
rowMask[index] = 1
poly = poly * (1 - P.monomial(*rowMask) * elem)
return poly
#=== Start finding solution (the polynomial)
representativeSetPoly = 0
t0solve = time.perf_counter()
for row in tqdm(workload[1]):
representativeSetPoly = representativeSetPoly + calculateRowPoly(row)
t1solve = time.perf_counter()
print(f"== It took {t1solve - t0solve} seconds to find a solution!")
#=== End finding solution
print(type(representativeSetPoly))
#=== Start verification
numOrthoVecVerifier = 0
t0verify = time.perf_counter()
for vec in tqdm(workload[0]):
numOrthoVecVerifier += representativeSetPoly(*vec)
t1verify = time.perf_counter()
print(f"== It took {t1verify - t0verify} seconds to verify the solution")
#=== End verification
27am, Wed, 26 May 2021 10:12:47 +0200
https://ask.sagemath.org/question/57281/

More system resources for SageMath
https://ask.sagemath.org/question/56372/more-system-resources-for-sagemath/

I am using SageMath under Windows 10 with the Jupyter notebook in Firefox. I have installed SageMath using the Windows installer binaries.
I wonder if there are ways to improve the performance of SageMath, i.e., to use more CPU and/or RAM to finish the computations faster.

tolga, Thu, 25 Mar 2021 08:14:33 +0100
https://ask.sagemath.org/question/56372/

Why is PolynomialRing over SR much slower than SR?
https://ask.sagemath.org/question/56028/why-is-polynomialring-over-sr-much-slower-than-sr/

In my code, I was trying to invert a matrix whose entries are symbolic expressions that depend on a single variable (called "tau").
Because the dependency on tau in my code is polynomial, at some point I realized that, in principle, it would be better to use the FractionField of the PolynomialRing in one variable (tau) over the symbolic ring (SR), instead of using the symbolic ring with a new variable defined as var("tau"). I was expecting this to improve performance, since SageMath then knows there cannot be any log(tau) or exp(tau) or the like.
But, instead, everything became hundreds of times slower. In particular, the solve_right method, for a 4 x 4 matrix, requires more than half an hour instead of 0.1 seconds. The simplify method also seems to be a lot slower.
Do you know why this happens? I have tried to find which algorithm SageMath uses for solving a linear system, but I did not find anything. Does it use a different algorithm for polynomials and for symbolic expressions? I suspect that for polynomials over a "standard" ring (like, for example, QQ), SageMath uses Singular, while when solving a system with a matrix over the symbolic ring it uses Maxima. Therefore, maybe the problem is related to the fact that, when I use a polynomial ring over SR, Sage must fall back on an internal implementation. Do you know if this is the case? And why does this also affect the method simplify?
The following snippet generates two matrices with the same content (but using a different ring):
def build_matrix(poly_ring=False):
if poly_ring:
tau_ring = PolynomialRing(SR, 'tau')
tau_field = FractionField(tau_ring)
tau = tau_ring.gen(0)
M = matrix(tau_field, 2, 2)
f = vector(tau_field, [1, 1])
else:
tau = var('tau')
M = matrix(SR, 2, 2)
f = vector(SR, [1, 1])
    M[0, 0] = (e^(2/3) + 1) * tau + e^2
M[0, 1] = e^-1 * tau + 3
M[1, 0] = tau + 2
    M[1, 1] = 3 * tau + e^(1/2) + e^4
return M, f
And here are the benchmarks I see:
m1, f1 = build_matrix(True)
m2, f2 = build_matrix(False)
%timeit m1.inverse()
>> 345 ms ± 14.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit m2.inverse()
>> 2.57 ms ± 52.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit m1.solve_right(f1)
>> 370 ms ± 72.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit m2.solve_right(f2)
>> 2.5 ms ± 21.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Thank you!

step, Fri, 05 Mar 2021 14:23:18 +0100
https://ask.sagemath.org/question/56028/

Python kernel seriously slower than SageMath kernel
https://ask.sagemath.org/question/55851/python-kernel-seriously-slower-than-sagemath-kernel/

Hi, I just found that the Python kernel is seriously slower than the SageMath one.
This was the snippet to reproduce:
# intersection profiling with Python session
from sage.all import *
def run(num):
cube = (polytopes.cube() * 37 / 45).change_ring(QQ)
for i in range(num):
hspace = Polyhedron(ieqs=[[1/2, 1/(i+1), 0, 0]]).change_ring(QQ)
intersection = hspace.intersection(cube)
%timeit run(100)
The timing for the SageMath kernel is `1 loop, best of 5: 186 ms per loop`, while that for Python is `1 loop, best of 5: 7.07 s per loop`. With this snippet I think no variable was cached. I also profiled (with cProfile) a stand-alone .py file with the same lines (except the `%timeit` line), whose timing is close to the Python kernel's. Any idea what could cause this huge performance difference?
[Image: which kernel to use](https://ibb.co/DYg6f7F)

zhaiyu, Tue, 23 Feb 2021 23:40:29 +0100
https://ask.sagemath.org/question/55851/

Performance Question
https://ask.sagemath.org/question/50057/performance-question/

I asked my students to write a function to check the order of an element in a multiplicative group. Most wrote something like:
def naive_find_order1(g):
G = g.parent()
i = 1
while g^i != G.identity():
i += 1
return i
I told them that this was wasteful, as it computes powers in every iteration, and suggested:
def naive_find_order2(g):
G = g.parent()
i = 1
h = g
while h != G.identity():
h *= g
i += 1
return i
On the other hand, when testing both, they run in practically the same amount of time.
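The near-tie is reproducible in plain Python with ordinary modular arithmetic (a sketch, not Sage's group API): here `pow(g, i, m)` uses fast binary exponentiation, so recomputing the power costs only about $\log_2 i$ multiplications per iteration rather than $i$ of them, which keeps the two loops in the same ballpark.

```python
def order_pow(g, m):
    """Recompute g^i from scratch each iteration (fast exponentiation)."""
    i = 1
    while pow(g, i, m) != 1:
        i += 1
    return i

def order_mul(g, m):
    """One modular multiplication per iteration."""
    i, h = 1, g
    while h != 1:
        h = h * g % m
        i += 1
    return i

assert order_pow(3, 7) == order_mul(3, 7) == 6  # powers of 3 mod 7: 3,2,6,4,5,1
```

Whether Sage additionally caches powers for particular group element types is a separate question; the log-factor argument alone already predicts a small gap for moderate orders.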
My question is: how is that so? The second function performs only one group multiplication per loop, while the first, I assume, computes powers in every iteration. Does Sage/Python cache the powers? Any insight would be greatly appreciated.

lrfinotti, Tue, 25 Feb 2020 15:07:17 +0100
https://ask.sagemath.org/question/50057/

Performance Upgrades With Python3?
https://ask.sagemath.org/question/42945/performance-upgrades-with-python3/

Are there any?

o6p, Thu, 12 Jul 2018 12:07:07 +0200
https://ask.sagemath.org/question/42945/

Performance Issues [slow/commands not working]
https://ask.sagemath.org/question/42006/performance-issues-slowcommands-not-working/

I love Sage and I think it's the best, but I'm having some serious performance issues. Sometimes it takes up to 15 seconds to compute trivial expressions. Most of the time, after some period of usage or after invoking specific commands (I haven't observed what causes this), the clear command becomes dysfunctional. I attach the logs:
0 [main] python2.7 14516 child_info_fork::abort: address space needed by 'eclucve31.dll' (0x400000) is already occupied
---------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-50-b74f34915750> in <module>()
----> 1 get_ipython().magic(u'clear ')
/opt/sagemath-8.1/local/lib/python2.7/site-packages/IPython/core/interactiveshell.py in magic(self, arg_s)
2156 magic_name, _, magic_arg_s = arg_s.partition(' ')
2157 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2158 return self.run_line_magic(magic_name, magic_arg_s)
2159
2160 #-------------------------------------------------------------------------
/opt/sagemath-8.1/local/lib/python2.7/site-packages/IPython/core/interactiveshell.py in run_line_magic(self, magic_name, line)
2077 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2078 with self.builtin_trap:
-> 2079 result = fn(*args,**kwargs)
2080 return result
2081
/opt/sagemath-8.1/local/lib/python2.7/site-packages/IPython/core/alias.py in __call__(self, rest)
185 cmd = '%s %s' % (cmd % tuple(args[:nargs]),' '.join(args[nargs:]))
186
--> 187 self.shell.system(cmd)
188
189 #-----------------------------------------------------------------------------
/opt/sagemath-8.1/local/lib/python2.7/site-packages/IPython/core/interactiveshell.py in system_raw(self, cmd)
2245 try:
2246 # Use env shell instead of default /bin/sh
-> 2247 ec = subprocess.call(cmd, shell=True, executable=executable)
2248 except KeyboardInterrupt:
2249 # intercept control-C; a long traceback is not useful here
/opt/sagemath-8.1/local/lib/python2.7/subprocess.py in call(*popenargs, **kwargs)
166 retcode = call(["ls", "-l"])
167 """
--> 168 return Popen(*popenargs, **kwargs).wait()
169
170
/opt/sagemath-8.1/local/lib/python2.7/subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
388 p2cread, p2cwrite,
389 c2pread, c2pwrite,
--> 390 errread, errwrite)
391 except Exception:
392 # Preserve original exception in case os.close raises.
/opt/sagemath-8.1/local/lib/python2.7/subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
915 gc.disable()
916 try:
--> 917 self.pid = os.fork()
918 except:
919 if gc_was_enabled:
OSError: [Errno 11] Resource temporarily unavailable
sage: clear
0 [main] python2.7 20456 child_info_fork::abort: address space needed by 'eclucve31.dll' (0x400000) is already occupied
    (the same OSError traceback repeats, again ending in OSError: [Errno 11] Resource temporarily unavailable)
o6p, Fri, 13 Apr 2018 19:18:59 +0200
https://ask.sagemath.org/question/42006/

Implicit Plot 3D - ThreeJS
https://ask.sagemath.org/question/41517/implicit-plot-3d-threejs/

I'm trying to plot several algebraic surfaces, most of which have singularities and other special properties. Using `implicit_plot3d` and `viewer='threejs'` works fine, for instance:
var('x,y,z')
p=x^3+y^3+z^3+1-0.25*(x+y+z+1)^3
r=5
color='aquamarine'
s = implicit_plot3d(p==0, (x,-r,r), (y,-r,r), (z,-r,r), plot_points=50, region=lambda x,y,z: x**2+y**2+z**2<=r*r, color=color)
s.show(frame=False, viewer='threejs')
But when increasing to `plot_points=100`, SageCell does not return any output ([demo](https://sagecell.sagemath.org/?z=eJw9jrEOwiAURfcm_Qc2aMGmtXExvtXPsEFFxdCCj6rA14saHe7JGe5wHhIZDSKKRKuycBB2PY95Ka9btM1yVbPAI0-8q3Z9WSCsyuJgjUWg8naXo0Q9KVoWngDRozP6oOfBGTv3R-YAWkFYEAsUWGWLf0s_e18HZ_U0e-jafEd11nYCI8f9UZJP2pqEul7y-EbK2ADWKMg348Pc7ht_sU92Qjkq2ErjlSAPrZ4ql84XVOrqafUCywVItg==&lang=sage)); in a Sage notebook, everything works fantastically. I guess the query may be too heavy for SageCell...
Is there a way to optimize the code / query for SageCell to plot algebraic surfaces with acceptable quality?

jepstra, Tue, 13 Mar 2018 11:11:37 +0100
https://ask.sagemath.org/question/41517/

Factorial(10000000) Performance between Ubuntu and Mac OS X on Sage 7.5.1
https://ask.sagemath.org/question/36609/factorial10000000-performance-between-ubuntu-and-mac-os-x-on-sage-751/

On Ubuntu 6.10 and a Core i7 quad-core platform running Sage 7.5.1:
I tried
time a=factorial(10000000)
It completed in around 3 seconds.
On a 2015 MacBook Pro 15" with a 5th-generation quad-core Core i7, the same factorial evaluation completed in about 5 seconds.
The same 3-second time was observed even when Ubuntu was run in a VM on the same Core i7 processor.
nrsaxena, Tue, 14 Feb 2017 02:46:56 +0100
https://ask.sagemath.org/question/36609/

Difference of performance between a core i5 and core i7?
https://ask.sagemath.org/question/7913/difference-of-performance-bewteen-a-core-i5-and-core-i7/

We are planning to use Sage for a calculus course, and got some money to buy a dedicated computer. There will be around 20 students using it simultaneously, doing mostly simple computations on it.
We have offers for an Intel Core i5 (4 threads) and a Core i7 (8 threads), both with 16 GB of RAM.
Would the difference in performance be worth the 100 euros of difference in price? Or, stated another way: how important is the number of processor threads for the performance of a Sage server?

mmarco, Sun, 30 Jan 2011 07:09:37 +0100
https://ask.sagemath.org/question/7913/

Sage 6.7 is very slow on OSX 10.9
https://ask.sagemath.org/question/28680/sage-67-is-very-slow-on-osx-109/

I recently installed Sage 6.7 from source on my home computer, running OSX 10.9 (Mavericks). It is sloooooow. Just starting Sage from a terminal window takes a good two minutes; something simple like using the ? feature can take up to a minute; and serious power computation is hopeless. My computer has plenty of RAM and disk space, so I don't think that's the problem. I had the same problems with the previous version (5.something), but on other computers the performance is much better. I would be very grateful for suggestions about how to speed things up, particularly from others who have encountered this issue. Thanks!
Jeremy Martin, Tue, 21 Jul 2015 17:03:17 +0200
https://ask.sagemath.org/question/28680/

Good computer for fast computation
https://ask.sagemath.org/question/8116/good-computer-for-fast-computation/

Not sure if this is a good place to ask this or not, but I'm thinking of buying a new computer (PC, Windows 7) sometime in the next few months, and I would like it to be pretty fast. It will be a home computer that I will use personally (I don't do anything computer-intensive like gaming), but I would also love it to run Sage quickly if I am doing a lot of calculations. At the same time, I don't want to break the bank. I'll probably spend $1500 or less.
I don't know exactly what to ask, but I wonder if any of you know which aspects of a computer I should emphasize the most? I mean, I know I want fast processors and a lot of RAM. I may try to get an Intel i7 with 8 cores, and I'll definitely get at least 8 GB of RAM. Is a graphics card important here? I have heard of graphics cards being very good at certain computations, but perhaps you'd need to program specifically to have the graphics card do those calculations? Anything else I should look at?
A related question: is there a way to divert resources to Sage? I'm currently searching for a graph minor (that I know doesn't exist), for example, and it looks like it's taking up 50% of the system RAM. Are there any simple ways to force it to use more resources to make it run faster?

G-Sage, Tue, 17 May 2011 15:53:44 +0200
https://ask.sagemath.org/question/8116/

Running Sage from other languages with high(er) performance?
https://ask.sagemath.org/question/23431/running-sage-from-other-languages-with-higher-performance/

I'm in the process of creating a CAS tool that uses Sage as the underlying math engine. It's a web application, so most of the code is HTML/JavaScript with some underlying PHP, from which I'd like to use Sage. Currently I've resorted to the `-c` command line option, executing it through PHP with `exec()`.
This approach is extremely slow (takes several seconds and CPU load is fairly high). This is - as far as I'm aware - a result of Sage having to load all its libraries each time it's run.
Is there any way I can optimize this approach? I would prefer an approach similar to the `-c` option where I can just send a command and read the printed result.
I have no issues writing a wrapper in C (or Python if necessary), nor working with sockets (if any are available), provided it can be done as simply as the `-c` option (the WebSocket messages seen in Sage Cell seem too much trouble to handle manually).
What I need to know is mostly whether it's even possible; if so, where to start; and otherwise, whether there are any alternative approaches.
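To make it concrete, the kind of persistent wrapper I have in mind would look something like this sketch. The name `EngineWorker` and the one-line-in/one-line-out protocol are just illustration (not an existing Sage API), and a plain `python3` child stands in for the `sage` binary, which would additionally need its prompts and multi-line output handled:

```python
import subprocess
import sys

class EngineWorker:
    """Keep one engine process alive; send commands over stdin and read
    one result line back per command, avoiding per-call startup cost."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def ask(self, expression):
        # One expression in, one result line out.
        self.proc.stdin.write(expression + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()

# Stand-in "engine": a Python child that evaluates each input line.
# With Sage installed one would spawn the sage binary here instead.
worker = EngineWorker([sys.executable, "-u", "-c",
                       "import sys\nfor line in sys.stdin: print(eval(line))"])
print(worker.ask("2 + 3"))           # -> 5
print(worker.ask("sum(range(10))"))  # -> 45
worker.close()
```

Each `ask()` call reuses the same child process, so the multi-second library startup would be paid only once; the PHP front end could then talk to a small daemon like this over a socket instead of calling `exec()` per request.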
**Why not use Sage Cell/Cloud?**
Several reasons:
- I don't have complete control of the UI, so I can't actually embed a Sage Cell.
- The CAS tool is part of a much larger system with other services sharing resources.
- Both are a bit overkill for the job since they include a UI and webserver - I just need to send the command and read the response.WoodgnomeWed, 16 Jul 2014 11:45:14 +0200https://ask.sagemath.org/question/23431/Which packages optimize performance parameters during build?https://ask.sagemath.org/question/10837/which-packages-optimize-performance-parameters-during-build/Which packages use performance testing during the build process to optimize it? For example, I know NTL and Atlas do some testing to pick the best parameters for the platform. What other packages do this?
If I want to rebuild the performance testing packages during system "low-load" times, how should that be done? Like this? [*Edited to fix character case*]
$ ./sage -f ntl && ./sage -f atlas ... && ./sage -b
Are there dependencies? In what order should they be rebuilt?rickhg12hsTue, 17 Dec 2013 21:14:26 +0100https://ask.sagemath.org/question/10837/Speed comparison with numerical arrays: sage, R, matlabhttps://ask.sagemath.org/question/10671/speed-comparison-with-numerical-arrays-sage-r-matlab/Dear all
I am a Sage newbie coming from Matlab, R, and Maple, and although I really like many features, I find that my Sage code for handling numerical arrays is much more complex and much slower than the corresponding code in R and Matlab.
Here's an example:
# Simulating a sample path of standard Brownian motion
I want to simulate a random path of standard Brownian motion. The input is a vector of time points, and I want returned a (random) trajectory, sampled at these time points.
To do this I need: 1) Fast random number generation 2) Fast array operations: Diff, multiplication, and cumsum. I found that in sage I need to go through NumPy, so I wrote:
sage: import numpy
sage: # General routine to sample a Brownian motion on a time grid
sage: def rB(tvec):
....:     dt = numpy.diff(numpy.array(tvec))
....:     sdt = sqrt(dt)
....:     RNG = RealDistribution('gaussian', 1)
....:     dB = sdt * numpy.array([RNG.get_random_element() for j in range(len(dt))], dtype=float)
....:     B1 = numpy.array([normalvariate(0, sqrt(tvec[0]))], dtype=float)
....:     B = numpy.cumsum(numpy.concatenate((B1, dB)))
....:     return B
I then time it, testing with 1 million time points:
sage: tvec = srange(0,1e3,1e-3)
sage: time B = rB(tvec)
I get a time of 3.5 seconds on my machine, a standard laptop running Ubuntu, running Sage from the shell. Using prun, I see that roughly half the time is spent generating random numbers and the other half mainly constructing numerical arrays.
For comparison, the R code would be (here I assume that tvec[1]=0)
> tvec <- seq(0,1000,1e-3)
> system.time(B <- c(0,cumsum(rnorm(length(tvec)-1,sd=sqrt(diff(tvec))))))
which executes in 0.24 seconds. The Matlab code would be
>> tvec = linspace(0,1e3,1e6+1);
>> tic;B = [0,cumsum(randn(1,length(tvec)-1) .* sqrt(diff(tvec)))];toc
which runs in 0.06 seconds.
You see that my sage code is not only clumsier, it is also between 10 and 100 times slower, so I guess I am doing something wrong.
I would really appreciate it if you could point me in the direction of cleaner and faster code.
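For concreteness, the fully vectorized NumPy version I am after would look something like the sketch below (the name `rB_numpy` and the use of `numpy.random.default_rng` are illustrative choices, and I have not re-run the timings). It draws all the normals in one call, which removes the per-element Python calls that dominate the profile:

```python
import numpy as np

def rB_numpy(tvec, rng=None):
    """Brownian path sampled at the time points in tvec, fully vectorized."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(tvec, dtype=float)
    # prepend=0.0 makes the first increment t[0] - 0, playing the role of
    # the N(0, tvec[0]) starting value B1 in the code above.
    sdt = np.sqrt(np.diff(t, prepend=0.0))
    # All Gaussian draws in one call, scaled by sqrt(dt), then cumsum.
    return np.cumsum(rng.standard_normal(t.shape) * sdt)

tvec = np.arange(0.0, 1e3, 1e-3)  # one million time points
B = rB_numpy(tvec)
```

With tvec[0] = 0 the path starts exactly at 0, matching the assumption made in the R code.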
Cheers,
Uffe
Uffe_H_ThygesenMon, 28 Oct 2013 18:52:08 +0100https://ask.sagemath.org/question/10671/2D plot performancehttps://ask.sagemath.org/question/8420/2d-plot-performance/I have this function
sage: var('l')
sage: f = imag(I*(sqrt(-cos(l) + 1)*cosh(sin(1/2*l))
....:           - sqrt(2)*sinh(sqrt(sin(1/2*l)^2)))*sin(1/2*l)^3
....:          /((-cos(l) + 1)^(3/2)*e^(1/2*I*l)))
IMHO this is not something terribly complicated. I wanted to plot it, so I do
sage: time plot(f,l,0,10)
..nice plot..
Time: CPU 9.28 s, Wall: 9.41 s
**I.e. I was waiting almost 10 s** for this (on an Intel Core Duo P9500 @ 2.53 GHz laptop)!? I thought that maybe `fast_callable` would help:
sage: ff=fast_callable(f,vars=[l],domain=CC)
sage: time plot(ff,0,10)
..same nice plot..
Time: CPU 13.24 s, Wall: 13.50 s
So that's even worse.
Now I compare to *Mathematica*
In[8]:= Timing[Plot[Im[(Sin[l/2]^3*(I*Sqrt[1 - Cos[l]]*Cosh[Sin[l/2]] -
I*Sqrt[2]*Sinh[Sqrt[Sin[l/2]^2]]))/(E^((I/2)*l)*(1 -
Cos[l])^(3/2))], {l, 0, 10}]]
Out[8]= {0.019997, ..same plot again..}
**~500 times faster** ... what am I doing wrong?XaverThu, 27 Oct 2011 16:57:58 +0200https://ask.sagemath.org/question/8420/Performance: GAP code vs Sage codehttps://ask.sagemath.org/question/8406/perfomance-gap-code-vs-sage-code/I have a heavy procedure in GAP. I want to speed it up.
Is it a good idea to rewrite as much as I can in Sage and use GAP only where it is needed?
For example, rewrite the parts that work with lists, iterators, and files.
Or maybe the GAP implementation of such objects is faster (being written in C) than the Sage version.
P.S. Of course I hate the GAP language and want to write in Sage instead, but the performance of the calculations is very important for me. petRUShkaMon, 24 Oct 2011 10:07:05 +0200https://ask.sagemath.org/question/8406/Does SAGE support multithreading?https://ask.sagemath.org/question/8411/does-sage-support-multithreading/I have a heavy Sage program (with a lot of GAP parts) and I noticed that Sage used only one core while my other cores sat idle.
Are there techniques to force all my cores to work?
Maybe there is some Python library, or (the best case!) some automation... petRUShkaMon, 24 Oct 2011 10:00:31 +0200https://ask.sagemath.org/question/8411/Criteria new computerhttps://ask.sagemath.org/question/8130/criteria-new-computer/Hi, technology changes rapidly, and after a few years everyone has to make a choice about new hardware. Now it is my turn...
I could not find much guidance via Google besides some very specific questions and answers on ASKBOT, Sage-support, or Sage-devel. Most related are
ask.sagemath.org/question/546/good-computer-for-fast-computation and (ask.sagemath.org/question/352/difference-of-performance-bewteen-a-core-i5-and)
What are reasonable standards today? What is nice, but not overdone? What is student level, semi-serious, and top-of-the-bill? What are the criteria to buy now, or to wait for, say, 3-6-9 months? Is the Windows Experience Index handy? Many related questions.
Based on the above suggestions:
1) Sage is single-core: a dual-core or quad-core seems good enough, doesn't it?
2) 16 GB. Does Windows 7 + VMware (32-bit) or Ubuntu (64-bit) need it?
3) SSDs are still expensive. But does one increase speed? Value for money?
4) Are there different requirements for integer MIPS, floating-point MIPS, or specific applications?
5) Any other considerations, such as internet speed?
The same questions, but asked differently. What is nice, but not overdone, for:
a) Some integer, floating-point, or symbolic computations (calculations up to 15 minutes, memory usage < 30 MB)?
b) Same as a), but calculations of 8 hours (mainly Cython) and a lot to save?
A third approach is budget related. If your budget is $300, 400, 500, 750, 1000, ..., what are the main issues to consider? (N.B.: Europe is more expensive.) Is it mainly just for Sage, or is it a multipurpose machine?
Last: will a simple PC do, with the heavy stuff possible via sagenb / KAIST (and maybe the cloud)? Or should I wait for (multi-core) Cygwin?
Many different questions, but some guidance would be appreciated. For me, heavy tasks such as gaming, video, photo, and music are not relevant. Up till now it seems that an Intel Core i7-2600K (3.4 GHz, 8 MB), 16 GB of memory, and a 120 GB SSD + 60 GB SSD will do. I can use two OSes. But does it make sense? Thanks in advance for your support!
(It may be an idea to have a Sage page for this, because it seems to me that it is a general question that stays relevant over time for many.)
rolandWed, 25 May 2011 17:07:41 +0200https://ask.sagemath.org/question/8130/Mac OS vs Ubuntu for high performance computinghttps://ask.sagemath.org/question/8011/mac-os-vs-ubuntu-for-high-performance-computing/Hi, I am considering investing in a new computer on which I'll be using Sage (and R under Sage) a lot, doing computations that will be intensive at times. I am considering either Mac OS X Snow Leopard on a new quad-core MacBook, or the latest Ubuntu running on a partition of a quad-core Windows 7 machine. I would like to know what differences there might be running Sage on these two platforms, with special interest in performance. I'm just getting involved with Sage but have to make a decision about hardware and OS very soon. Any info would be much appreciated.
DougDug_the_Math_GuyFri, 18 Mar 2011 16:35:33 +0100https://ask.sagemath.org/question/8011/Time consumption of compiled fileshttps://ask.sagemath.org/question/7861/time-consuming-of-compiled-files/I have made two files, the first named "speedtest.sage" and the second "speedtest_compiled.spyx".
speedtest.sage:
def speedtest(n):
    f = 1
    for i in range(0, n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321
speedtest_compiled.spyx:
def speedtest_compiled(n):
    f = 1
    for i in range(0, n):
        f = 17*f^2 % 1234565432134567898765456787654323456787654345678765456787654321
They are identical (apart from the name of the speedtest function). I have this output on my notebook:
sage: load speedtest.sage
sage: load speedtest_compiled.spyx
Compiling speedtest_compiled.spyx...
sage: time speedtest(10^4)
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: time speedtest(10^5)
CPU times: user 0.14 s, sys: 0.02 s, total: 0.16 s
Wall time: 0.16 s
sage: time speedtest(10^6)
CPU times: user 1.39 s, sys: 0.10 s, total: 1.49 s
Wall time: 1.50 s
sage: time speedtest_compiled(10^4)
CPU times: user 0.08 s, sys: 0.00 s, total: 0.08 s
Wall time: 0.08 s
sage: time speedtest_compiled(10^5)
CPU times: user 7.87 s, sys: 0.00 s, total: 7.87 s
Wall time: 7.87 s
The interpreted Python code of speedtest() behaves absolutely normally, as you can see - BUT the compiled Cython code behaves strangely: not only is it extremely slow, its running time is not even close to linear in n (notice that I could not evaluate speedtest_compiled(10^6); it took too long). What is the explanation for this - and how do I compile correctly to gain the 10-1000x speedup I've read about?MustafaTue, 11 Jan 2011 08:00:50 +0100https://ask.sagemath.org/question/7861/