ASKSAGE: Sage Q&A Forum - Latest question feed
http://ask.sagemath.org/questions/
Copyright Sage, 2010. Some rights reserved under a Creative Commons license.
Mon, 23 Mar 2020 14:22:35 -0500

Can one now set a time limit on an operation in SageMath?
http://ask.sagemath.org/question/50331/can-one-now-set-a-timelimit-on-operation-in-sagemath/

My question is this: can one now (version 9) tell SageMath to set a timeout on some call and have SageMath raise an error (perhaps an exception) if the timeout expires before the call completes? I am mainly interested in `integrate` calls, some of which take a long time.
Here is an example of how this is done in [Maple](https://www.maplesoft.com/support/help/Maple/view.aspx?path=timelimit), whose `timelimit` **limits the amount of CPU time spent on a computation**:
    restart;
    integrand := (b*x + a)^(3/2)*(d*x + c)^(5/2)/x^7;
    try
        timelimit(300, int(integrand, x));
        print("Finished before time out, good");
    catch:
        print("oops, timed out");
    end try;
Is it possible to do the above directly in SageMath without me having to program the time limit myself using Processes and Queues? I am using SageMath 9 on Linux.
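For context, Sage's `cysignals` alarm (used in the update below) is built on the same Unix `SIGALRM` mechanism that plain Python exposes through the `signal` module. Here is a minimal pure-Python sketch of that technique; the names `TimeoutExpired` and `run_with_timeout` are my own, not part of any library:

```python
import signal
import time

class TimeoutExpired(Exception):
    """Raised when the alarm fires before the computation finishes."""
    pass

def _on_alarm(signum, frame):
    raise TimeoutExpired()

def run_with_timeout(func, seconds):
    """Run func(); raise TimeoutExpired if it takes longer than `seconds`."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return func()
    finally:
        signal.alarm(0)                          # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

# A fast call completes normally; a slow one is interrupted.
fast = run_with_timeout(lambda: 2 + 2, 5)

try:
    run_with_timeout(lambda: time.sleep(10), 1)
    timed_out = False
except TimeoutExpired:
    timed_out = True
```

Note this is Unix-only, and `SIGALRM` can only interrupt code running in the Python process itself, which is why interrupting an external subprocess such as FriCAS is a harder problem.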
**update**
I am getting a stack dump when implementing the alarm method shown in the answer below. I am not sure why that is. Maybe I am making an error somewhere. Below is an MWE to reproduce it.
I created a file `build_fricas_new_timeout_one_integral.sage` with this content:
    #!/usr/bin/env sage
    from sage.all import *
    from cysignals.alarm import alarm, AlarmInterrupt, cancel_alarm

    var('x a b')

    def doTheIntegration():
        integrand = tan(x)/(a^3 + b^3*tan(x)^2)^(1/3)
        fricas.setSimplifyDenomsFlag(fricas.true)
        anti = integrate(integrand, x, algorithm="fricas")
        return anti

    try:
        alarm(20)
        anti = doTheIntegration()
    except AlarmInterrupt:
        print("Timed out")
    else:
        print("Completed OK, anti=", anti)
        cancel_alarm()
Then I called it with `sage ./build_fricas_new_timeout_one_integral.sage` and got this on the screen:
    >sage ./build_fricas_new_timeout_one_integral.sage
    Interrupting FriCAS...
    Traceback (most recent call last):
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 986, in _eval_line
        E.expect(self._prompt)
      File "/usr/lib/python3.8/site-packages/pexpect/spawnbase.py", line 343, in expect
        return self.expect_list(compiled_pattern_list,
      File "/usr/lib/python3.8/site-packages/pexpect/spawnbase.py", line 372, in expect_list
        return exp.expect_loop(timeout)
      File "/usr/lib/python3.8/site-packages/pexpect/expect.py", line 169, in expect_loop
        incoming = spawn.read_nonblocking(spawn.maxread, timeout)
      File "/usr/lib/python3.8/site-packages/pexpect/pty_spawn.py", line 500, in read_nonblocking
        if (timeout != 0) and select(timeout):
      File "/usr/lib/python3.8/site-packages/pexpect/pty_spawn.py", line 450, in select
        return select_ignore_interrupts([self.child_fd], [], [], timeout)[0]
      File "/usr/lib/python3.8/site-packages/pexpect/utils.py", line 143, in select_ignore_interrupts
        return select.select(iwtd, owtd, ewtd, timeout)
      File "src/cysignals/signals.pyx", line 320, in cysignals.signals.python_check_interrupt
    cysignals.signals.AlarmInterrupt

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "./build_fricas_new_timeout_one_integral.sage.py", line 22, in <module>
        anti = doTheIntegration()
      File "./build_fricas_new_timeout_one_integral.sage.py", line 17, in doTheIntegration
        anti=integrate(integrand,x,algorithm="fricas")
      File "/usr/lib/python3.8/site-packages/sage/misc/functional.py", line 753, in integral
        return x.integral(*args, **kwds)
      File "sage/symbolic/expression.pyx", line 12391, in sage.symbolic.expression.Expression.integral (build/cythonized/sage/symbolic/expression.cpp:64575)
      File "/usr/lib/python3.8/site-packages/sage/symbolic/integration/integral.py", line 927, in integrate
        return integrator(expression, v, a, b)
      File "/usr/lib/python3.8/site-packages/sage/symbolic/integration/external.py", line 386, in fricas_integrator
        result = ex.integrate(v)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/interface.py", line 680, in __call__
        return self._obj.parent().function_call(self._name, [self._obj] + list(args), kwds)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/interface.py", line 601, in function_call
        return self.new(s)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/interface.py", line 370, in new
        return self(code)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/interface.py", line 296, in __call__
        return cls(self, x, name=name)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 1471, in __init__
        self._name = parent._create(value, name=name)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/interface.py", line 501, in _create
        self.set(name, value)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/fricas.py", line 589, in set
        output = self.eval(cmd, reformat=False)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/fricas.py", line 847, in eval
        output = Expect.eval(self, code, strip=strip,
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 1384, in eval
        return '\n'.join([self._eval_line(L, allow_use_file=allow_use_file, **kwds)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 1384, in <listcomp>
        return '\n'.join([self._eval_line(L, allow_use_file=allow_use_file, **kwds)
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 1017, in _eval_line
        self._keyboard_interrupt()
      File "/usr/lib/python3.8/site-packages/sage/interfaces/expect.py", line 1039, in _keyboard_interrupt
        raise KeyboardInterrupt("Ctrl-c pressed while running %s" % self)
    KeyboardInterrupt: Ctrl-c pressed while running FriCAS
    >
Am I doing something wrong in using the alarm method?

Nasser, Mon, 23 Mar 2020 14:22:35 -0500
http://ask.sagemath.org/question/50331/

.is_galois computation
http://ask.sagemath.org/question/49620/is_galois-computation/

Given an irreducible polynomial $f$, Sage can determine whether the field $K = \mathbb{Q}(f)$ is Galois via `K.is_galois()`. This works well if $f$ is of low degree, say 1-20, but when $f$ is of large degree, say 100 or more, it is very time consuming.
For $K$ to be Galois, it must have the same degree as $f$ and because we would expect (at random) $f$ to have Galois group $S_n$, $\text{Gal}(K/\mathbb{Q})$ will be very large. So in theory, determining 'Is Galois Y/N' should run much faster than actually computing the Galois group - which is very hard.
How does Sage's `.is_galois` work? Does it try to compute the Galois group and compare sizes, or does it use some other method? If, while computing $\text{Gal}(K/\mathbb{Q})$, it finds a group of size greater than $\deg f$, does it automatically stop and return `False`? If not, is there a way to force such behavior using features already built into Sage?

nmbthr, Tue, 21 Jan 2020 09:04:23 -0600
http://ask.sagemath.org/question/49620/

What is the time complexity of rewriting polynomials?
http://ask.sagemath.org/question/47777/what-is-the-time-complexity-of-rewriting-polynomials/

What is the time complexity of rewriting (i.e. simplifying) polynomials over a 2-valued field? What properties of the initial expression influence the time it takes Sage to arrive at the simplest possible expression?

JP_sagemath, Fri, 06 Sep 2019 08:51:03 -0500
http://ask.sagemath.org/question/47777/

Is there any way to have a cell tell you how long it took to run in the Jupyter notebook?
http://ask.sagemath.org/question/47269/is-there-any-way-to-have-a-cell-tell-you-how-long-it-took-to-run-in-the-jupyter-notebook/

The question is basically all in the title. I'm working on a project in the Jupyter notebook, and I have some cells that have longer run times.
Sometimes I let these run and come back to check on them later; however, it would be nice to know exactly how long those cells ran for.

sum8tion, Fri, 26 Jul 2019 16:31:17 -0500
http://ask.sagemath.org/question/47269/

Why is exponentiation of points on an elliptic curve so fast?
http://ask.sagemath.org/question/46705/why-is-exponentiation-of-points-on-elliptic-curve-so-fast/

I am working with elliptic curves in SageMath. I was collecting benchmarks for the group operation and for exponentiation of points on the NIST P-256 elliptic curve. When I perform a group operation on 2 points on the curve, it takes roughly 2 microseconds. When I perform an exponentiation of a point with a random exponent, it takes only 3 microseconds. How is this even possible? Since I am exponentiating with a 256-bit value, this should take at least the time of 256 group operations, which is more than 1 ms. I am worried that my code is wrong!
    p = 115792089210356248762697446949407573530086143415290314195533631308867097853951
    order = 115792089210356248762697446949407573529996955224135760342422259061068512044369
    b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
    F = GF(p)
    E = EllipticCurve(F, [-3, b])
    runs = 10000
    G = E.abelian_group()
    F2 = GF(order)
    exponent = [F2.random_element() for i in range(runs)]
    # the random points must be defined before both timing loops use them
    e1 = [G.random_element() for i in range(runs)]
    e2 = [G.random_element() for i in range(runs)]

    t1 = time()
    for i in range(runs):
        e = Integer(exponent[i]) * e2[i]
    t2 = time()
    print("Time per operation = ", (t2 - t1)/runs, " seconds")

    t1 = time()
    for i in range(runs):
        e = e1[i] + e2[i]
    t2 = time()
    print("Time per operation = ", (t2 - t1)/runs, " seconds")

panther, Wed, 29 May 2019 22:50:46 -0500
http://ask.sagemath.org/question/46705/

Quicker expansion of multivariate polynomials
http://ask.sagemath.org/question/46098/quicker-expansion-of-multivariate-polynomials/

Hi everyone,
I'm working on a project that involves first generating polynomials, then checking to see if certain terms exist within them. Essentially, I have a function `build_polynomial` that builds polynomials in un-expanded form according to a certain set of specifications. The polynomials produced by this function usually look something like this:
    (x1 - x2 - x3 + y1 - y2 + y3 - y4)*(x1 - x2)*(x1 - x3)*(x1 + y1 - y2)*(x2 + x3 - y1 + y2 - y3 + y4)*(x2 + x3 + y2 - y3)*(x2 + x3 - y3 + y4)*(x2 + x3)*(x2 - x3)*(x2 + y2 - y3)*(y1 - y2)*(y1 - y3)*(y1 - y4)*(y2 - y3)*(y2 - y4)*(y3 - y4)
This is a particularly small example; the polynomials grow quite rapidly as the problem grows in size, but it should give an idea of what they generally look like.
Next, my program must check to see if specific terms exist in the expanded form of the polynomial. Currently, I do this by first expanding the polynomial with Sage's `expand` function, then using the `coefficient` function with the specific terms as arguments.
My issue is that as the polynomial grows, the completion time of the `expand` function also grows quite rapidly. For example, at medium-to-large complexity, the polynomial can take days to be expanded.
Is there any way around this? Possibly a more efficient function that anybody knows of? I know that Sage is not really the best option for large computations like this, but I'm looking for a way to at least slightly improve performance while I research a longer-term solution.
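One direction worth trying, on the assumption that the factors above live in Sage's symbolic ring: move the computation into a multivariate `PolynomialRing` over `QQ`, where multiplication already produces the expanded sparse form and coefficients can be read off directly (in Sage, something like `f.monomial_coefficient(...)`, if I recall the API correctly). To illustrate why that sparse representation is fast, here is a minimal pure-Python sketch of exponent-vector arithmetic; the helper `poly_mul` and the dict encoding are my own, not Sage's API:

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two sparse multivariate polynomials represented as
    {exponent_tuple: coefficient} dicts, dropping zero terms."""
    out = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            # exponents add when monomials are multiplied
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] += c1 * c2
    return {e: c for e, c in out.items() if c != 0}

# Variables (x1, x2); a tuple (i, j) encodes the monomial x1^i * x2^j.
p = {(1, 0): 1, (0, 1): -1}      # x1 - x2
q = {(1, 0): 1, (0, 1): 1}       # x1 + x2
r = poly_mul(p, q)               # expands to x1^2 - x2^2
coeff = r.get((2, 0), 0)         # coefficient of x1^2
```

Each product of factors stays expanded as it is built, so there is no separate `expand` pass at the end; this is essentially what working in a polynomial ring buys you over symbolic expressions.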
Thanks!

trenzafeeds, Thu, 11 Apr 2019 12:41:03 -0500
http://ask.sagemath.org/question/46098/

Virtual machine running faster than source build. Why?
http://ask.sagemath.org/question/33932/virtual-machine-running-faster-than-source-build-why/

Hi all, a fairly general question. I've installed Sage both on a virtual machine and from source. I would expect the source build to be the more powerful of the two, since a virtual machine can use only some, but not all, of the system's power. Yet when implementing a primality-testing algorithm (3000-digit primes), the virtual machine is significantly faster, and is so on other large computations as well. Can anyone give a reason for this? I'm running on El Capitan (Mac), with an Intel Core i5 processor and 12 GB RAM. Additionally, the amount of RAM being consumed is nowhere near the full amount available.

Pat, Sun, 26 Jun 2016 13:31:00 -0500
http://ask.sagemath.org/question/33932/

Execution time vs. computation time in MILP
http://ask.sagemath.org/question/30495/execution-time-vs-computation-time-in-milp/

Hi,
I am using GLPK in Sage 6.9 to solve a MILP problem. I am also using `solver_parameter("timelimit")` to terminate the solve after the specified time. According to the [SageMath documentation](http://doc.sagemath.org/html/en/reference/numerical/sage/numerical/mip.html), `"timelimit" defines the maximum time spent on a computation. Measured in seconds.` However, I am a bit confused by this definition, because:
I am solving `the same problem` with `the same timelimit value` for `different input values` (generated randomly using a uniform distribution) while printing the `execution time` of each. I have noticed `different execution times` for `different inputs` (ranging from 1 min and 5 mins to even 10 mins).
Can someone please tell me how the `computation time` is calculated for the timelimit? Why do they differ even though I used the same problem with the same timelimit? Is it related to the size of the input, as that is the only difference between the runs? Is there any relation between `computation` and `execution` times?
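One way to probe the difference is to measure wall-clock time around the solve yourself and compare it with the configured limit: the solver's internal timer presumably counts only its own search, not model construction or Python overhead. A sketch, where `solve_model` is a hypothetical stand-in for `p.solve()` on a `MixedIntegerLinearProgram` whose backend was given `solver_parameter("timelimit", ...)`:

```python
import time

def timed_solve(solve_model, timelimit):
    """Time a solve call with wall-clock time and compare it to the
    solver's configured time limit (both in seconds)."""
    start = time.perf_counter()
    result = solve_model()
    elapsed = time.perf_counter() - start
    # Wall-clock time can exceed the solver's internal limit because
    # model building and data transfer fall outside the solver's timer.
    over_limit = max(0.0, elapsed - timelimit)
    return result, elapsed, over_limit

# Toy stand-in: a "solve" that finishes well within a 60-second limit.
result, elapsed, over_limit = timed_solve(lambda: sum(range(1000)), 60)
```

If `over_limit` varies a lot between inputs with the same `timelimit`, that gap is overhead outside the solver's own clock, which would explain execution times differing from the configured computation limit.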
Cheers,
Aissan
Aissan Dalvandi, Thu, 05 Nov 2015 20:24:44 -0600
http://ask.sagemath.org/question/30495/