ASKSAGE: Sage Q&A Forum - RSS feed
https://ask.sagemath.org/questions/
Q&A Forum for Sage. Copyright Sage, 2010. Some rights reserved under Creative Commons license. Wed, 13 Dec 2023 21:28:48 +0100

Where does sagemath's speed in calculating gcd come from?
https://ask.sagemath.org/question/74828/where-does-sagemaths-speed-in-calculating-gcd-come-from/

Hey,
I was wondering: how can Sage calculate gcds without, seemingly, ever computing the huge numbers at stake?
For instance,
sage: from Crypto.Util.number import long_to_bytes, inverse, bytes_to_long
sage: from Crypto.Hash import SHA256
sage: e = 65537
sage: gcd(bytes_to_long(SHA256.new(data=b'first').digest())^e-35211653423, bytes_to_long(SHA256.new(data=b'second').digest())^e-156535482153)
is computed nearly instantly, while it seems to take forever for Sage to calculate `bytes_to_long(SHA256.new(data=b'first').digest())^e` on its own.
Is it perhaps part of symbolic computation?
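An editorial aside, not part of the original post: a plain-Python sketch suggests the big powers really are computed in full, just very quickly. The exponent is scaled down here so this runs in well under a second, and the 256-bit constants are stand-ins, not the actual hash values.

```python
import math

# Stand-ins for bytes_to_long(SHA256(...).digest()): any 256-bit integers.
a = 2**256 - 189
b = 2**256 - 357
e = 1025  # scaled down from 65537 so this runs in well under a second

# The powers are computed in full; binary exponentiation on bignums is
# fast even though each result here has about 262,000 bits.
x = a**e - 35211653423
y = b**e - 156535482153

# gcd on integers this size is also quick; Sage additionally benefits from
# GMP's subquadratic gcd on the full-size multi-million-bit operands.
g = math.gcd(x, y)
print(x.bit_length(), g)
```

What tends to be genuinely slow is converting such a number to its decimal string for display, which is presumably what "take forever" refers to.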
Thanks :)

Monapp, Wed, 13 Dec 2023 21:28:48 +0100
https://ask.sagemath.org/question/74828/

xsrange vs sxrange?
https://ask.sagemath.org/question/73917/xsrange-vs-sxrange/

Hi,
In Sage 9.8 on Ubuntu, xsrange is faster than sxrange. Why?
Roland

roland, Tue, 17 Oct 2023 20:27:48 +0200
https://ask.sagemath.org/question/73917/

How to use all cores in solve() method?
https://ask.sagemath.org/question/64672/how-to-use-all-cores-in-solve-method/

I'm trying to find the optimal weights of stocks that maximise return per unit of volatility.
I'm able to solve it for two and three stocks within one second, but when I go to 4 stocks the computer never returns a result, even after more than an hour. Any idea how to make it fast enough? I want to write a program for n stocks, and I can't get beyond three. I also tried SymPy: it handles two stocks within a minute but hangs for too long on three, so I don't have much hope for it either.
I do have access to cloud computers and can create instances with hundreds of cores and thousands of GB of RAM. I tried it on a Windows server, but it raises a runtime error almost immediately; my own system runs indefinitely. In any case, I suppose my code uses only one core for solving by default. All the steps are quick; it only hangs on the final solve() call. Is there a way to utilise the other cores so it finishes in reasonable time?
Here's the algorithm I'm using. r and v are the mean return and variance of the portfolio; r_i, sigma_i, sigma_ij, and w_i are the means, standard deviations, covariances, and weights respectively. Equations eq1, eq2, and eq3 were derived by hand by maximising mean/standard_deviation.
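An editorial aside, not part of the original question: this particular objective (return divided by volatility, with weights summing to 1) has a classical closed form, the tangency portfolio w proportional to Sigma^-1 mu, so a purely numerical answer can skip symbolic solve() entirely and scales to any n. A sketch with made-up data:

```python
import numpy as np

# Made-up data: four uncorrelated assets (diagonal covariance) so the
# numbers are easy to check by hand.
mu = np.array([0.08, 0.10, 0.12, 0.15])   # mean returns r_i
cov = np.diag([0.04, 0.09, 0.16, 0.25])   # covariance matrix (sigma_i**2)

w = np.linalg.solve(cov, mu)   # direction maximising return per volatility
w = w / w.sum()                # rescale so the weights sum to 1
print(w.round(4))
```

Since the objective r/sqrt(v) is unchanged by scaling the weights, its maximiser can be found without the budget constraint and then rescaled onto it; constraints such as w_i >= 0 would call for a numerical optimiser instead.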
r1, r2, r3, r4, s1, s2, s3, s4, s12, s13, s14, s23, s24, s34, w1, w2, w3, w4 = var('r_1, r_2, r_3, r_4, sigma_1, sigma_2, sigma_3, sigma_4, sigma_12, sigma_13, sigma_14, sigma_23, sigma_24, sigma_34, w_1, w_2, w_3, w_4')
r = r1*w1 + r2*w2 + r3*w3 + r4*(1-w1-w2-w3)
v = (w1**2*s1**2 + w2**2*s2**2 + w3**2*s3**2 + (1-w1-w2-w3)**2*s4**2
     + 2*s12*w1*w2 + 2*s13*w1*w3 + 2*s14*(1-w1-w2-w3)*w1
     + 2*s23*w2*w3 + 2*s24*w2*(1-w1-w2-w3) + 2*s34*w3*(1-w1-w2-w3))  # every covariance cross term carries a factor of 2
dr1 = derivative(r, w1)
dr2 = derivative(r, w2)
dr3 = derivative(r, w3)
dv1 = derivative(v, w1)
dv2 = derivative(v, w2)
dv3 = derivative(v, w3)
eq1 = (dv1/dr1 == 2*v/r)
eq2 = (dv2/dr2 == 2*v/r)
eq3 = (dv3/dr3 == 2*v/r)
eq4 = (w1+w2+w3+w4 == 1)
sol = solve((eq1, eq2, eq3, eq4), (w1, w2,w3, w4))
show(sol)

anmolspace, Fri, 28 Oct 2022 18:15:57 +0200
https://ask.sagemath.org/question/64672/

Is there a more efficient way to compute the first digit of a number?
https://ask.sagemath.org/question/30628/is-there-a-more-efficient-way-to-compute-the-first-digit-of-a-number/

I need to compute the first digit of some large numbers. So far I've been converting them to strings, although it can be somewhat slow. For example:
sage: %timeit str(2^57885161)[0]
1 loops, best of 3: 3.07 s per loop
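For reference, an editorial sketch of the standard log-based trick (not from the original thread): the leading digit of b**n is determined by the fractional part of n*log10(b), so the huge decimal string is never needed. Decimal supplies enough precision for exponents in the tens of millions.

```python
from decimal import Decimal, getcontext

def first_digit_pow(b, n):
    """Leading decimal digit of b**n, without building the full number."""
    getcontext().prec = 50              # plenty for exponents up to ~1e40
    t = Decimal(n) * Decimal(b).ln() / Decimal(10).ln()
    frac = t - int(t)                   # fractional part of n*log10(b)
    return int(Decimal(10) ** frac)     # 10**frac lies in [1, 10)

print(first_digit_pow(2, 57885161))     # -> 5
```

This answers in microseconds rather than seconds, and works for any base and exponent, not just powers of a small base.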
Is there a faster way to do this? For my purposes you can assume that all the numbers involved are powers of some small base.

A.P., Wed, 11 Nov 2015 11:10:44 +0100
https://ask.sagemath.org/question/30628/

Sage 6.7 is very slow on OSX 10.9
https://ask.sagemath.org/question/28680/sage-67-is-very-slow-on-osx-109/

I recently installed Sage 6.7 from source on my home computer, running OSX 10.9 (Mavericks). It is sloooooow. Just starting Sage from a terminal window takes a good two minutes; something simple like using the ? feature can take up to a minute; and serious power computation is hopeless. My computer has plenty of RAM and disk space, so I don't think that's the problem. I had the same problems with the previous version (5.something), but on other computers the performance is much better. I would be very grateful for suggestions on how to speed things up, particularly from others who have encountered this issue. Thanks!
Jeremy Martin, Tue, 21 Jul 2015 17:03:17 +0200
https://ask.sagemath.org/question/28680/

Fastest way to call special function (elliptic integral) from cython code for gsl ode_solver()
https://ask.sagemath.org/question/11058/fastest-way-to-call-special-function-elliptic-integral-from-cython-code-for-gsl-ode_solver/

I am using **sage.gsl.ode.ode_solver** to solve an ODE, overloading the **c_f()** function as detailed [here](http://www.sagemath.org/doc/reference/calculus/sage/gsl/ode.html).
The code is working using an approximate **c_f()** function. For the exact one, I need the complete elliptic integrals. These special functions are not part of the libc.math library, but they seem to be available through a couple of packages in Sage; I found I could call them as **sage.all.elliptic_ec(k)** and **sage.all.elliptic_kc(k)**.
This causes the code to slow down by a factor of about 10^3. I know those special functions are expensive, but timing elliptic_ec() in a separate cell reports 0 for the time, so they aren't *that* slow.
I'm wondering if the problem is mostly that I'm calling a Python function from Cython and losing the speedup because of that? Is there a better way to do it? [There are](https://www.gnu.org/software/gsl/manual/html_node/Legendre-Form-of-Complete-Elliptic-Integrals.html#Legendre-Form-of-Complete-Elliptic-Integrals) C libraries available with those functions - would it be better to import the files into my project and call them from the Cython code? (Not sure how to do that...) Sage already seems to ship some GSL packages - does it also expose those special functions through GSL? Is there a different package with a faster form of those functions?
Disclaimer: I've played around with sage a bit in the past, but this is the first intensive numerical simulation I've attempted with it, so sage/python/cython are all relatively new to me. I'm more accustomed to Mathematica, Matlab/Octave and c++
Here are (I believe) the relevant bits of code:
%cython
from libc.math cimport pow, sqrt   # sqrt is used in coilFieldR below
cimport sage.gsl.ode
import sage.gsl.ode
import sage.all                    # needed for sage.all.elliptic_ec / elliptic_kc
include 'gsl.pxi'
cdef class zeeman_acceleration(sage.gsl.ode.ode_system):
... other stuff ...
cdef double coilFieldR(self, double rp, double zp, double coilR): # T
cdef double B0, alpha, beta, Q, k
... some basic arithmetic ...
if rp > coilR/10e4 :
gamma = zp/rp
EC = sage.all.elliptic_ec(k)
EK = sage.all.elliptic_kc(k)
Br = B0*gamma/(self.pi*sqrt(Q))*(EC*(1+alpha**2+beta**2)/(Q-4*alpha)-EK)
return Br
return 0
... other stuff (including c_f() function that calls coilField...
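An editorial aside, not from the original post: SciPy ships C implementations of the complete elliptic integrals, and since SciPy 0.19 they can even be cimported in Cython via scipy.special.cython_special, which avoids the Python-call overhead entirely. A plain-Python sketch; beware conventions, since scipy.special.ellipk/ellipe take the parameter m, commonly defined as the squared modulus k**2:

```python
from scipy.special import ellipe, ellipk

m = 0.25                      # the parameter m; check whether your k is sqrt(m)
E, K = ellipe(m), ellipk(m)   # complete elliptic integrals E(m), K(m)
print(E, K)
```

In Cython, `from scipy.special.cython_special cimport ellipe, ellipk` gives C-level versions callable inside the ODE right-hand side. GSL also provides gsl_sf_ellint_Ecomp and gsl_sf_ellint_Kcomp (which take the modulus k); whether Sage's gsl.pxi declares them may vary, so a small cdef extern block might be needed.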
argentum2f, Fri, 27 Jun 2014 12:55:47 +0200
https://ask.sagemath.org/question/11058/

Fastest way of running sage?
https://ask.sagemath.org/question/10525/fastest-way-of-running-sage/

Hi,
So far I have used 3 different "versions" of sage:
1. the online sage notebook: http://www.sagenb.org/
2. the sage notebook that I downloaded and run on Windows 7 thru VirtualBox on an intel core 2duo
3. the sage in the cloud found here: https://cloud.sagemath.com/
Using "time a = factorial(1000000)" I get CPU and wall times of (0.41 s, 0.41 s) for 1, and (2.13 s, 9.53 s) for 2. For 3, "time" apparently does not work, but it seems slower than 1.
Questions:
Why is 1 significantly faster than 2?
How can I test 3?
Is there a way to time entire programs that gives the time for every step?
What is the fastest way to run sage?
Thanks!
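On the step-by-step timing question, an editorial sketch (not from the original post): Python's cProfile times every function call in a program, and it works in any of the three setups; the workload function here is a made-up stand-in.

```python
import cProfile, pstats, io

def work():
    # toy workload standing in for a real Sage program
    return sum(i * i for i in range(1, 2001))

pr = cProfile.Profile()
pr.enable()
result = work()
pr.disable()

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats('cumulative').print_stats(5)
print(result)
print(out.getvalue())
```

In notebook environments the %prun cell magic typically wraps the same profiler, giving per-function times without any boilerplate.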
Alexandru Papiu, Sun, 08 Sep 2013 18:28:34 +0200
https://ask.sagemath.org/question/10525/

speed up execution time of script (Cython or other...)
https://ask.sagemath.org/question/10407/speed-up-execution-time-of-script-cython-or-other/

Hi experts!
I have a TOSHIBA laptop with the following specifications:

4x Intel Core i3 CPU M350, 2.27 GHz
3 GB RAM
I wrote a Monte Carlo simulation script that works fine, but I would like to speed it up.
How can I speed up the execution time of my algorithm?
I've heard about Cython, but I don't know what it is or whether it fits my goal (speeding up an algorithm).
Please explain step by step (I'm a total newbie...).
Waiting for your answers.
Thanks a lot.
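An editorial aside (the original script isn't shown, so this is a generic illustration): before reaching for Cython, vectorising the inner loop with numpy often gives a large speedup on its own. The classic toy Monte Carlo, estimating pi by sampling points in the unit square:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.random(n)
y = rng.random(n)
inside = np.count_nonzero(x * x + y * y < 1.0)  # points in the quarter circle
pi_est = 4.0 * inside / n
print(pi_est)
```

The idea generalises: replace the per-sample Python loop with array operations over all samples at once; Cython is the next step only if some per-sample logic can't be expressed that way.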
mresimulator, Sat, 03 Aug 2013 10:46:18 +0200
https://ask.sagemath.org/question/10407/

speed and order of operations with CDF
https://ask.sagemath.org/question/9545/speed-and-order-of-operations-with-cdf/

Why is the absolute value of the exponential of z:
f = fast_callable(exp(z).abs(),domain=CDF,vars='z')
about twice as fast as the exponential of the real part of z:
g = fast_callable(exp(z.real()), domain=CDF, vars='z')
Should I ignore this kind of thing in sage, or is there a good reason in this particular case?
***
**Data:**
z = var('z')
f = fast_callable(exp(z).abs(),domain=CDF,vars='z')
g = fast_callable(exp(z.real()), domain=CDF, vars='z')
timeit('f(4+2*I)')
625 loops, best of 3: 2.94 µs per loop
timeit('g(4+2*I)')
625 loops, best of 3: 5.87 µs per loop
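An editorial note: the two expressions are mathematically identical, since |e^z| = e^{Re z}, which can be checked outside Sage with cmath. So the timing gap above presumably reflects how fast_callable compiles and evaluates each expression tree, not a difference in the quantity computed.

```python
import cmath, math

z = 4 + 2j
lhs = abs(cmath.exp(z))   # |e^z|
rhs = math.exp(z.real)    # e^{Re z}
print(lhs, rhs)
```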
***
**Non-fast_callable times, in case you are interested:**
z = var('z')
fs(z) = exp(z).abs()
gs(z) = exp(z.real())
timeit('fs(4+2*I)')
625 loops, best of 3: 1.02 ms per loop
timeit('gs(4+2*I)')
625 loops, best of 3: 988 µs per loop

marco, Sun, 18 Nov 2012 19:37:08 +0100
https://ask.sagemath.org/question/9545/

Possible speed improvement for interactive complex plot
https://ask.sagemath.org/question/9544/possible-speed-improvement-for-interactive-complex-plot/

The following is a simple function which plots the real part of a complex polynomial f, the zeros of its derivative, and implicitly defined curves passing through those zeros. The whole function depends only on a parameter Z in the original polynomial f.
I would like to know if the speed of this can be dramatically improved (even at the expense of accuracy) so that I can make it interactive and vary the parameter Z. It seems to me that it should be possible to make this almost instantly responsive.
I = CDF.0 # Define i as complex double
Z = 1 + (-0.1)*I # Here is the complex number parameter
var('z')
f = lambda z: (z^3)/3 - Z*z # A complex polynomial
P=complex_plot(lambda z: f(z).real(), (-5,5), (-5,5), plot_points=200) # Plot the real part
Roots=derivative(f(z),z).roots(ring=CDF) # Find the roots of derivative(f)
R = [(Roots[i][0].real(),Roots[i][0].imag()) for i in range(len(Roots))]
F(x,y) = f(x+I*y)
COL=["black", "blue", "purple", "green", "orange"]
C=[implicit_plot((F-F(R[n][0],R[n][1])).imag(), (-5, 5), (-5, 5), color=COL[n]) for n in range(len(R))]
P+point(R,zorder=40) + point((Z.real(),Z.imag()),zorder=40)+ sum(C) # Plot the real part of f, and the roots, and the parameter, and all the implicit curves given by imaginary(f) = imaginary(root)
This function ran in about 13 seconds on my (recent) iMac and outputs this image:

![example](https://dl.dropbox.com/u/4747602/sage0.png)
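An editorial sketch (plain numpy, not Sage's plotting): the main speed lever for interactivity is to evaluate the polynomial once on a fixed grid and reuse that array for the real-part image and every contour level. The names below are illustrative.

```python
import numpy as np

Zp = 1 - 0.1j                        # the complex parameter (Z in the post)
f = lambda z: z**3 / 3 - Zp * z
xs = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xs, xs)
W = f(X + 1j * Y)                    # one vectorised evaluation on the grid

crit = np.roots([1, 0, -Zp])         # zeros of f'(z) = z**2 - Zp
levels = sorted(f(c).imag for c in crit)
print(W.shape, len(levels))
```

W.real gives the image data, and contours of W.imag at the two levels above give the implicit curves; redrawing from the cached array when Zp changes is then close to instant in, say, matplotlib's imshow/contour.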
marco, Sun, 18 Nov 2012 00:06:48 +0100
https://ask.sagemath.org/question/9544/

Speeding up matrix multiplication?
https://ask.sagemath.org/question/8438/speeding-up-matrix-multiplication/

I'm currently trying to write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time-consuming) operation, performed tons of times, is simply taking the product of a large dense matrix, say $M$, with a vector $v$, both with integral entries.
More precisely, let $p$ be a (relatively small) prime (think $p=11$) and $N$ some integer (think 100). I have an $N$ by $N$ matrix and am interested in quickly computing the product $M \cdot v$ modulo $p^N$.
I am simply using the intrinsic SAGE command of multiplying a matrix by a vector, and I was surprised to see that working with matrices over ${\bf Z}/p^n{\bf Z}$ was much (i.e. 10 times) slower than working with matrices over ${\bf Z}$.
My question: is there a faster way to do this computation than using SAGE's intrinsic matrix times a vector command over ${\bf Z}$?
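An editorial sketch in plain Python of the effect the post describes: doing the product over ZZ and reducing modulo $p^N$ once at the end gives the same answer as reducing every intermediate product (sizes scaled down here; in Sage the analogous comparison is matrices over ZZ versus Zmod(p^N)).

```python
import random

p, N = 11, 20                  # scaled down from the post's p = 11, N ~ 100
mod = p ** N
random.seed(1)
M = [[random.randrange(mod) for _ in range(N)] for _ in range(N)]
v = [random.randrange(mod) for _ in range(N)]

# product over ZZ, with a single reduction at the end
w = [sum(a * b for a, b in zip(row, v)) % mod for row in M]

# the same product, reducing after every multiplication
w_slow = [sum(a * b % mod for a, b in zip(row, v)) % mod for row in M]
print(w == w_slow)  # -> True
```

Deferring the reduction lets the integer layer use fast bignum multiplication throughout, which is one plausible reason the ZZ path is faster than the Zmod(p^n) path.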
Robert Pollack, Fri, 04 Nov 2011 17:46:03 +0100
https://ask.sagemath.org/question/8438/

performance: GAP code vs SAGE code
https://ask.sagemath.org/question/8406/perfomance-gap-code-vs-sage-code/

I have a heavy procedure in GAP. I want to speed it up.
Is it a good idea to rewrite as much of it as I can in Sage and use GAP only where it is needed?
For example, rewriting the parts that work with lists, iterators, and files.
Or maybe GAP's realisation of such objects is faster (being written in C) than the Sage version.
P.S. Of course I hate the GAP language and would rather write in Sage, but the performance of the calculations is very important to me.

petRUShka, Mon, 24 Oct 2011 10:07:05 +0200
https://ask.sagemath.org/question/8406/

Does SAGE support multithreading?
https://ask.sagemath.org/question/8411/does-sage-support-multithreading/

I have a heavy SAGE program (with a lot of GAP parts) and I noticed that SAGE used only one core while my other cores sat idle.
Are there techniques to force all my cores to work?
Maybe there is some Python library or (the best case!) some automation...

petRUShka, Mon, 24 Oct 2011 10:00:31 +0200
https://ask.sagemath.org/question/8411/
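Editorial note: Sage does ship a @parallel decorator for forking a function across cores; outside of that, the Python standard library's multiprocessing achieves the same effect. A minimal sketch (the function here is a stand-in for a heavy, independent computation):

```python
from multiprocessing import Pool

def square(n):
    # stand-in for an expensive, independent computation
    return n * n

if __name__ == '__main__':
    with Pool(processes=4) as pool:        # roughly one worker per core
        results = pool.map(square, range(8))
    print(results)
```

In Sage the analogue is to decorate the function with @parallel, call it on a list of inputs, and iterate over the results. Threads alone generally won't help here because of Python's GIL, so process-based parallelism is the usual route.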