ASKSAGE: Sage Q&A Forum - Latest question feed
https://ask.sagemath.org/questions/
Q&A Forum for Sage (en). Copyright Sage, 2010. Some rights reserved under a Creative Commons license.
Wed, 11 Nov 2015 04:10:44 -0600

Is there a more efficient way to compute the first digit of a number?
https://ask.sagemath.org/question/30628/is-there-a-more-efficient-way-to-compute-the-first-digit-of-a-number/
I need to compute the first digit of some large numbers. So far I've been converting them to strings, although this can be somewhat slow. For example:
sage: %timeit str(2^57885161)[0]
1 loops, best of 3: 3.07 s per loop
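For comparison (a sketch, not part of the original post): since the post says the numbers are powers of a small base, a logarithm-based approach avoids building the decimal string at all. The function name `first_digit_of_power` is illustrative:

```python
import math

def first_digit_of_power(base, exp):
    # log10(base**exp) = exp * log10(base); writing that logarithm as
    # k + f with integer k and 0 <= f < 1 gives base**exp = 10**k * 10**f,
    # so the leading digit is floor(10**f).
    lg = exp * math.log10(base)
    frac = lg - math.floor(lg)
    return int(10 ** frac)

first_digit_of_power(2, 57885161)  # → 5
```

This runs in microseconds regardless of the exponent's size. Caveat: the float product loses a few digits of precision for huge exponents, so only the first several leading digits are trustworthy; the first digit is safe here.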
Is there a faster way to do this? For my purposes you can assume that all the numbers involved are powers of some small base.
A.P., Wed, 11 Nov 2015 04:10:44 -0600
https://ask.sagemath.org/question/30628/

Sage 6.7 is very slow on OSX 10.9
https://ask.sagemath.org/question/28680/sage-67-is-very-slow-on-osx-109/
I recently installed Sage 6.7 from source on my home computer, running OSX 10.9 (Mavericks). It is sloooooow. Just starting Sage from a terminal window takes a good two minutes; something simple like using the ? feature can take up to a minute; and serious power computation is hopeless. My computer has plenty of RAM and disk space, so I don't think that's the problem. I had the same problems with the previous version (5.something), but on other computers the performance is much better. I would be very grateful for suggestions about how to speed things up, particularly from others who have encountered this issue. Thanks!
Jeremy Martin, Tue, 21 Jul 2015 10:03:17 -0500
https://ask.sagemath.org/question/28680/

Fastest way of running sage?
https://ask.sagemath.org/question/10525/fastest-way-of-running-sage/
Hi,
So far I have used 3 different "versions" of sage:
1. the online sage notebook: http://www.sagenb.org/
2. the sage notebook that I downloaded and run on Windows 7 thru VirtualBox on an intel core 2duo
3. the sage in the cloud found here: https://cloud.sagemath.com/
Using "time a = factorial(1000000)" I get CPU and wall times of (0.41s, 0.41s) for 1 and (2.13s, 9.53s) for 2. For 3, "time" apparently does not work, but it seems slower than 1.
Questions:
1. Why is 1 significantly faster than 2?
2. How can I test 3?
3. Is there a way to time an entire program that reports the time for every step?
4. What is the fastest way to run Sage?
Thanks!
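On the third question in the list above: a generic Python sketch (not Sage-specific; the function `work` is just an illustrative stand-in for a whole program) using the standard-library profiler, which reports the time spent in every function call:

```python
import cProfile
import pstats

def work():
    # Stand-in for "an entire program": any callable can be profiled.
    total = 0
    for i in range(1, 1000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = work()
profiler.disable()

# Print a per-function table of call counts and cumulative times.
pstats.Stats(profiler).sort_stats("cumulative").print_stats()
```

In a Sage or IPython session the `%prun statement` magic wraps the same machinery in one line.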
Alexandru Papiu, Sun, 08 Sep 2013 11:28:34 -0500
https://ask.sagemath.org/question/10525/

speed up execution time of script (Cython or other...)
https://ask.sagemath.org/question/10407/speed-up-execution-time-of-script-cython-or-other/
Hi experts!
I have a TOSHIBA laptop with the following specifications:
4x Intel Core i3 CPU M350 @ 2.27GHz
3 GB RAM
I wrote a Monte Carlo simulation script that works fine, but I would like to speed it up.
How can I speed up the execution time of my algorithm?
I have heard about Cython, but I don't know what it is or whether it works for my goal (speeding up the algorithm).
Please explain step by step (I'm a total newbie...).
Waiting for your answers.
Thanks a lot.
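The original script isn't shown, so here is a minimal stand-in Monte Carlo loop (estimating pi); this tight numeric loop is exactly the kind of code where Cython's static typing (`cdef int`, `cdef double`) typically yields large speedups:

```python
import random

def mc_pi(n):
    # Count random points in the unit square falling inside the
    # quarter circle x^2 + y^2 <= 1; the hit ratio estimates pi/4.
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n
```

Profiling first (e.g. with `%prun`) to confirm that the inner loop dominates is worthwhile before reaching for Cython, since only the hot loop needs to be typed.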
mresimulator, Sat, 03 Aug 2013 03:46:18 -0500
https://ask.sagemath.org/question/10407/

speed and order of operations with CDF
https://ask.sagemath.org/question/9545/speed-and-order-of-operations-with-cdf/
Why is the absolute value of the exponential of z:
f = fast_callable(exp(z).abs(),domain=CDF,vars='z')
about twice as fast as the exponential of the real part of z:
g = fast_callable(exp(z.real()), domain=CDF, vars='z')
Should I ignore this kind of thing in sage, or is there a good reason in this particular case?
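As a side note (a plain-Python check, not from the original post): the two expressions are mathematically identical, since |e^z| = e^(Re z) for every complex z, so whichever form is faster can be used safely:

```python
import cmath
import math

z = 4 + 2j
# The modulus of e^(x+iy) is e^x, because |e^(iy)| = 1.
assert math.isclose(abs(cmath.exp(z)), math.exp(z.real))
```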
***
**Data:**
z = var('z')
f = fast_callable(exp(z).abs(),domain=CDF,vars='z')
g = fast_callable(exp(z.real()), domain=CDF, vars='z')
timeit('f(4+2*I)')
625 loops, best of 3: 2.94 µs per loop
timeit('g(4+2*I)')
625 loops, best of 3: 5.87 µs per loop
***
**Non-fast_callable times, in case you are interested:**
z = var('z')
fs(z) = exp(z).abs()
gs(z) = exp(z.real())
timeit('fs(4+2*I)')
625 loops, best of 3: 1.02 ms per loop
timeit('gs(4+2*I)')
625 loops, best of 3: 988 µs per loop
marco, Sun, 18 Nov 2012 12:37:08 -0600
https://ask.sagemath.org/question/9545/

Possible speed improvement for interactive complex plot
https://ask.sagemath.org/question/9544/possible-speed-improvement-for-interactive-complex-plot/
The following is a simple function which plots the real part of a complex polynomial f, the zeros of its derivative, and implicitly defined curves passing through these zeros. The whole function depends only on a parameter Z in the original polynomial f.
I would like to know if the speed of this can be dramatically improved (even at the expense of accuracy) so that I can make it interactive and vary the parameter Z. It seems to me that it should be possible to make this almost instantly responsive.
I = CDF.0  # Define i as a complex double
Z = 1 + (-0.1)*I  # The complex-number parameter
var('z')
f = lambda z: (z^3)/3 - Z*z  # A complex polynomial
P = complex_plot(lambda z: f(z).real(), (-5, 5), (-5, 5), plot_points=200)  # Plot the real part
Roots = derivative(f(z), z).roots(ring=CDF)  # Roots of the derivative of f
R = [(r[0].real(), r[0].imag()) for r in Roots]
F(x, y) = f(x + I*y)
COL = ["black", "blue", "purple", "green", "orange"]
C = [implicit_plot((F - F(R[n][0], R[n][1])).imag(), (-5, 5), (-5, 5), color=COL[n]) for n in range(len(R))]
P + point(R, zorder=40) + point((Z.real(), Z.imag()), zorder=40) + sum(C)  # Real part of f, the roots, the parameter, and the implicit curves imaginary(f) = imaginary(root)
This function ran in about 13 seconds on my (recent) iMac and outputs:
![example](https://dl.dropbox.com/u/4747602/sage0.png)
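One generic speed tactic, shown here as a NumPy sketch (an assumption about where the time goes, not Sage API; names are illustrative): evaluate the polynomial on the whole grid in one vectorized call instead of point by point. With plot_points=200 the plot performs 40000 evaluations of f:

```python
import numpy as np

Z = 1 - 0.1j                # the complex parameter from the post
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
W = X + 1j * Y              # 200x200 grid of complex points
vals = (W**3) / 3 - Z * W   # all 40000 values of f in one shot
real_part = vals.real       # ready to feed to a contour/density plot
```

The arrays `real_part` and `vals.imag` can then drive `matplotlib.pyplot.contour` (or be precomputed once per Z inside an interact), which is usually far faster than re-running the symbolic plots.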
marco, Sat, 17 Nov 2012 17:06:48 -0600
https://ask.sagemath.org/question/9544/

Speeding up matrix multiplication?
https://ask.sagemath.org/question/8438/speeding-up-matrix-multiplication/
I'm currently trying to write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time-consuming) operation, performed many times, is simply taking the product of a large dense matrix $M$ with a vector $v$, both with integral entries.
More precisely, let $p$ be a (relatively small) prime (think $p=11$) and $N$ some integer (think 100). I have an $N$ by $N$ matrix and am interested in quickly computing the product $M \cdot v$ modulo $p^N$.
I am simply using the built-in SAGE command for multiplying a matrix by a vector, and I was surprised to see that working with matrices over ${\bf Z}/p^n{\bf Z}$ was much (i.e. 10 times) slower than working with matrices over ${\bf Z}$.
My question: is there a faster way to do this computation than using SAGE's built-in matrix-times-vector command over ${\bf Z}$?
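To make the setup concrete, here is a toy plain-Python sketch (the function name is illustrative) of the workaround the timings above suggest: multiply with exact integers over ${\bf Z}$ and reduce mod $p^N$ only once per entry, at the end:

```python
def matvec_mod(M, v, modulus):
    # Dot each row with v using exact integer arithmetic, then reduce;
    # reducing only the final sum avoids per-product modular overhead.
    return [sum(a * b for a, b in zip(row, v)) % modulus for row in M]

p, N = 11, 3                     # toy sizes; the post has p = 11, N around 100
M = [[1, 2], [3, 4]]
v = [5, 6]
result = matvec_mod(M, v, p**N)  # → [17, 39]
```

Since the entries of $M \cdot v$ over ${\bf Z}$ are bounded by $N \cdot p^{2N}$ (at most roughly double-length integers when the inputs are reduced), the exact product stays cheap and the single reduction per entry is negligible.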
Robert Pollack, Fri, 04 Nov 2011 11:46:03 -0500
https://ask.sagemath.org/question/8438/

performance: GAP code vs SAGE code
https://ask.sagemath.org/question/8406/perfomance-gap-code-vs-sage-code/
I have a heavy procedure in GAP. I want to speed it up.
Is it a good idea to rewrite as much as I can in SAGE and use GAP only where it is needed?
For example, rewriting the parts that work with lists, iterators, and files.
Or maybe the GAP implementation of such objects is faster (because it is written in C) than the SAGE version.
P.S. Of course I hate the GAP language and I want to write in SAGE instead. But the performance of calculations is very important for me.
petRUShka, Mon, 24 Oct 2011 03:07:05 -0500
https://ask.sagemath.org/question/8406/

Does SAGE support multithreading?
https://ask.sagemath.org/question/8411/does-sage-support-multithreading/
I have a heavy SAGE program (with a lot of GAP parts) and I noticed that SAGE used only one core, while my other cores sat idle.
Are there techniques to force all my cores to work?
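For CPU-bound pure-Python work, the standard-library multiprocessing module is one such technique (a generic sketch, not Sage-specific; Sage also ships an `@parallel` decorator that is the usual entry point there):

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for one unit of heavy, independent work.
    return n * n

if __name__ == "__main__":
    # Separate processes sidestep the GIL, so CPU-bound tasks
    # can occupy every core, unlike Python threads.
    with Pool() as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note this helps only when the work splits into independent chunks; a single GAP subprocess will still run on one core regardless.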
Maybe there is some Python library or (the best case!) some automation...
petRUShka, Mon, 24 Oct 2011 03:00:31 -0500
https://ask.sagemath.org/question/8411/