ASKSAGE: Sage Q&A Forum - Latest question feed
http://ask.sagemath.org/questions/
Q&A Forum for Sage (en). Copyright Sage, 2010. Some rights reserved under a Creative Commons license.
Fri, 18 May 2018 19:26:56 -0500

Unexpected behaviour of arbitrary precision real numbers
http://ask.sagemath.org/question/42369/unexpected-behaviour-of-arbitrary-precision-real-numbers/

I've been trying to use arbitrary-precision real numbers and I'm a bit confused. The simple snippet below illustrates my point:
R = RealField(200)
z = R(2.0)
print '%.20f' % (z.sqrt())
I expected 1.41421356237309504880, but instead I got 1.41421356237309514547 (a disagreement in the last five decimal places). Where am I going wrong? As you can see, it's quite a simple piece of code. How could I fix it in order to obtain the desired/expected result?
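The same last-digits pattern can be reproduced with only the Python standard library, which suggests (my reading, not confirmed in the post) that the `'%.20f'` formatting round-trips through a 53-bit `float`:

```python
from decimal import Decimal, getcontext

# sqrt(2) to 50 significant digits with the standard-library decimal module
getcontext().prec = 50
root = Decimal(2).sqrt()

full = str(root)                   # all 50 digits survive
via_float = '%.20f' % float(root)  # squeezed through a 53-bit double first
print(full[:22])    # 1.41421356237309504880
print(via_float)    # 1.41421356237309514547
```

Note that the second line reproduces exactly the digits reported in the question, which is consistent with a double-precision round-trip in the formatting step.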
Thanks in advance for any light shed on this matter.
Faustofbarbuto — Fri, 18 May 2018 19:26:56 -0500
http://ask.sagemath.org/question/42369/

Is it possible to use Arb-library commands directly in Sagemath?
http://ask.sagemath.org/question/41811/is-it-possible-to-use-arb-library-commands-directly-in-sagemath/

I would like to use some commands/functions from the high-performance/high-precision [Arb library](http://arblib.org), which in turn builds on the FLINT library. Searching the web, I did find references that some commands from both libraries already underpin the Sagemath environment, directly or indirectly; however, basic commands like, for instance:
arb.pi()
don't seem to be supported (or rather, I couldn't get them to work yet).
What I am after is to use Flint and Arb functions within Python like in [this example](http://fredrikj.net/blog/2015/01/arb-and-flint-in-python/), but then under the Sagemath umbrella. Is this already possible?
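For what it's worth, recent Sage versions expose Arb through the ball fields `RealBallField`/`ComplexBallField`, so the equivalent of `arb.pi()` may look like this (a session sketch; the exact printed output is not verified here):

```
sage: RBF = RealBallField(200)   # 200-bit real balls backed by Arb
sage: RBF.pi()                   # midpoint-radius enclosure of pi
```

Whether this covers the full python-flint API from the linked blog post is a separate question.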
Thanks.
RuudH — Wed, 28 Mar 2018 17:43:30 -0500
http://ask.sagemath.org/question/41811/

Sage 8.1 eats memory until system freeze
http://ask.sagemath.org/question/41009/sage-81-eats-memory-until-system-freeze/

Hi, I have a fairly long code (which uses arbitrary-precision real numbers) that runs perfectly on Sage 7.5.1 (PPA for Mint 17.3 / Ubuntu 14.04). On Sage 8.1 (sage-8.1-Ubuntu_14.04-x86_64.tar.bz2) it starts to eat memory until it freezes the system. I would like to help with debugging. Is there something I can try/run/test? I can also upload the code if necessary.
Finally, here is a minimal working example:
def test(m, c, precision):
    M = 3*m
    RRR = RealField(prec=precision)
    coef02 = [RRR(1/i) for i in [1..M+1]]
    g = coef02[M]
    for i in [M-1..2, step=-1]:  # Horner evaluation
        g = x*g + coef02[i]
    ME = 32
    disk = [exp(2*pi.n(precision)*I*i/ME) for i in range(ME)]
    gamma = abs(c)/2
    ellipse = [gamma*(w + c^2/(4*gamma^2)/w) for w in disk]
    epsilon1 = max([abs(g(x=z)) for z in ellipse])
    return

m = 40
for c in [1/2..10, step=1/2]:
    for ell in [1..10]:
        test(m, c, 165)
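Not from the original post, but one generic way to localize such growth from within Python is the standard-library `tracemalloc` module, which can diff allocation snapshots across loop iterations; a minimal sketch with a deliberately leaky stand-in function:

```python
import tracemalloc

cache = []

def leaky():
    # Stand-in for a suspect call such as test(m, c, 165);
    # this one accumulates memory on purpose.
    cache.append([0.0] * 1000)

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(100):
    leaky()
after = tracemalloc.take_snapshot()
tracemalloc.stop()

# Top allocation sites that grew between the two snapshots
stats = after.compare_to(before, 'lineno')
print(stats[0])
```

Running the real loop under such instrumentation should point at the file and line where the retained memory is being allocated.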
If I run this in 7.5.1, I see (in top) the memory percentage stable around 2.5. If I run it in 8.1, it grows up to 7.3 before the code terminates. If I increase the length of the loops, memory usage continues to grow.
Marco Caliari — Wed, 07 Feb 2018 04:16:09 -0600
http://ask.sagemath.org/question/41009/

Newton method for one variable
http://ask.sagemath.org/question/34554/newton-method-for-one-variable/

I try to use `solve` for nonlinear equations in one variable, but the answer is tautological, or "cannot evaluate symbolic expression numerically" if I add `explicit_solutions=True`. Is the Newton method (or any other, like the secant method) implemented in Sage?
Ex:
sage: x=var('x')
sage: (x-cos(x)).solve(x)
[x == cos(x)]
while I would expect x ≈ 0.739085.
logomath — Mon, 22 Aug 2016 02:53:18 -0500
http://ask.sagemath.org/question/34554/

Incorrect result for comparison (precision issues?)
http://ask.sagemath.org/question/32371/incorrect-result-for-comparison-precision-issues/

Consider this session:
sage: if log(10) * 248510777753 < log(1024) * 82553493450:
....: print 'Less'
....: else:
....: print 'Not less'
....:
Not less
or more simply:
sage: bool(log(10)*248510777753 < log(1024)*82553493450)
False
But this is wrong, as we can see with higher-precision arithmetic:
sage: import mpmath
sage: mpmath.mp.dps = 22 # or anything greater
sage: bool(mpmath.log(10)*248510777753 < mpmath.log(1024)*82553493450)
True
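A sketch of what an adaptive scheme could look like, using only the standard-library `decimal` module (the guard-band tolerance below is a crude heuristic of mine, not a rigorous bound; Sage's `RealIntervalField` would give rigorous enclosures instead):

```python
from decimal import Decimal, getcontext

def sign_of_log_diff(a, b, start_prec=30):
    """Sign of a*ln(10) - b*ln(1024), doubling the working precision
    until the difference safely exceeds a crude rounding-error bound."""
    prec = start_prec
    while True:
        getcontext().prec = prec
        x = Decimal(10).ln() * a
        y = Decimal(1024).ln() * b
        diff = x - y
        # guard band: a generous multiple of one ulp of the larger operand
        tol = max(abs(x), abs(y)) * Decimal(10) ** (5 - prec)
        if abs(diff) > tol:
            return 1 if diff > 0 else -1
        prec *= 2

print(sign_of_log_diff(248510777753, 82553493450))  # -1, i.e. "Less", matching mpmath
```

The precision only grows when the comparison is actually close, so easy comparisons stay cheap.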
I guess this is happening because Sage is computing to some finite precision. But when writing some bigger program, it's scary that a condition involving variables, like say,
if m * q < n * p:
can without warning give the wrong result and take the wrong path. Is there a way to prevent this from happening, i.e. to make sure that comparisons in the program are done using as many bits of precision as are necessary to evaluate them correctly, without us having to pre-specify a precision (which may be either too large and wasteful, or too small and give incorrect results)?
ShreevatsaR — Thu, 28 Jan 2016 21:58:13 -0600
http://ask.sagemath.org/question/32371/

Taylor expansion with arbitrary-precision numbers
http://ask.sagemath.org/question/23370/taylor-expansion-with-arbritary-precision-numbers/

Hi,
if I define a function with arbitrary-precision numbers (e.g., `f = 0.123456789123456789*log(1+x)`) and then compute its Taylor expansion (`f.taylor(x,0,5)`), it seems to me that the coefficients are given in double precision, whereas if I compute them directly (e.g., by `derivative(f,x,5)(x=0)/factorial(5)`) they are in the original precision. First of all, am I right, or is it only a difference in how they are displayed? If I'm right, is it possible to compute the Taylor expansion in the original precision?
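Setting the display question aside, the precision gap itself can be illustrated with the standard-library `decimal` module (a sketch; the k-th Taylor coefficient of `c*log(1+x)` at 0 is `c*(-1)^(k+1)/k`, which is standard calculus rather than anything from the post):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
c = Decimal("0.123456789123456789")

# k-th Taylor coefficient of c*log(1+x) at 0 is c*(-1)**(k+1)/k
coeffs_exact = [c * (-1) ** (k + 1) / k for k in range(1, 6)]
coeffs_double = [float(c) * (-1) ** (k + 1) / k for k in range(1, 6)]

print(coeffs_exact[2])   # full 30-digit working precision
print(coeffs_double[2])  # ~16 significant digits only
```

If the Sage coefficients match the second list digit for digit, they really were computed (or at least stored) in double precision, not merely printed short.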
Cheers,
Marco
Marco Caliari — Fri, 11 Jul 2014 01:58:26 -0500
http://ask.sagemath.org/question/23370/

find roots with arbitrary precision
http://ask.sagemath.org/question/9486/find-roots-with-arbitrary-precision/

It seems that `find_root` will always convert its parameter range to `float`, even if `a` and `b` were originally given in some arbitrary-precision real number field. Is there a variant of this algorithm that can find roots to arbitrary precision?
Even better would be an algorithm that can make use of interval arithmetic based on `RealIntervalField`, which in particular would return an interval guaranteed to contain the zero. I have written some code along these lines:
def bisect_interval(f, i, d):
    # Find a zero of the function f in the interval i, with desired diameter d.
    # f must be monotonically increasing and must have a zero in i.
    d2 = d/2
    zero = i.parent()(0)
    hi = i
    while hi.absolute_diameter() > d2:
        l, r = hi.bisection()
        c = l.intersection(r)  # thin interval at the midpoint
        fc = f(c)
        if fc > zero:
            hi = l
        else:
            hi = r
    lo = i
    while lo.absolute_diameter() > d2:
        l, r = lo.bisection()
        c = l.intersection(r)
        fc = f(c)
        if fc < zero:
            lo = r
        else:
            lo = l
    return lo.union(hi)
Am I reproducing functionality that's already available somewhere in Sage? If not, do you consider this functionality worth adding? Should it use some better algorithm than simple bisection? Is the algorithm even correct? You don't *have* to answer all of these questions, as my core question remains the one about an arbitrary-precision version of `find_root`. But additional answers would be welcome.
MvG — Tue, 30 Oct 2012 23:44:29 -0500
http://ask.sagemath.org/question/9486/
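As a non-interval point of comparison to the code above (my own sketch, using the standard-library `decimal` module rather than `RealIntervalField`, so it carries no enclosure guarantee):

```python
from decimal import Decimal, getcontext

def bisect(f, lo, hi, digits):
    # Plain bisection to roughly `digits` decimal places; f must be
    # increasing on [lo, hi] with f(lo) <= 0 <= f(hi).
    getcontext().prec = digits + 10       # a few guard digits
    lo, hi = Decimal(lo), Decimal(hi)
    tol = Decimal(1).scaleb(-digits)      # 10**(-digits)
    assert f(lo) <= 0 <= f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return lo, hi

# root of x^2 - 2 on [1, 2] to 30 digits
lo, hi = bisect(lambda x: x * x - 2, 1, 2, 30)
print(str(lo)[:22])  # 1.41421356237309504880
```

The interval version above is strictly stronger: by evaluating f over intervals it keeps a proof that the zero stays enclosed, which this plain-decimal sketch cannot provide.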