ASKSAGE: Sage Q&A Forum - Latest question feed
http://ask.sagemath.org/questions/
Q&A Forum for Sage (en). Copyright Sage, 2010. Some rights reserved under a Creative Commons license.
Thu, 10 Jan 2019 19:06:35 -0600

Homomorphisms lifted from base ring in PowerSeriesRing do not preserve precision
http://ask.sagemath.org/question/45002/homomorphisms-lifted-from-base-ring-in-powerseriesring-do-not-preserve-precision/

Hi all,
Homomorphisms which are lifted from the base ring seem to be unaware that precision exists in power/Laurent series rings. For example:
sage: R.<x> = PowerSeriesRing(ZZ)
sage: f = Hom(ZZ, ZZ)([1])
sage: Rf = Hom(R, R)(f); Rf
Ring endomorphism of Power Series Ring in x over Integer Ring
Defn: Induced from base ring by
Ring endomorphism of Integer Ring
Defn: 1 |--> 1
sage: Rf(1 + x + O(x^2))
1 + x
Can someone confirm that the expected output should be 1 + x + O(x^2), and that this is a bug?
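For what it's worth, the expected behavior can be sketched in plain Python: a map lifted from the base ring only touches coefficients, so the big-O precision should pass through untouched. (This `TruncatedSeries` class is an illustrative toy, not Sage's actual implementation.)

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TruncatedSeries:
    """A power series known only up to O(x^prec); prec=None means exact."""
    coeffs: list          # coeffs[i] is the coefficient of x^i
    prec: Optional[int]   # the big-O order, or None for infinite precision

    def map_coefficients(self, f: Callable) -> "TruncatedSeries":
        # A lifted base-ring map acts on coefficients only, so it has no
        # reason to change (and must not discard) the precision.
        return TruncatedSeries([f(c) for c in self.coeffs], self.prec)

s = TruncatedSeries([1, 1], prec=2)   # 1 + x + O(x^2)
t = s.map_coefficients(lambda c: c)   # the identity on ZZ, lifted
print(t.prec)                         # 2: the O(x^2) survives the map
```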
Thanks,
Henry

liu.henry.hl, Thu, 10 Jan 2019 19:06:35 -0600, http://ask.sagemath.org/question/45002/

a.N(3) DID WORK in the past, now only N(a,3) works?
http://ask.sagemath.org/question/44914/an3-did-work-past-now-only-na3/
So, I updated from 6.1 to 8.5 because 6.1 did not compile!
Old worksheets no longer work because I used the notation a.N(3),
yet its equivalent N(a,3) still works. Where is the catch?

joniboy, Sun, 06 Jan 2019 10:20:46 -0600, http://ask.sagemath.org/question/44914/

Incorrect result for comparison (precision issues?)
http://ask.sagemath.org/question/32371/incorrect-result-for-comparison-precision-issues/

Consider this session:
sage: if log(10) * 248510777753 < log(1024) * 82553493450:
....: print 'Less'
....: else:
....: print 'Not less'
....:
Not less
or more simply:
sage: bool(log(10)*248510777753 < log(1024)*82553493450)
False
But this is wrong, as we can see with higher-precision arithmetic:
sage: import mpmath
sage: mpmath.mp.dps = 22 # or anything greater
sage: bool(mpmath.log(10)*248510777753 < mpmath.log(1024)*82553493450)
True
I guess this is happening because Sage is computing to some finite precision. But when writing some bigger program, it's scary that a condition involving variables, like say,
if m * q < n * p:
can without warning give the wrong result and take the wrong path. Is there a way to prevent this from happening, i.e. to make sure that in the program, comparisons are done using as many bits of precision as are necessary to evaluate them correctly, without us having to pre-specify a precision (which may be either too large and wasteful, or too small and incorrect)?

ShreevatsaR, Thu, 28 Jan 2016 21:58:13 -0600, http://ask.sagemath.org/question/32371/

Precision plots - How do I do those?
http://ask.sagemath.org/question/29231/precision-plots-how-do-i-do-those/

Hi.
I have a function which varies so slowly that, in the range I'm interested in, only the 17th decimal changes.
The worksheet (sws) is this:
sage: mp = 139570180.
sage: mm = 105658371.5
sage: mn = 1.
sage: dmp = 350.
sage: dmm = 3.8
sage: dmn = 0.1
sage: l(a,b,c) = a^2 + b^2 + c^2 - 2*( a*b + b*c + c*a )
sage: f(x) = sqrt(l(mp^2, mm^2, x^2))/(2*mp)
sage: R=RealField(100)
sage: g = fast_callable(f, vars=[x], domain=R)
sage: plot( g, (x,0,2))
but I got the following plot, which is unsatisfactory in several ways!
![image description](/upfiles/14400980865379363.png)
How could I possibly improve it? (ticks on the vertical axis, a "continuous" line, ...)
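One way to make the variation visible is to compute the tiny deviation itself at high precision and plot f(x) - f(0) instead of f. A sketch with the stdlib decimal module, reusing the worksheet's numbers (this bypasses fast_callable entirely):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # plenty of headroom for a change in the ~17th digit

mp = Decimal("139570180")
mm = Decimal("105658371.5")

def l(a, b, c):
    return a*a + b*b + c*c - 2*(a*b + b*c + c*a)

def f(x):
    x = Decimal(x)
    return l(mp**2, mm**2, x**2).sqrt() / (2 * mp)

# The function is essentially constant on [0, 2]; what is worth plotting
# is the deviation from f(0), which is only of order 1e-8 here.
delta = f(0) - f(2)
print(delta)
```

Plotting the deviation (rescaled, say, by 1e8) then gives sensible axis ticks and a smooth curve, since the plotted quantity is no longer drowned in the constant part.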
Thank you!

Dox, Thu, 20 Aug 2015 14:17:53 -0500, http://ask.sagemath.org/question/29231/

Set the precision of imported methods
http://ask.sagemath.org/question/10760/set-the-precision-of-imported-methods/

How do I set the precision of "numpy" methods?
e.g., for calculating the singular values of a matrix using numpy methods
sage: R = RealField(100)
sage: R
Real Field with 100 bits of precision
sage: A = matrix(R ,2,2, [1.746, 0.940, 1.246, 1.898])
sage: A
[ 1.7460000000000000000000000000 0.94000000000000000000000000000]
[ 1.2460000000000000000000000000 1.8980000000000000000000000000]
sage: A.parent()
Full MatrixSpace of 2 by 2 dense matrices over Real Field
with 100 bits of precision
sage: A = np.array(A) # the precision (100 bits) is not preserved in numpy
sage: U,sig,V = numpy.linalg.svd(A)
sage: sig
array([ 1.08490731e+06, 1.97535694e+00])
# the precision (100 bits) is not preserved in numpy: only ~8 digits are shown here
----------
[NumPy Data types](http://docs.scipy.org/doc/numpy/user/basics.types.html)
> sage: np.array([1, 2, 3], dtype='f')
> array([ 1., 2., 3.], dtype=float32)
That's a choice. However, I'd like some more convenient method like "numpy.set_digits(100)"...
----------
Thanks in advance!
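As far as I know there is no numpy.set_digits: numpy's standard dtypes stop at float64 (about 16 significant digits), so any 100-bit value is silently rounded on conversion, and for more digits one needs object arrays or a multiprecision library. The stdlib decimal module makes the loss easy to see (a sketch of the effect, not a numpy API):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30            # roughly 100 bits of precision
x = Decimal(1) / Decimal(3)       # 30 correct digits
y = float(x)                      # what np.array(..., dtype=float) would keep

print(x)                          # thirty 3s
print(Decimal(y))                 # only ~16 of them survive the round trip
```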
gundamlh, Thu, 21 Nov 2013 04:01:49 -0600, http://ask.sagemath.org/question/10760/

precision when using numpy
http://ask.sagemath.org/question/10542/precision-when-using-numpy/

When using numpy, do I have to take care of the ranges for integers, floats, etc.? What if a number exceeds the limits of int64 or float64? What shall I do then? And does this problem also exist for the normal integer data type in Sage (sage.rings.integer.Integer)?

Mohamed Gaafar, Fri, 25 Oct 2013 08:37:07 -0500, http://ask.sagemath.org/question/10542/

computing the digits of Pi
http://ask.sagemath.org/question/10274/computing-the-digist-of-pi/

I tried to code the Salamin-Brent iteration. I calculated with exact numbers and the procedure returned the numerical value with the .n() function. The iteration was very, very slow: 20 iterations take approximately 900 seconds. (In Maple the same code takes 0.06 seconds.) Can I modify the numerical precision at every step with some method? (Because at every step the number of correct digits doubles.)

czsan, Mon, 24 Jun 2013 23:24:38 -0500, http://ask.sagemath.org/question/10274/

Accuracy versus precision of algebraic number calculations
http://ask.sagemath.org/question/10054/accuracy-versus-precision-of-algebraic-number-calculations/

Hi everyone - another very basic question from me ...
I am doing some calculations of absolute norms of determinants of matrices whose entries come from cyclotomic fields. The (rational) numbers which are output are sensible but occasionally have massive prime numbers as factors which I was not expecting. My question is whether I can rely upon such numbers when they are output by SAGE's calculations inside a specified number field, or whether somewhere along the way some imprecision may have been introduced which results in a "distorted" prime number being a factor of the output.
Another stylized way of asking the same thing: is it possible that the answer to some question involving a small prime p might actually contain a factor of p^100, but nevertheless because of accumulated rounding errors etc I have ended up with p^100-2 which happens to be prime? Or does SAGE "know" only to output perfect answers involving algebraic number fields, even when the heights involved are that big?
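To the first question: Sage's arithmetic inside a declared number field (QQ, cyclotomic fields, ...) is exact symbolic arithmetic, with no floating point involved, so large prime factors in the output are genuine. The same principle can be illustrated with a stdlib sketch: an exact determinant over the rationals (this `det` is a hypothetical helper, not Sage's):

```python
from fractions import Fraction

def det(m):
    """Exact determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

m = [[Fraction(1, 3), Fraction(2, 7)],
     [Fraction(5, 11), Fraction(1, 13)]]
print(det(m))   # exactly 1/39 - 10/77 = -313/3003
```

Note the "surprising" prime 313 in the output: exact arithmetic routinely produces large primes that were in no way visible in the input, which is the same phenomenon as in the cyclotomic computations above.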
Many thanks

GaryMak, Tue, 23 Apr 2013 00:27:41 -0500, http://ask.sagemath.org/question/10054/

rounding error .n() and N()
http://ask.sagemath.org/question/9282/rounding-error-n-and-n/

Why does 1.414 - sqrt(2).n(digits=4) not evaluate to zero?
sage: sqrt(2).n(digits=4)
1.414
sage: 1.414 - sqrt(2).n(digits=4)
-0.0002136
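The short explanation: sqrt(2).n(digits=4) stores a binary approximation that merely *prints* as 1.414, while the literal 1.414 is a different binary number, so the subtraction exposes the later digits of sqrt(2). A plain-float analog shows the same leading digits as the output above:

```python
import math

# 1.414 and an approximation of sqrt(2) that prints as 1.414 are
# different binary numbers, so their difference is not zero.
diff = 1.414 - math.sqrt(2)
print(diff)   # about -0.000213562373, matching the -0.0002136 above
```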
kjhudson, Mon, 08 Apr 2013 04:53:32 -0500, http://ask.sagemath.org/question/9282/

Arbitrary precision with power function
http://ask.sagemath.org/question/9788/arbitrary-precision-with-power-function/

Hello! Sorry for my English.
Why in Sage 5.6
> numerical_approx((3**2.72), digits=200)
gives
> 19.850425152727527944307439611293375492095947265625000000000000000000000\
000000000000000000000000000000000000000000000000000000000000000000000000\
000000000000000000000000000000000000000000000000000000000
and
> RealField(1000)(3**2.72)
gives
> 19.850425152727527944307439611293375492095947265625000000000000000000000\
000000000000000000000000000000000000000000000000000000000000000000000000\
000000000000000000000000000000000000000000000000000000000000000000000000\
000000000000000000000000000000000000000000000000000000000000000000000000\
0000000000000
? After digit 5 zero, zero, zero. How to get more digits in Sage?PlezzeRFri, 08 Feb 2013 14:53:21 -0600http://ask.sagemath.org/question/9788/Precision of find_roothttp://ask.sagemath.org/question/9328/precision-of-find_root/Here is a straightforward question:
I am wondering about the precision of find_root. Looking into the documentation (I'm using Sage v. 5.0), there is a comment along with one of the examples that the "precision isn't very good on some machines." At the same time, it says that the routine converges unless it throws an error.
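Since find_root ultimately hands the function to SciPy's double-precision solver, one way past the ~16-digit ceiling is to take its answer as a bracket and polish it with arbitrary-precision bisection. A sketch using the stdlib decimal module (`refine_root` is a made-up helper, not a Sage function):

```python
from decimal import Decimal, getcontext

def refine_root(f, lo, hi, digits=50):
    """Polish a sign-changing root of f on [lo, hi] by plain bisection."""
    getcontext().prec = digits + 10   # guard digits
    lo, hi = Decimal(lo), Decimal(hi)
    assert f(lo) * f(hi) < 0, "need a sign change on the bracket"
    # each step halves the interval, i.e. adds one bit; we need about
    # digits * log2(10) bits, plus a little slack
    for _ in range(int(Decimal(10).ln() / Decimal(2).ln() * digits) + 2):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# example: sqrt(2) as the root of x^2 - 2, seeded by a crude bracket
r = refine_root(lambda x: x*x - 2, 1, 2, digits=50)
print(r)
```

Bisection is slow but simple; the same polishing idea works with a high-precision Newton iteration when the derivative is available.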
A [response](http://www.mail-archive.com/sage-support@googlegroups.com/msg16187.html) from William Stein clarifies that the function is passed to SciPy, which works in double precision. So I'm assuming that there may be a way to get more precision on some machines, but that one can always count on having at least this much.

TumericTJ, Mon, 17 Sep 2012 15:28:51 -0500, http://ask.sagemath.org/question/9328/

adding real literal and real number of high precision
http://ask.sagemath.org/question/9185/adding-real-literal-and-real-number-of-high-precision/

When Sage adds a real literal to a real number of high precision, shouldn't it calculate the sum in the high-precision ring? Instead, Sage seems to calculate in double precision:
RF=RealField(150); RF
Real Field with 150 bits of precision
RF(0.9 + RF(1e-18))
0.90000000000000002220446049250313080847263336
RF(1.0+ RF(1e-18))
1.0000000000000000000000000000000000000000000
RF(1+ RF(1e-18))
1.0000000000000000010000000000000000000000000
I'm trying to use high precision arithmetic (2658 bits) in Sage to verify some results produced by the high precision semidefinite program solver sdpa_gmp. Sage's treatment of real literals in these calculations has made me anxious about the possibility that I'm overlooking other ways in which the calculations might be unreliable.
Is there anywhere an explanation of Sage's treatment of real literals in high precision arithmetic?
Added:
Immediately after posting this question, the list of Related Questions in the sidebar pointed me to question/327/set-global-precision-for-reals where I learned that 'RealNumber = RF' would make all real literals lie in the high precision ring. Still, I wonder why the default behavior is to discard precision that is present in the original real literal.
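The same literal-vs-string distinction exists in Python's stdlib decimal module, and it shows where the digits go: the literal is rounded to a 53-bit float by the parser before the high-precision type ever sees it (a sketch of the mechanism, not of Sage's RealLiteral internals):

```python
from decimal import Decimal

via_float = Decimal(0.9)     # the literal is a 53-bit float before Decimal sees it
via_string = Decimal("0.9")  # the full literal reaches the high-precision type

print(via_float)             # 0.90000000000000002220446049250313..., as above
print(via_string)            # 0.9
```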
thanks,
Daniel Friedan
Daniel Friedan, Fri, 27 Jul 2012 06:05:52 -0500, http://ask.sagemath.org/question/9185/

Why is 3e1 not equivalent to 30?
http://ask.sagemath.org/question/8983/why-is-3e1-not-equivalent-to-30/

I thought that 3e1 is completely equivalent to 30.
However, it is not:
sage: (1/30).n(digits=30)
0.0333333333333333333333333333333
sage: (1/3e1).n(digits=30)
0.0333333333333333328707404064062
Then I thought that 3e1 is always a 53-bit real number or something like that.
But I was wrong again:
sage: 1/3e1.n(digits=30)
0.0333333333333333333333333333333
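A plain-Python rendition of the same pattern: dividing in the default 53-bit arithmetic first bakes in an error that no later precision increase can undo, whereas dividing at high precision directly gives all correct digits (a sketch with decimal; Sage's RealLiteral machinery differs in detail, but the arithmetic point is the same):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30

low_then_high = Decimal(1 / 3e1)              # divide at 53 bits, then widen
high_all_the_way = Decimal(1) / Decimal(30)   # divide at 30 digits directly

print(low_then_high)     # 0.0333333333333333328707404064061..., as above
print(high_all_the_way)  # 0.0333333333333333333333333333333
```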
Now I am just confused. Is this a bug? If not, how should I understand the second input above, and where can I find it documented? (Sage 5.0)

kkumer, Fri, 01 Jun 2012 00:47:51 -0500, http://ask.sagemath.org/question/8983/

Arbitrary Precision Physics Calculation
http://ask.sagemath.org/question/8719/arbitrary-precision-physics-calculation/

I would like to do some physics problems in high precision. I cannot find any examples or sample worksheets that are accessible on a basic level. I just want real numbers (including large and small numbers in scientific notation). I am pretty sure this is not too hard; perhaps it is too simple to be explained? Any help would be appreciated.

russ_hensel, Wed, 15 Feb 2012 08:10:52 -0600, http://ask.sagemath.org/question/8719/

Set global precision for reals?
http://ask.sagemath.org/question/7887/set-global-precision-for-reals/

Hello,
If this makes any difference, I am using the Sage Notebook.
I want to reuse a large piece of code. However, when I wrote it, the default precision seemed sufficient to me, and now I am encountering truncation errors. Can I specify a global precision at the beginning of the worksheet? My variable declarations are all like a,b=var('a,b').
Thanks!

chaesloc, Thu, 20 Jan 2011 04:56:50 -0600, http://ask.sagemath.org/question/7887/

HowTo Compute Past Largest Cython Supported Wordsize (efficiently)?
http://ask.sagemath.org/question/7600/howto-compute-past-largest-cython-supported-wordsize-efficiently/

Say I compute over a domain where my largest Cython cdef is ___ unsigned bits. Having enjoyed the wonderful Cython speedup this long, does Cython let me continue at arbitrary precisions past the largest supported word size? Any contrived example will do (for an answer, assuming it works!). Please and thank you very much.

ccanonc, Thu, 19 Aug 2010 18:29:16 -0500, http://ask.sagemath.org/question/7600/
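On the question itself: once values outgrow the machine word, the usual move is to let the variables be ordinary Python objects rather than typed cdef integers. Python's int (like Sage's Integer, which wraps GMP) is arbitrary precision, so only the word-sized hot path keeps the full cdef speedup. A minimal plain-Python illustration of the unbounded range:

```python
# Python ints grow past any C word size transparently.
word_max = 2**64 - 1                 # largest 64-bit unsigned C value
big = word_max * word_max + 12345    # would overflow any fixed-width cdef

print(big)                           # a 128-bit integer, no wraparound
print(big.bit_length())              # 128
```

In Cython, that means leaving such variables untyped (or typed as `object`) so they stay Python/GMP integers, and reserving `cdef unsigned long` for quantities whose range is proven to fit.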