# Difference between RealNumber and RealLiteral

I discovered this tonight.

sage: L = srange(-1,1,.01)
sage: type(L[50])
<type 'sage.rings.real_mpfr.RealNumber'>
sage: type(-.5)
<type 'sage.rings.real_mpfr.RealLiteral'>


and hence

sage: -.5 in L
False


In some sense I'm cool with that, but in another sense I'm very annoyed. (Especially since I probably should have known about it but missed this.) So ... what exactly is the difference between these types, and when should they each be used? You might see my confusion given this result:

sage: L[50] in RR
True
sage: -.5 in RR
True


Oh, and this is really awesome:

sage: type(RealNumber(-.5))
<type 'sage.rings.real_mpfr.RealLiteral'>


Also, perhaps if anyone ever makes the full list of real number types as in Question 9950, one could add these.



I don't think that

sage: -0.5 in srange(-1,1,.01)
False


is because of a mismatch in types. I think it is because .01 is not exactly 10^(-2), since the latter isn't exactly representable as a binary (IEEE) float. Hence we have:

sage: -0.5 - srange(-1,1,.01)[50]
-4.44089209850063e-16


(adding .01 50 times to -1 yields a different float). The types are taken care of:

sage: -0.5 in srange(-10,10,0.5)
True
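The same drift can be reproduced with plain Python floats, which (assuming IEEE-754 doubles) round the same way as Sage's default 53-bit reals:

```python
# Reproducing the drift with plain Python floats: repeatedly adding 0.01
# (what srange effectively does) accumulates rounding error, because 0.01
# is not exactly representable in binary floating point.
x = -1.0
for _ in range(50):
    x += 0.01        # one srange-style step

print(x)             # close to -0.5, but not equal
print(x - (-0.5))    # a few ulps of drift
print(x == -0.5)     # False
```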


Don't use equality testing (and hence don't use "in") when working with floats. Also: don't use srange with floats. You'll lose precision.
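One way to follow that advice in plain Python is `math.isclose`, which compares with a tolerance instead of exact equality:

```python
import math

# A point built by srange-style repeated addition:
x = -1.0
for _ in range(50):
    x += 0.01

# Exact equality fails on the drifted value, but a tolerance-based
# comparison recognizes it as -0.5:
print(x == -0.5)                             # False
print(math.isclose(x, -0.5, abs_tol=1e-12))  # True
```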

sage: [ (n*0.01)-a for n,a in enumerate(srange(0,0.2,0.01))]
[0.000000000000000,
0.000000000000000,
0.000000000000000,
0.000000000000000,
0.000000000000000,
0.000000000000000,
-6.93889390390723e-18,
0.000000000000000,
0.000000000000000,
0.000000000000000,
1.38777878078145e-17,
1.38777878078145e-17,
1.38777878078145e-17,
2.77555756156289e-17,
2.77555756156289e-17,
0.000000000000000,
0.000000000000000,
0.000000000000000,
-2.77555756156289e-17,
-2.77555756156289e-17]


(it gets worse as you go further). srange could in principle test its arguments and revert to a more stable generation formula in case the arguments are floats.
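Such a stabler formula is easy to sketch in plain Python (`stable_srange` here is a hypothetical helper for illustration, not Sage's `srange`):

```python
def stable_srange(start, stop, step):
    """Generate grid points as start + n*step (one rounding per point)
    instead of accumulating additions (one rounding per step).
    Hypothetical helper, not Sage's actual srange."""
    n_steps = int(round((stop - start) / step))
    return [start + n * step for n in range(n_steps)]

L = stable_srange(-1, 1, 0.01)
print(L[50] == -0.5)  # True with the index formula
```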


I see. So the RealLiteral created from 0.5 attempts to represent the actual real number, as opposed to the machine number? I never realized Sage actually tried to do that - I thought all RR elements should automatically be considered to have only 53-bit accuracy. Very interesting.

(2017-03-03 07:48:52 -0600)

Also, I definitely was not using this in a production application that needed precision - just creating an interactive graphic for class that wasn't behaving correctly because of this! I see now it is because srange iteratively adds the float, as opposed to making the literals and then turning them into floats - not what I would have suspected. Strangely, I've never really paid much attention to srange. Thank you!

(2017-03-03 07:50:58 -0600)

Yes, although RealLiteral is rather limited in which real numbers it can represent (I think only decimal literals, or, if you manage to specify a different base in a float string, n-ary floats for small values of n; it just keeps the string representation for casting into real fields of different precision). The problem here is not caused by the difference between RealLiteral and RealNumber. It's just accumulated error from repeated addition.
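That keep-the-string mechanism can be sketched in plain Python with the standard decimal module (this Literal class is a hypothetical illustration of the idea, not Sage's actual implementation):

```python
from decimal import Decimal, getcontext

class Literal:
    """Sketch of the RealLiteral idea: store the exact decimal text and
    round it only when a target precision is requested, instead of
    committing to one binary rounding up front."""
    def __init__(self, text):
        self.text = text  # exact decimal string, no rounding yet

    def to_precision(self, digits):
        getcontext().prec = digits
        return +Decimal(self.text)  # unary + rounds to the context precision

x = Literal("0.123456789")
print(x.to_precision(4))  # rounded only now, to 4 significant digits
print(x.to_precision(9))  # same literal, re-rounded at higher precision
```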

(2017-03-03 11:20:44 -0600)

Well, it was caused by that difference in the sense that I didn't realize repeated addition was what srange did :)

(2017-03-04 16:33:08 -0600)