What are the differences between RealDoubleField() and RealField(53)?
Hi,
This question is related to question 2402 (still open!) and tries to collect differences between RDF=RealDoubleField() and RR=RealField(53). These are two floating-point real number fields, both with 53 bits of precision. The first uses the processor's floating-point arithmetic; the second is "emulated" by MPFR. They are assumed to follow the same rounding rule (round to nearest, according to the Sage book, but I may be wrong).
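As a plain-Python sanity check (outside Sage): CPython's `float` is the same hardware IEEE-754 double that backs RDF, and its significand width can be read off directly; this is just an illustration of the "53 bits" claim, not Sage code.

```python
import sys

# CPython floats are IEEE-754 binary64, the same format RDF wraps.
# mant_dig reports the significand width in bits, including the implicit bit.
print(sys.float_info.mant_dig)  # 53
```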
However, we can see some differences between them:
sage: RDF(1/10)*10 == RDF(1)
False
sage: RDF(1/10)*10 - RDF(1)
-1.11022302463e-16
sage: RR(1/10)*10 == RR(1)
True
sage: RR(1/10)*10 - RR(1)
0.000000000000000
Could you explain that? EDIT: this was a bug and it is now fixed, see trac ticket 14416.
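One way to see why the original `False` was a bug: in correctly rounded IEEE-754 double arithmetic (which is what RDF is supposed to be), the nearest double to 1/10, multiplied by 10, rounds back to exactly 1.0. Plain CPython floats are such doubles, so this sketch (not Sage code) shows the expected behaviour:

```python
# 0.1 is the IEEE-754 double nearest to 1/10 (not exactly 1/10),
# but the rounding error cancels when multiplying by 10 with
# round-to-nearest, giving exactly 1.0.
x = 0.1
print(x * 10 == 1.0)  # True
```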
There are also some specificities on which field should be used for some methods.
For example, it seems that eigenvalues are not computed correctly over RR, but are computed correctly over RDF (see trac #13660). What is the reason for that?
Also, it seems that when dealing with huge matrices, the fast ATLAS library is only used when the entries are in RDF, not in RR (see trac #10815).
Are there other differences that should be known between both implementations of floating-point numbers in Sage?
Thierry
It is worth noticing that the exponent is bounded in RDF and not in RR:
sage: RDF(1/2^10000) == 0
True
sage: RR(1/2^10000) == 0
False
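The same underflow can be reproduced with plain CPython floats, which share RDF's bounded exponent (the smallest positive subnormal double is 2^-1074); contrasting with an exact representation such as `fractions.Fraction` plays the role RR plays above. This is an illustration of the underflow, not Sage code:

```python
from fractions import Fraction

# IEEE-754 doubles underflow: 2^-10000 is far below the smallest
# subnormal (2^-1074), so it flushes to exactly 0.0.
print(2.0 ** -10000 == 0.0)  # True

# An exact rational keeps the value nonzero, like RR does above.
print(Fraction(1, 2 ** 10000) == 0)  # False
```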
As for matrices, I think that for a lot of these methods we hand things off to numpy and friends, which only work in double precision... something along those lines, I'm not precisely sure. Some methods are even only available over `RDF`.
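For instance, numpy's LAPACK-backed eigenvalue routines work on `float64` arrays, i.e. the same hardware doubles RDF uses; a sketch of the kind of call Sage would delegate to (assuming numpy is installed, and using a small symmetric matrix whose eigenvalues are known to be 1 and 3):

```python
import numpy as np

# numpy/LAPACK compute eigenvalues in hardware double precision (float64),
# which is why RDF matrices can use these fast routines directly.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w = np.linalg.eigvals(A)
print(np.allclose(sorted(w), [1.0, 3.0]))  # True
```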