Hi,
This question is related to question 2402 (still open!) and tries to collect the differences between RDF = RealDoubleField() and RR = RealField(53). These are two floating-point real number fields, both with 53 bits of precision. The first one uses the processor's floating-point arithmetic, while the second one is "emulated" in software by MPFR. They are supposed to follow the same rounding standard (round to nearest, according to the Sage book, but I may be wrong).
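As a quick sanity check, both fields do report the same precision (I believe the prec() method is available on both, though the exact interface may vary across Sage versions):

sage: RDF.prec()
53
sage: RR.prec()
53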
However, we can see some differences between them:
sage: RDF(1/10)*10 == RDF(1)
False
sage: RDF(1/10)*10 - RDF(1)
-1.11022302463e-16
sage: RR(1/10)*10 == RR(1)
True
sage: RR(1/10)*10 - RR(1)
0.000000000000000
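One way to probe this (just a sketch: I am assuming conversion through Python's float() and its hex() method here, and the outputs below are deduced from the differences shown above rather than rechecked):

sage: float(RDF(1/10)*10).hex()   # one ulp below 1.0, matching the -1.11e-16 above
'0x1.fffffffffffffp-1'
sage: float(RR(1/10)*10).hex()    # exactly 1.0
'0x1.0000000000000p+0'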
Could you explain that?
There are also differences in which field should be used for certain methods.
For example, it seems that eigenvalues are not computed correctly over RR, but are computed correctly over RDF (see trac #13660). What is the reason for that?
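For the record, a minimal comparison one could run (a sketch: the exact behaviour over RR depends on the Sage version, and I believe the RDF path goes through numpy/scipy while RR falls back to a generic algorithm):

sage: matrix(RDF, [[1, 2], [3, 4]]).eigenvalues()   # numerically accurate
sage: matrix(RR, [[1, 2], [3, 4]]).eigenvalues()    # may be inaccurate or raise an error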
Also, it seems that when dealing with large matrices, the fast ATLAS library is only used when the entries are in RDF, not in RR (see trac #10815).
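A rough benchmark sketch to illustrate the gap (unscientific, and assuming random_matrix and timeit behave as in current Sage):

sage: A = random_matrix(RDF, 500); B = random_matrix(RDF, 500)
sage: timeit('A * B')   # fast: dispatched to the optimized BLAS (ATLAS)
sage: C = random_matrix(RR, 500); D = random_matrix(RR, 500)
sage: timeit('C * D')   # much slower: generic matrix multiplication over RR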
Are there other differences that should be known between the two implementations of floating-point numbers in Sage?
Thierry