# Revision history [back]

### Incorrect result for comparison (precision issues?)

Consider this session:

```
sage: if log(10) * 248510777753 < log(1024) * 82553493450:
....:     print('Less')
....: else:
....:     print('Not less')
....:
Not less
```


or more simply:

```
sage: bool(log(10)*248510777753 < log(1024)*82553493450)
False
```


But this is wrong, as we can see with higher-precision arithmetic:

```
sage: import mpmath
sage: mpmath.mp.dps = 22  # or anything greater
sage: bool(mpmath.log(10)*248510777753 < mpmath.log(1024)*82553493450)
True
```
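
For what it's worth, the same higher-precision cross-check can be reproduced without mpmath, using only Python's standard-library `decimal` module (the choice of 30 digits here is arbitrary, just comfortably above the ~16 significant digits of double precision):

```python
from decimal import Decimal, getcontext

# Work with 30 significant decimal digits -- well beyond the ~16
# digits of double precision at which the comparison went wrong.
getcontext().prec = 30

lhs = Decimal(10).ln() * 248510777753
rhs = Decimal(1024).ln() * 82553493450
print(lhs < rhs)  # True: the left-hand side really is smaller
```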


I guess this is happening because Sage is computing to some finite precision. But when writing some bigger program, it's scary that a condition involving variables, like say,

```
if m * q < n * p:
```


can silently give the wrong result and send the program down the wrong path. Is there a way to prevent this, i.e. to ensure that comparisons are carried out with as many bits of precision as are needed to decide them correctly, without our having to pre-specify a precision (which may be either too large and wasteful, or too small and still give incorrect results)?
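
One way to approximate that behaviour, sketched here with Python's standard `decimal` module (the function `compare_adaptive` and its ad-hoc error threshold are my own illustration, not a Sage API), is to re-evaluate the difference at increasing precision until its magnitude clearly exceeds the rounding noise of the computation:

```python
from decimal import Decimal, localcontext

def compare_adaptive(f, g, start_prec=30, max_prec=10000):
    """Return the sign (-1, 0 or 1) of f() - g(), re-evaluating both
    callables at increasing decimal precision until the difference
    clearly exceeds the rounding noise at that precision."""
    prec = start_prec
    while prec <= max_prec:
        with localcontext() as ctx:
            ctx.prec = prec
            a, b = f(), g()
            d = a - b
            # Crude error bound: a few ulps of the larger operand.
            noise = Decimal(10) ** (5 - prec) * max(abs(a), abs(b), Decimal(1))
            if abs(d) > noise:
                return -1 if d < 0 else 1
        prec *= 2
    return 0  # could not separate the values; they may be exactly equal

# The comparison from the question, now decided correctly:
print(compare_adaptive(lambda: Decimal(10).ln() * 248510777753,
                       lambda: Decimal(1024).ln() * 82553493450))  # -1
```

Note the caveat: if the two quantities are exactly equal, no finite precision can separate them, which is why the sketch gives up at `max_prec` instead of looping forever. A system that decides such comparisons rigorously needs interval or ball arithmetic with exact fallbacks, not just more digits.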

2 retagged FrédéricC