Or, for example:
sage: srange(0,1,.0001)[-10:]
[0.998999999999906, 0.999099999999906, 0.999199999999906, 0.999299999999906, 0.999399999999906, 0.999499999999906, 0.999599999999906, 0.999699999999906, 0.999799999999906, 0.999899999999906]
The point is that these are well within tolerance for 53-bit precision. The numpy output just looks exact because it rounds the values when it prints them.
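In fact, the drift comes from the step itself: .0001 has no exact binary representation, so every addition contributes a tiny rounding error. A quick way to see this (a sketch, assuming your Sage has the RealNumber method exact_rational()):

sage: 0.0001.exact_rational() == 1/10000   # the stored double is not exactly 1/10000
False
sage: abs(0.0001.exact_rational() - 1/10000) < 2^(-53)   # but the error is far below 53-bit tolerance
True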
sage: R = RealField(100)
sage: srange(R(0),R(1),R(.0001))[-10:]
[0.99900000000000000000000000170, 0.99910000000000000000000000170, 0.99920000000000000000000000170, 0.99930000000000000000000000170, 0.99940000000000000000000000170, 0.99950000000000000000000000170, 0.99960000000000000000000000170, 0.99970000000000000000000000170, 0.99980000000000000000000000170, 0.99990000000000000000000000170]
I'm sure that someone who understands machine numbers better can say why it's these particular numbers, but that's pretty much the story, I believe. Is there any particular reason why 1.14199999999999 would be worse than 1.14200000000000? If so, then you should use an exact ring (see the documentation for srange) and do something like
sage: srange(0,1,1/1000)[-10:]
[99/100, 991/1000, 124/125, 993/1000, 497/500, 199/200, 249/250, 997/1000, 499/500, 999/1000]
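If you eventually need floating-point values, you can build the grid exactly and convert only at the end, something like this (a sketch; nothing special about RDF here, RR works just as well):

sage: xs = srange(0, 1, 1/10000)    # exact rationals, no accumulated drift
sage: [RDF(x) for x in xs[-3:]]     # convert to doubles only when you need them
[0.9997, 0.9998, 0.9999]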
Good luck!