By looking at the source code directly in Sage:

```
sage: find_fit??
```

or in the source code online, one sees that the first thing that happens to your data

```
sage: R = RealField(1000)
sage: data = [[R(1), R(5)], [R(5), sqrt(R(6))], [R(8), R(9)]]
sage: data
[[1.000000000000...00, 5.000000000000...00],
 [5.000000000000...00, 2.449489742783...84],
 [8.000000000000...00, 9.000000000000...00]]
```

is

```
sage: import numpy
sage: data = numpy.array(data, dtype=float)
sage: data
array([[ 1.        ,  5.        ],
       [ 5.        ,  2.44948974],
       [ 8.        ,  9.        ]])
```

Therefore, all the precision you gave is lost from the start :(
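Indeed, a Python float is a double, which carries only about 16 significant digits, as a quick check confirms (the exact repr may vary slightly between numpy versions):

```
sage: data[1, 1]
2.449489742783178
sage: numpy.finfo(float).eps
2.220446049250313e-16
```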

Then, a few lines later, your problem is reduced to a least-squares problem, which is solved with the function `scipy.optimize.leastsq` (*which in turn uses MINPACK's lmdif and lmder algorithms*, according to the doc):

```
from scipy.optimize import leastsq
estimated_params, d = leastsq(error_function, initial_guess, args = (x_data, y_data))
```
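Here, `error_function`, `initial_guess`, `x_data`, and `y_data` come from earlier in the `find_fit` source. To make the call pattern concrete, here is a minimal self-contained sketch of the same reduction (the quadratic model and all names are illustrative, not Sage's actual code):

```
import numpy
from scipy.optimize import leastsq

x_data = numpy.array([1.0, 5.0, 8.0])
y_data = numpy.array([5.0, 2.449489742783178, 9.0])

def model(x, a, b, c):
    # hypothetical model y = a*x^2 + b*x + c
    return a * x**2 + b * x + c

def error_function(params, x, y):
    # residuals; leastsq minimizes the sum of their squares
    return model(x, *params) - y

initial_guess = numpy.array([1.0, 1.0, 1.0])
estimated_params, d = leastsq(error_function, initial_guess, args=(x_data, y_data))
```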

Then, by looking at the documentation of `leastsq`:

```
sage: from scipy.optimize import leastsq
sage: leastsq?
```

you can see that some of the options you may be looking for are available:

```
ftol : float, optional
    Relative error desired in the sum of squares.
xtol : float, optional
    Relative error desired in the approximate solution.
gtol : float, optional
    Orthogonality desired between the function vector and the
    columns of the Jacobian.
```

The documentation `leastsq?` also gives you the default values that you may want to change: `ftol=1.49012e-08, xtol=1.49012e-08, gtol=0.0`.
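For example, a call with tighter tolerances might look like this (reusing the names from the sketch above; note that `leastsq` works in double precision internally, so there is a floor around machine epsilon below which tightening has no effect):

```
from scipy.optimize import leastsq

# illustrative tolerance values, tighter than the defaults
estimated_params, d = leastsq(error_function, initial_guess,
                              args=(x_data, y_data),
                              ftol=1e-15, xtol=1e-15, gtol=0.0)
```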

So now the next steps are that you redo on your own the reduction to the least-squares problem done in `find_fit`, without losing the precision, by *not* casting the entries to float. Then, you change the default values of `ftol`, `xtol`, and `gtol` in the call to `leastsq` to see whether you are able to achieve your goal of increasing the precision of the result.
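A minimal sketch of what that could look like, assuming a quadratic model (the model and all names are illustrative, and `leastsq` itself still converts the residuals to doubles, so this mainly avoids the early cast of your data):

```
from scipy.optimize import leastsq

R = RealField(1000)
data = [[R(1), R(5)], [R(5), sqrt(R(6))], [R(8), R(9)]]

var('a, b, c, x')
model = a*x^2 + b*x + c   # hypothetical model to fit

def error_function(params, data):
    # substitute into the symbolic model and evaluate each residual in
    # high precision, converting to float only at the very last moment
    pa, pb, pc = map(float, params)
    return [float(model.subs(a=pa, b=pb, c=pc, x=xi) - yi) for xi, yi in data]

initial_guess = [1.0, 1.0, 1.0]
estimated_params, d = leastsq(error_function, initial_guess, args=(data,),
                              ftol=1e-15, xtol=1e-15)
```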

Then, if you succeed, you may consider proposing an improvement to the Sage function `find_fit`.