# Revision history

Here are some remarks:

First, remove any time-related lines and variables from your function. You can later time it with:

sage: %time Newton_Raphson(f, 0.2)


or even

sage: %timeit Newton_Raphson(f, 0.2)


so that the timing is averaged over many executions.

Then, you might want to run something like:

sage: %time Newton_Raphson(f,1)


and see that your code is very slow.

To understand what happens, let me suggest adding some debug information just after the while statement:

print start


As you can see, the value of the start real number is expressed as a huge symbolic expression involving exponentials. So you can imagine that handling such an expression, and testing inequalities with it, is very slow (and slower and slower, since the size of the expression increases at each iteration).
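The same blow-up can be seen in plain Python (a rough analogue, not your Sage code) by running the Newton step in exact rational arithmetic: each iterate is stored exactly, so its representation grows quickly, just as your symbolic expression does.

```python
from fractions import Fraction

# Newton iteration for f(x) = x^2 - 2, kept in exact rational arithmetic.
# The number of digits in the exact iterate roughly doubles at each step,
# so every evaluation and comparison gets slower and slower.
x = Fraction(3, 2)
digits = []
for _ in range(6):
    x = x - (x * x - 2) / (2 * x)        # one exact Newton step
    digits.append(len(str(x.numerator)))  # size of the exact representation
print(digits)
```

A float would stay the same size at every step; the exact representation does not, and that is exactly what makes the symbolic version crawl.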

Since the Newton-Raphson iteration will never reach the zero of the function exactly, what you are looking for is an approximation, so you do not care to have a long exact expression for a point whose only virtue is to be close to the zero. So, you may want a numerical approximation. For this, you can transform each of foo, dfoo and ddfoo into functions transforming floating-point numbers into floating-point numbers, which you can do by using the fast_float function.

So, just after defining dfoo and ddfoo, it suffices to add the lines:

foo = fast_float(foo)
dfoo = fast_float(dfoo)
ddfoo = fast_float(ddfoo)


Of course, you should not redefine foo before defining dfoo, since Sage is not able to compute the derivative of such a numerical function; you prefer exact symbolic computations for derivatives.

With this improvement, you now get:

sage: %time Newton_Raphson(f,0)
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 3.25 ms
(0.8571428346650166, 7, 0.0)


Hmm, apparently the error is zero. The reason is that your code contains:

start = NR
error = (NR - start)


so of course the error is zero! Perhaps you should swap those two lines.
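With the two lines swapped, the loop computes the error before overwriting the previous iterate. A minimal plain-Python sketch of the corrected loop (the function names, the tolerance and the test function are assumptions, not your exact code) looks like:

```python
def newton_raphson(foo, dfoo, start, tol=1e-12, max_iter=100):
    """Return (approximate root, iterations, last error)."""
    for n in range(1, max_iter + 1):
        NR = start - foo(start) / dfoo(start)  # one Newton step
        error = abs(NR - start)                # measure BEFORE overwriting start
        start = NR
        if error < tol:
            break
    return start, n, error

# Example with a classical test function, f(x) = x^3 - 2x - 5:
root, n, err = newton_raphson(lambda x: x**3 - 2*x - 5,
                              lambda x: 3*x**2 - 2,
                              2.0)
```

Since error is computed from the old value of start and the new iterate NR, it is nonzero until the iteration actually converges, and the returned triple now matches the shape of your output (root, iteration count, error).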
