It's worth mentioning that the scipy multidimensional solvers like broyden2 are local methods, so the results are often very sensitive to the initial conditions.

For example (using the f(*z) syntax to unpack the vector of arguments that scipy.optimize supplies into the separate arguments f expects):

sage: from scipy import optimize  # broyden2 lives in scipy.optimize
sage: f(x,y) = (x^2+y, y^2-x+2)
sage: sol=vector(optimize.broyden2(lambda z: f(*z), [0., 0.])) # note 0. not 0, to get the coercion right
sage: sol
(-0.445049106949, 1.09664015201)
sage: f(*sol)
(1.29470885961, 3.64766872995)
sage: vector(f(*sol)).norm()
3.87062762283
sage: 
sage: for startloc in CartesianProduct([(-3.)..(0.)], [(-3.)..(0.)]):  # 4x4 grid of starting points
....:    sol=vector(optimize.broyden2(lambda z: f(*z), startloc))
....:    print startloc, sol, f(*sol), vector(f(*sol)).norm()
....: 
[-3.00000000000000, -3.00000000000000] (0.592528157331, -0.832520486937) (-0.481430869706, 2.10056220384) 2.15502604497
[-3.00000000000000, -2.00000000000000] (0.284633829217, -0.299571581674) (-0.218555164939, 1.80510930333) 1.81829204395
[-3.00000000000000, -1.00000000000000] (1.77544832692, -0.455717154906) (2.69649960665, 0.432229798359) 2.73092158936
[-3.00000000000000, 0.000000000000000] (-0.262258372557, 2.46143841915) (2.53021787312, 8.32093746381) 8.69712612086
[-2.00000000000000, -3.00000000000000] (0.475497108447, 3.03104628731) (3.25714378745, 10.7117444873) 11.1960017691
[-2.00000000000000, -2.00000000000000] (-1.12430279872, -0.185723262141) (1.07833352106, 3.15879592882) 3.33778293221
[-2.00000000000000, -1.00000000000000] (0.0922121941783, 0.648230430052) (0.656733518807, 2.32799049627) 2.41885069102
[-2.00000000000000, 0.000000000000000] (-0.181200179841, 1.00534124397) (1.03817474914, 3.19191119667) 3.35650173502
[-1.00000000000000, -3.00000000000000] (-0.658431945525, -0.19952862808) (0.234003998808, 2.69824361895) 2.70837155846
[-1.00000000000000, -2.00000000000000] (-0.2997258552, 1.45003224177) (1.53986783005, 4.40231935738) 4.66386198963
[-1.00000000000000, -1.00000000000000] (-0.184337236075, 0.0620231427285) (0.0960033593323, 2.18818410631) 2.19028909692
[-1.00000000000000, 0.000000000000000] (3.07457749063, -4.84309303007) (4.60993371584, 22.3809726073) 22.8508079444
[0.000000000000000, -3.00000000000000] (2.08531742062, 8.58333035147) (12.9318790962, 73.5882425019) 74.7158813873
[0.000000000000000, -2.00000000000000] (-0.625399327611, 1.17293600206) (1.56406032104, 4.00117819254) 4.29601112851
[0.000000000000000, -1.00000000000000] (-1.64468007664, -1.36976452276) (1.33520803175, 5.52093492444) 5.6800970879
[0.000000000000000, 0.000000000000000] (-0.445049106949, 1.09664015201) (1.29470885961, 3.64766872995) 3.87062762283
# norms all over the place!
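
If you do need to call broyden2 directly, one cheap workaround for this sensitivity is a multi-start sweep: run the solver from several starting points and keep whichever answer has the smallest residual norm. A minimal sketch along those lines (best_root is a hypothetical helper, not something in Sage or scipy):

sage: def best_root(func, starts):
....:     # hypothetical helper (not in Sage or scipy): run broyden2 from
....:     # each start and keep the answer with the smallest residual norm
....:     best = None
....:     for s in starts:
....:         try:
....:             cand = vector(optimize.broyden2(lambda z: func(*z), s))
....:         except Exception:
....:             continue  # some starting points may simply fail to converge
....:         resid = vector(func(*cand)).norm()
....:         if best is None or resid < best[1]:
....:             best = (cand, resid)
....:     return best  # (solution, residual norm), or None if all starts failed
sage: sol, resid = best_root(f, CartesianProduct([(-3.)..(0.)], [(-3.)..(0.)]))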

I would actually recommend using the higher-level 'minimize' function, which calls some of the scipy routines under the hood, rather than the internal scipy ones directly: it does a better job of wrapping Sage-native objects. http://www.sagemath.org/doc/reference/sage/numerical/optimize.html

sage: f(x,y) = (x^2+y, y^2-x+2)
sage: sol = minimize(norm(f), [0., 0.]) # add disp=0 to get it to be quiet
Optimization terminated successfully.
         Current function value: 1.288361
         Iterations: 6
         Function evaluations: 9
         Gradient evaluations: 9
sage: sol
(0.935264263401, -0.26730415417)
sage: f(*sol)
(0.607415088225, 1.13618724744)
sage: vector(f(*sol)).norm()
1.28836118796

I'm too lazy to figure out whether that's the global minimum or not, but a quick brute-force scan suggests it must have gotten pretty close. (IMHO we should probably wrap OpenOpt or something similar; optimization is an important enough problem to be worth doing well.)
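
For the record, by "brute-force scan" I just mean evaluating the residual norm on a coarse grid and looking at the smallest value; a quick sketch (the [-5, 5] range and 0.5 spacing are arbitrary choices):

sage: pts = [(-5 + i/2., -5 + j/2.) for i in range(21) for j in range(21)]
sage: best = min((vector(f(p, q)).norm(), (p, q)) for (p, q) in pts)  # (smallest norm on the grid, where it occurs)

If the smallest norm on the grid comes out near the 1.288361 that minimize reported, that's crude but reassuring evidence the answer is close to the global minimum.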

Note that this was the second link returned after typing "optimize" into the reference manual search box.