
solve multidimensional nonlinear system with scipy

asked 2011-01-17 17:49:03 +0100 by stanislav, updated 2015-04-07 16:47:44 +0100 by niles

Hello, I'm trying to solve a nonlinear system of equations numerically with Sage, but I can't see an easy way to do it.

All I have found is to use the scipy.optimize library, but I cannot get it to work with Sage functions.

As a small example:

sage: f(x,y) = (x^2+y,y^2-x+2)
sage: optimize.broyden2(f,[0,0])
Traceback (click to the left of this block for traceback)
...
TypeError: no canonical coercion from <type 'tuple'> to Callable
function ring with arguments (x, y)

How should I define my function f so that the broyden2 method works?

Or are there other possibilities?

I thought Sage was an all-in-one solution combining computer algebra and numerical methods.

Can someone help me, please?


3 Answers


answered 2011-01-17 22:29:32 +0100 by DSM, updated 2011-01-17 22:46:01 +0100

It's worth mentioning that the scipy multidimensional optimizers like broyden2 are often very sensitive to initial conditions.

For example (using the "f(*z)" syntax to turn the vector of arguments scipy.optimize supplies into the separate arguments f needs):

sage: f(x,y) = (x^2+y, y^2-x+2)
sage: sol=vector(optimize.broyden2(lambda z: f(*z), [0., 0.])) # note 0. not 0, to get the coercion right
sage: sol
(-0.445049106949, 1.09664015201)
sage: f(*sol)
(1.29470885961, 3.64766872995)
sage: vector(f(*sol)).norm()
3.87062762283
sage: 
sage: for startloc in CartesianProduct([(-3.)..(0.)], [(-3.)..(0.)]):
....:    sol=vector(optimize.broyden2(lambda z: f(*z), startloc))
....:    print startloc, sol, f(*sol), vector(f(*sol)).norm()
....: 
[-3.00000000000000, -3.00000000000000] (0.592528157331, -0.832520486937) (-0.481430869706, 2.10056220384) 2.15502604497
[-3.00000000000000, -2.00000000000000] (0.284633829217, -0.299571581674) (-0.218555164939, 1.80510930333) 1.81829204395
[-3.00000000000000, -1.00000000000000] (1.77544832692, -0.455717154906) (2.69649960665, 0.432229798359) 2.73092158936
[-3.00000000000000, 0.000000000000000] (-0.262258372557, 2.46143841915) (2.53021787312, 8.32093746381) 8.69712612086
[-2.00000000000000, -3.00000000000000] (0.475497108447, 3.03104628731) (3.25714378745, 10.7117444873) 11.1960017691
[-2.00000000000000, -2.00000000000000] (-1.12430279872, -0.185723262141) (1.07833352106, 3.15879592882) 3.33778293221
[-2.00000000000000, -1.00000000000000] (0.0922121941783, 0.648230430052) (0.656733518807, 2.32799049627) 2.41885069102
[-2.00000000000000, 0.000000000000000] (-0.181200179841, 1.00534124397) (1.03817474914, 3.19191119667) 3.35650173502
[-1.00000000000000, -3.00000000000000] (-0.658431945525, -0.19952862808) (0.234003998808, 2.69824361895) 2.70837155846
[-1.00000000000000, -2.00000000000000] (-0.2997258552, 1.45003224177) (1.53986783005, 4.40231935738) 4.66386198963
[-1.00000000000000, -1.00000000000000] (-0.184337236075, 0.0620231427285) (0.0960033593323, 2.18818410631) 2.19028909692
[-1.00000000000000, 0.000000000000000] (3.07457749063, -4.84309303007) (4.60993371584, 22.3809726073) 22.8508079444
[0.000000000000000, -3.00000000000000] (2.08531742062, 8.58333035147) (12.9318790962, 73.5882425019) 74.7158813873
[0.000000000000000, -2.00000000000000] (-0.625399327611, 1.17293600206) (1.56406032104, 4.00117819254) 4.29601112851
[0.000000000000000, -1.00000000000000] (-1.64468007664, -1.36976452276) (1.33520803175, 5.52093492444) 5.6800970879
[0.000000000000000, 0.000000000000000] (-0.445049106949, 1.09664015201) (1.29470885961, 3.64766872995) 3.87062762283
# norms all over the place!

I would actually recommend using the higher-level 'minimize' function, which calls some of the scipy routines, rather than using the internal scipy ones directly: it does a better job of wrapping Sage-native objects. http://www.sagemath.org/doc/reference/sage/numerical/optimize.html

sage: f(x,y) = (x^2+y, y^2-x+2)
sage: sol = minimize(norm(f), [0., 0.]) # add disp=0 to get it to be quiet
Optimization terminated successfully.
         Current function value: 1.288361
         Iterations: 6
         Function evaluations: 9
         Gradient evaluations: 9
sage: sol
(0.935264263401, -0.26730415417)
sage: f(*sol)
(0.607415088225, 1.13618724744)
sage: vector(f(*sol)).norm()
1.28836118796

I'm too lazy to figure out whether that's the global minimum or not, but a quick brute-force scan suggests it got pretty close. (IMHO we should probably wrap OpenOpt or something; optimization is an important enough problem to be worth doing well.)
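Such a brute-force scan can be sketched in plain Python with numpy (this is my own sketch, not the scan DSM actually ran; the grid bounds and step size are arbitrary choices):

```python
import numpy as np

# Residual norm of the system f(x, y) = (x^2 + y, y^2 - x + 2).
def resid_norm(x, y):
    return np.sqrt((x**2 + y)**2 + (y**2 - x + 2)**2)

# Coarse grid scan over a box around the origin (bounds/step are arbitrary).
xs = np.arange(-2.0, 2.0, 0.05)
X, Y = np.meshgrid(xs, xs)
Z = resid_norm(X, Y)

# Locate the grid point with the smallest residual norm.
i, j = np.unravel_index(np.argmin(Z), Z.shape)
print(X[i, j], Y[i, j], Z[i, j])  # grid minimum, near minimize()'s 1.2884
```

The grid minimum only bounds the true minimum from above, of course, but it agrees with the value minimize found.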



answered 2011-01-17 18:59:08 +0100 by Shashank

Try this. Does it help?

from scipy import optimize

def f(z):
    x, y = z    # unpack the argument vector scipy passes in
    # ^ is preparsed to exponentiation by Sage; use ** in plain Python
    return (x^2 + y, y^2 - x + 2)

sol = optimize.broyden2(f, [0, 0])
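Outside Sage the same callback needs ** for powers, since ^ is bitwise XOR in plain Python. A standalone sketch (using fsolve here just as one scipy root-finder whose return flag reports whether a root was actually found; whether broyden2 or fsolve converges on this particular system depends on the starting point, as DSM's answer shows):

```python
from scipy.optimize import fsolve

def f(z):
    x, y = z
    # Plain Python: ** for powers (^ would be bitwise XOR).
    return (x**2 + y, y**2 - x + 2)

# full_output=True returns a convergence flag instead of raising/warning.
sol, info, ier, mesg = fsolve(f, [0.0, 0.0], full_output=True)
print(sol, ier, mesg)  # ier == 1 only if a root was actually found
```

Checking ier (or the residual f(*sol)) before trusting sol avoids the "norms all over the place" surprise from the first answer.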

answered 2011-01-18 08:01:53 +0100 by stanislav, updated 2011-01-18 08:03:14 +0100

Hello,

@ chicago: This variant works well, but it is a little too long for bigger systems. Thank you, of course.

@ DSM: It's really funny: I read your answer this morning and didn't know how to use minimize with a system of functions. But I had a numerics lecture today where we did exactly this, so now I know how to use the minimize method with a system.

The variant with the lambda function does not work for me; I get wrong solutions compared with the minimize method.

I think I will use minimize as often as I can ;) Thanks for the quick replies.


Comments

Some advice: you should probably spend some time reading up on how the various optimization methods work before trusting any optimization routine too much. If you don't know what they're doing you may not know all the ways in which they can fail to give you the answers you might be expecting, or what functions they're definitely going to give you the wrong answers on. In this particular case you can find the right answer by setting the derivative of the norm equal to zero and solving, but that's sometimes not practical.
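Setting the derivative of the (squared) norm to zero, as suggested above, can also be done numerically. A plain-Python sketch (the hand-derived gradient and the starting guess are mine; it recovers the same point minimize reported):

```python
import numpy as np
from scipy.optimize import fsolve

# Gradient of the squared residual g(x, y) = (x^2+y)^2 + (y^2-x+2)^2.
def grad(z):
    x, y = z
    u = x**2 + y
    v = y**2 - x + 2
    return (4*x*u - 2*v, 2*u + 4*y*v)

# Start near the minimize() answer quoted above (initial guess is mine).
x, y = fsolve(grad, [0.9, -0.3])
print(x, y)                                  # stationary point of the norm
print(np.hypot(x**2 + y, y**2 - x + 2))      # residual norm there, ~1.2884
```

Agreement between the stationary point of the gradient and the minimizer is a cheap sanity check that the optimizer did not stop at a spurious point.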

DSM ( 2011-01-18 08:16:56 +0100 )
