
Using Sage Symbolic Functions in Scipy fsolve

asked 2011-04-10 21:52:57 +0100 by cswiercz

I need to find the roots of a system of multivariate non-linear (algebraic) functions, and Sage's solve function is running indefinitely. (Where "indefinitely" means "more than 30 mins".) I only require numerical solutions, so I was hoping to use Scipy's fsolve function instead. However, the functions that I'm generating are Sage symbolic functions, which have been mightily convenient to work with, and I'm having trouble getting them into a format that fsolve will understand. Does anyone have any suggestions?

The documentation for Scipy's fsolve can be found here. It looks like a Python function is sufficient.
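
(Not from the thread, just a minimal sketch of the interface fsolve expects: a plain Python callable that takes a length-n sequence and returns n residuals. The circle/parabola system used in the answers below is borrowed here for concreteness.)

from scipy.optimize import fsolve

def equations(v):
    # v is a length-2 sequence; return one residual per unknown.
    x, y = v
    return [x**2 + y**2 - 1, x - y**2]

print fsolve(equations, [1, 1])   # -> approximately [ 0.61803399  0.78615138]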

As a toy example, I've tried the following:

sage: from scipy.optimize import fsolve
sage: var('x,y')
sage: f = x^2 + y^2
sage: def ff(v):
        return f(x=v[0], y=v[1])
sage: fsolve(ff,[1,1])

It seems like the naive approach, but I'm receiving the error TypeError: no canonical coercion from <type 'numpy.int64'> to Symbolic Ring. Perhaps this is an issue with Sage properly dealing with Numpy/Scipy data types?
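
(The error can in fact be reproduced outside of fsolve entirely; a minimal sketch, assuming the failure is triggered by substituting numpy scalars into a symbolic expression:)

sage: import numpy
sage: var('x, y')
(x, y)
sage: f = x^2 + y^2
sage: f(x=numpy.int64(1), y=numpy.int64(1))   # raises: no canonical coercion ... to Symbolic Ring
sage: f(x=float(1), y=float(1))               # fine once the values are plain Python floats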


3 Answers


answered 2011-04-11 04:47:24 +0100 by Kelvin Li (updated 2011-04-11 14:47:23 +0100)

The original scenario involves two variables but only one equation. Because ff takes a length-2 input yet returns a single value rather than a list (or tuple, or array, ...) of length 2, the "shape mismatch" error is thrown. Here is a working example (with two equations and minimal modifications):

import scipy.optimize

var('x,y')
f = x^2 + y^2 - 1
g = x - y^2

def ff(v):
    # fsolve passes numpy scalars; convert them to Python floats so that
    # Sage can substitute them into the symbolic expressions.
    x, y = map(float, v)
    return [f(x=x, y=y), g(x=x, y=y)]

print scipy.optimize.fsolve(ff, [1, 1])

Here is a more compact version, using a lambda and a callable symbolic function:

import scipy.optimize

var('x,y')
f(x,y) = (x^2 + y^2 - 1, x - y^2)

print scipy.optimize.fsolve(lambda v: f(*map(float, v)), (1, 1))

Since you said you have a large number of variables and equations, here is a slightly more "scale-friendly" approach (but somewhat kludgy):

import scipy.optimize

# Create symbolic variables "x_0" and "x_1".
# This is a pretty messy kludge
x = tuple(
    SR.var(
        "".join(['x_', str(t)])
    )
    for t in range(2)
)

# Represent system of equations using "x" as defined above.
# I use the intermediate "exprs" variable to allow breaking up the tuple
# over multiple lines.
exprs = (
    x[0]^2 + x[1]^2 - 1,
    x[0] - x[1]^2
)

f(*x) = exprs

# Guess values
guess = (1, 1)

print scipy.optimize.fsolve(lambda v: f(*map(float, v)), guess)

In all cases, the result should look like: [ 0.61803399 0.78615138]
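
As a quick sanity check (a sketch reusing ff from the first snippet, not part of the original answer), the residuals at the returned point should be essentially zero:

sol = scipy.optimize.fsolve(ff, [1, 1])
print sol        # approximately [ 0.61803399  0.78615138]
print ff(sol)    # both residuals should be vanishingly small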


Comments

I realize now that my toy example doesn't accurately reflect my real situation where I have as many nonlinear equations as I have unknowns. I'll try using your suggestion.

cswiercz (2011-04-11 13:00:14 +0100)

My other question is whether or not this is the most efficient way to approach the problem, or if there is (hopefully) a one-liner that will wrap my symbolic functions as Python functions. I suppose I could use lambda notation, but at this point I'm just talking to myself. :) Thanks for your response!

cswiercz (2011-04-11 13:36:32 +0100)

Thank you very much for the tips!

cswiercz (2011-04-11 14:50:48 +0100)

answered 2011-04-13 01:51:48 +0100 by DSM

The "minimize" wrapper has better Sage integration (e.g. symbolic gradients). The equivalent of the above example would be something like

sage: var("x, y")
(x, y)
sage: f = x^2 + y^2 - 1
sage: g = x - y^2
sage: eqs = f, g
sage: minimize(norm(vector(eqs)), [1.0, 1.0])
Divide-by-zero encountered: rhok assumed large
Warning: Desired error not necessarily achieved due to precision loss
         Current function value: 0.000000
         Iterations: 57
         Function evaluations: 124
         Gradient evaluations: 123
(0.61803398875, 0.786151377757)

So if you wanted a one-liner, something along those lines might do it.
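
Spelled out as a single line (this is only a condensation of the example above, with the same equations and starting point):

sage: minimize(norm(vector([x^2 + y^2 - 1, x - y^2])), [1.0, 1.0])   # prints the same diagnostics, returns roughly (0.618..., 0.786...)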


Comments

Thanks for the one-liner suggestion! Since I'm looking for roots and my functions aren't necessarily non-negative, I might have to take absolute values for this method to work. I'll try it out in my real problem.

cswiercz (2011-04-13 10:10:05 +0100)

That's why I put in the "norm" call.

DSM (2011-04-13 11:21:57 +0100)

Of course. D'oh.

cswiercz (2011-04-13 13:17:04 +0100)

answered 2011-04-11 03:36:18 +0100 by Felix Lawrence

This is one of the problems you're having:

sage: ff(scipy.array([1,1]))
TypeError: no canonical coercion from <type 'numpy.int64'> to Symbolic Ring

This can be remedied by passing in floats instead:

sage: ff(scipy.array([1.,1.]))
2

But this merely exposes another problem:

sage: scipy.optimize.fsolve(ff, scipy.array([1.,1.]))
TypeError: There is a mismatch between the input and output shape of ff.

And I don't know how to solve that one.


Comments

I think Kelvin Li correctly identified the issue behind that last TypeError. Exploring some more of the Scipy documentation shows no examples for "underdetermined" systems, which may still have local minima / maxima / solutions.

cswiercz (2011-04-11 13:33:52 +0100)
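
One possible workaround for an underdetermined system like the original toy example (one equation, two unknowns), sketched here in the spirit of the minimize answer above rather than taken from the thread: minimize the squared residual and accept whichever root the optimizer happens to find.

sage: var('x, y')
(x, y)
sage: f = x^2 + y^2                 # the single equation from the question
sage: minimize(f^2, [1.0, 1.0])     # squaring keeps the objective non-negative; should converge toward (0, 0), the only real root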
