Defining constraint equations for minimize_constrained

I'm trying to think of a way to do constrained optimization where both the objective function that I want to minimize and my constraint function are computed inside the same overarching function. Let's say I have a function like the following:

def complicated_function(x, y, z):
    # ...some lengthy and complicated calculation producing both values
    return f, g

Is there a way to minimize f by varying x, y, z subject to g >= 0?

Looking at the documentation for sage.numerical.optimize.minimize_constrained, it looks as if I have to define each of my constraint functions individually, taking the variables as a tuple. I could maybe wrap my complicated_function up like this:

def funcObjective(x):
    f, g = complicated_function(x[0], x[1], x[2])
    return f

def funcConstraint(x):
    f, g = complicated_function(x[0], x[1], x[2])
    return g
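
For reference, I'd then call it something like this (the starting point x0 here is just a placeholder):

    from sage.numerical.optimize import minimize_constrained

    x0 = [1.0, 1.0, 1.0]  # placeholder initial guess
    sol = minimize_constrained(funcObjective, [funcConstraint], x0)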

But that would mean complicated_function gets called multiple times per optimization iteration, which seems highly inefficient, and the problem would only get worse with more constraints. Any ideas on how to define the constraint without re-evaluating complicated_function?
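The best workaround I've come up with so far is to cache the evaluations so the objective and constraint wrappers share a single call per point. This is just a sketch (make_cached_wrappers and the cache-keyed-by-tuple trick are my own construction, not anything from the Sage API):

    def make_cached_wrappers(complicated_function):
        cache = {}  # maps a point (as a tuple) to its (f, g) pair

        def evaluate(x):
            # Only call complicated_function once per distinct point.
            key = tuple(x)
            if key not in cache:
                cache[key] = complicated_function(x[0], x[1], x[2])
            return cache[key]

        def func_objective(x):
            return evaluate(x)[0]  # f

        def func_constraint(x):
            return evaluate(x)[1]  # g

        return func_objective, func_constraint

That way each distinct point is evaluated once no matter how many wrappers read from it, though I don't know whether this interacts badly with how minimize_constrained perturbs the point when estimating gradients numerically.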

On my internet wanderings I found the recently released (June 6, 2011) pyOpt 1.0 (journal article), which at first glance looks well suited to the problem. I see OpenOpt is an experimental package for Sage, but I'm not sure it's suitable; the pyOpt documentation seems clearer at first glance. Any chance pyOpt could be made an optional package for Sage? It's published under the GNU Lesser General Public License.