ASKSAGE: Sage Q&A Forum - Individual question feedhttp://ask.sagemath.org/questions/Q&A Forum for SageenCopyright Sage, 2010. Some rights reserved under creative commons license.Fri, 09 Dec 2011 06:55:01 -0600Defining constraint equations for minimize_constrainedhttp://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/I'm trying to think of a way to do constrained optimization where both the objective function that I want to minimise and my constraint function are calculated within the same overarching function. Let's say I have a function like the following:
def complicated_function(x,y,z):
    # ...some lengthy and complicated actions
    return f, g
Is there a way to minimize `f` by changing x, y, z, subject to `g >= 0`?
Looking at the documentation for [`sage.numerical.optimize.minimize_constrained`](http://www.sagemath.org/doc/reference/sage/numerical/optimize.html), it looks as if I have to define all my constraint equations individually, with the variables as a tuple. I could maybe wrap my `complicated_function` up like this:
def funcObjective(x):
    f, g = complicated_function(x[0], x[1], x[2])
    return f

def funcConstraint(x):
    f, g = complicated_function(x[0], x[1], x[2])
    return g
But that would mean `complicated_function` would be called multiple times per optimization iteration, which seems highly inefficient. The problem would only get worse if there were more constraints. Any ideas how to define the constraint without reevaluating `complicated_function`?
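One generic way to avoid the repeated evaluation, independent of Sage, is to memoize the overarching function so that the second wrapper's call at the same point becomes a cheap lookup. A minimal sketch in plain Python using `functools.lru_cache` (the function bodies here are made-up stand-ins for the "lengthy and complicated actions"; this is one possible approach, not the only one):

```python
from functools import lru_cache

calls = 0  # counts how often the expensive body actually runs

@lru_cache(maxsize=None)
def complicated_function(x, y, z):
    global calls
    calls += 1
    # stand-in for some lengthy and complicated actions
    f = (x * 2 - y * z) ** 2
    g = x + y - 9 * z + 4
    return f, g

def funcObjective(x):
    return complicated_function(x[0], x[1], x[2])[0]

def funcConstraint(x):
    return complicated_function(x[0], x[1], x[2])[1]

# Both wrappers query the same point, but the body runs only once.
funcObjective((1, 1, 1))
funcConstraint((1, 1, 1))
print(calls)  # -> 1
```

The cache keys on the argument values, so the constraint wrapper's call is served from the cache rather than recomputed.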
On my internet wanderings I found the recently released (June 6, 2011) [pyOpt 1.0](http://www.pyopt.org/index.html) ([journal article](http://mdolab.engin.umich.edu/sites/default/files/pyOpt.pdf)), which at first glance looks well suited to the problem. I see [OpenOpt](http://www.openOpt.org) is an experimental package for Sage. I'm not sure if OpenOpt is suitable; the pyOpt documentation is, at first glance, clearer. Any chance pyOpt could be made an optional package for Sage? It is published under the GNU Lesser General Public License.Wed, 07 Dec 2011 11:50:50 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/Answer by rtrwalker
http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12986#post-id-12986Thanks @DSM. I've tried to generalize your solution with the following:
def get3(fn, constraint_nums, x0, iobj=0):
    """
    Constrained minimization of a function that is output, along with
    constraint equations, from function fn.

    Arguments:
    fn              - function that returns a tuple containing the objective
                      function and the constraint equations
    constraint_nums - tuple or list containing index positions of the
                      constraint equations in the output of fn
    x0              - initial guess
    iobj            - index position of the objective function in the
                      output of fn (default 0)
    """
    a = minimize_constrained(lambda x, j=iobj: fn(x)[j],
                             [lambda x, j=i: fn(x)[j] for i in constraint_nums],
                             x0)
    return a
Running `get3(func_wrap, range(1,5), [2,3])` gives the correct result, (45.0, 6.25).
I had some trouble with the list comprehension building the lambda functions for the constraints. With `[lambda x: fn(x)[i] for i in constraint_nums]`, the `[i]` isn't hardcoded into each lambda: every lambda keeps a reference to the same variable `i`, and the comprehension leaves `i` at its final value, so all of the lambda functions were identical and only my last constraint equation was used. `[lambda x, j=i: fn(x)[j] for i in constraint_nums]` seems to work, because the default argument `j=i` captures the value of `i` at the time each lambda is defined.
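This is standard Python closure behaviour (late binding), not anything Sage-specific, and it can be shown without any optimizer at all. A tiny self-contained illustration:

```python
# Late binding: every lambda closes over the *variable* i, not its value,
# so after the comprehension finishes all of them see i's final value.
late = [lambda x: x + i for i in range(3)]
print([f(0) for f in late])   # -> [2, 2, 2]

# Default-argument trick: j=i is evaluated when each lambda is *defined*,
# freezing the current value of i into that lambda.
bound = [lambda x, j=i: x + j for i in range(3)]
print([f(0) for f in bound])  # -> [0, 1, 2]
```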
Two further questions:
1. Is there a way to determine the number of values returned by a function without actually evaluating it? If there were, I could further generalise `get3` by giving `constraint_nums` a default value.
2. What actually gets cached when I use `CachedFunction`? Is everything in the function stored, or just the output values? When should I be using `fn.clear_cache()`?
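I can't speak for the internals of Sage's caching decorator, but its plain-Python analogue `functools.lru_cache` (an assumption that it behaves comparably) stores only the return value, keyed by the argument tuple; nothing computed inside the function body is kept, and clearing the cache simply empties that mapping so the body runs again on the next call. A small sketch:

```python
from functools import lru_cache

runs = 0

@lru_cache(maxsize=None)
def expensive(p):
    global runs
    runs += 1            # side effect: only happens on a cache miss
    return p * p, p + 1  # only this return tuple is stored in the cache

expensive(3)
expensive(3)             # cache hit: body does not run again
print(runs)              # -> 1

expensive.cache_clear()  # analogous to fn.clear_cache() (assumption)
expensive(3)             # cache was emptied, so the body re-runs
print(runs)              # -> 2
```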
Fri, 09 Dec 2011 06:55:01 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12986#post-id-12986Answer by DSM
http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12981#post-id-12981FWIW, I've used OpenOpt for a few things and had some success. I haven't had as much luck with the scipy optimizers, but YMMV.
One easy way to avoid the cost of multiple evaluations is to take advantage of the `cached_function` decorator:
def complicated(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g

@cached_function
def complicated_cached(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g

# or: complicated_cached = cached_function(complicated)

def get(fn):
    a = minimize_constrained(lambda x: fn(*x)[0],
                             lambda x: fn(*x)[1],
                             [1, 1, 1])
    return a
which can produce a significant speedup. (I put in the sleeps to make the single-call time longer; originally I timed multiple runs, but the caching meant I wasn't measuring what I wanted to.)
    sage: time a = get(complicated)
    Time: CPU 0.06 s, Wall: 7.67 s
    sage: time b = get(complicated_cached)
    Time: CPU 0.03 s, Wall: 3.83 s
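The roughly halved wall time makes sense: each iteration evaluates both the objective and the constraint at the same point, so with caching the expensive body runs once per point instead of twice. A plain-Python sketch that mimics that call pattern, with a simple loop over hypothetical trial points standing in for the optimizer's iterations and `lru_cache` standing in for `cached_function` (both stand-ins are assumptions for illustration):

```python
from functools import lru_cache

body_runs = 0

@lru_cache(maxsize=None)
def complicated_cached(x, y, z):
    global body_runs
    body_runs += 1  # counts real evaluations, not cache hits
    f = (x * 2 - y * z) ** 2
    g = x + y - 9 * z + 4
    return f, g

objective = lambda p: complicated_cached(*p)[0]
constraint = lambda p: complicated_cached(*p)[1]

# Mimic an optimizer visiting 5 distinct points, querying both functions.
points = [(1, 1, 1), (2, 1, 1), (2, 2, 1), (2, 2, 2), (3, 2, 2)]
for p in points:
    objective(p)
    constraint(p)

print(body_runs)  # -> 5, not 10: one expensive evaluation per point
```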
UPDATE:
When numpy gets involved, caching the function is a little trickier, because the cache needs an immutable (hashable) key and a numpy array isn't one. We can get around the problem by coercing the argument to a tuple:
def func(p):
    f = -p[0] - p[1] + 50
    c_1 = p[0] - 45
    c_2 = p[1] - 5
    c_3 = -50*p[0] - 24*p[1] + 2400
    c_4 = -30*p[0] - 33*p[1] + 2100
    return f, c_1, c_2, c_3, c_4

func_cached = CachedFunction(func)
func_wrap = lambda x: func_cached(tuple(x))

def get2(fn):
    a = minimize_constrained(lambda x: fn(x)[0],
                             [lambda x: fn(x)[1], lambda x: fn(x)[2],
                              lambda x: fn(x)[3], lambda x: fn(x)[4]],
                             [2, 3])
    return a
which produces:
    sage: time get2(func)
    Time: CPU 0.10 s, Wall: 0.10 s
    sage: time get2(func_wrap)
    Time: CPU 0.02 s, Wall: 0.02 s
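The underlying issue is hashability: a cached lookup keys on the arguments, and a numpy array, like a plain list, is mutable and unhashable, so it cannot serve as a dictionary key; a tuple can. The sketch below illustrates the failure and the tuple fix with a list standing in for the optimizer-supplied numpy array (so it needs no numpy), and `lru_cache` standing in for the Sage cache (an assumption for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def func_cached(p):              # p must be hashable for the cache to key on it
    return -p[0] - p[1] + 50

x = [2.0, 3.0]                   # stand-in for the array the optimizer passes

try:
    func_cached(x)               # unhashable argument: the cache cannot key on it
except TypeError as e:
    print("TypeError:", e)

func_wrap = lambda x: func_cached(tuple(x))  # coerce to an immutable key
print(func_wrap(x))              # -> 45.0
```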
Wed, 07 Dec 2011 13:37:23 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12981#post-id-12981Answer by rtrwalker
http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12983#post-id-12983@DSM I can successfully run your example but if I adapt your solution to the example in the minimize_constrained documentation then I get an error for the cached version. For the non-cached version I get the correct result (45.0, 6.25).
def func(p):
    f = -p[0] - p[1] + 50
    c_1 = p[0] - 45
    c_2 = p[1] - 5
    c_3 = -50*p[0] - 24*p[1] + 2400
    c_4 = -30*p[0] - 33*p[1] + 2100
    return f, c_1, c_2, c_3, c_4

@cached_function
def func_cached(p):
    f = -p[0] - p[1] + 50
    c_1 = p[0] - 45
    c_2 = p[1] - 5
    c_3 = -50*p[0] - 24*p[1] + 2400
    c_4 = -30*p[0] - 33*p[1] + 2100
    return f, c_1, c_2, c_3, c_4

def get2(fn):
    a = minimize_constrained(lambda x: fn(x)[0],
                             [lambda x: fn(x)[1], lambda x: fn(x)[2],
                              lambda x: fn(x)[3], lambda x: fn(x)[4]],
                             [2, 3])
    return a
If I run `get2(func)` the result is (45.0, 6.25). If I run `get2(func_cached)` I get errors. I did try rewriting `func` using x, y arguments (and `fn(*x)` in `get2`) but I got the same results.Thu, 08 Dec 2011 08:02:16 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?answer=12983#post-id-12983Comment by DSM
http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?comment=20732#post-id-20732@rtrwalker: "errors" is a little uninformative :^) but I can see what's going on. See my update.Thu, 08 Dec 2011 08:24:19 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?comment=20732#post-id-20732Comment by rtrwalker
http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?comment=20733#post-id-20733On a different note, within get2 I did try to use a list construction to specify my constraints: [lambda x: fn(x)[i] for i in range(1,5)]. This gave the answer (739.437607818, -608.579643471) for the non-cached version and still gave errors for the cached version; not sure how/why that approach stuffed up.Thu, 08 Dec 2011 08:07:44 -0600http://ask.sagemath.org/question/8537/defining-constraint-eqations-for-minimize_constrained/?comment=20733#post-id-20733