1 | initial version |

FWIW, I've used OpenOpt for a few things and had some success. I haven't had as much luck with the scipy optimizers, but YMMV.

One easy way to avoid the cost of multiple evaluations is to take advantage of the cached_function decorator:

```
def complicated(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    return f, g

@cached_function
def complicated_cached(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    return f, g
# or: complicated_cached = cached_function(complicated)

def get(fn):
    a = minimize_constrained(lambda x: fn(*x)[0],
                             lambda x: fn(*x)[1],
                             [1, 1, 1])
    return a
```

which produces a significant speedup:

```
sage: %timeit a = get(complicated)
25 loops, best of 3: 14.4 ms per loop
sage: %timeit b = get(complicated_cached)
625 loops, best of 3: 1.51 ms per loop
```

2 | No.2 Revision |

FWIW, I've used OpenOpt for a few things and had some success. I haven't had as much luck with the scipy optimizers, but YMMV.

One easy way to avoid the cost of multiple evaluations is to take advantage of the cached_function decorator:

```
def complicated(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g

@cached_function
def complicated_cached(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g
# or: complicated_cached = cached_function(complicated)

def get(fn):
    a = minimize_constrained(lambda x: fn(*x)[0],
                             lambda x: fn(*x)[1],
                             [1, 1, 1])
    return a
```

which can produce a significant speedup. (I put in the sleeps to make the one-call time longer; originally I did multiple runs, but the caching meant that I wasn't measuring what I wanted to.)

```
sage: time a = get(complicated)
Time: CPU 0.06 s, Wall: 7.67 s
sage: time b = get(complicated_cached)
Time: CPU 0.03 s, Wall: 3.83 s
```

3 | No.3 Revision |

FWIW, I've used OpenOpt for a few things and had some success. I haven't had as much luck with the scipy optimizers, but YMMV.

One easy way to avoid the cost of multiple evaluations is to take advantage of the cached_function decorator:

```
def complicated(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g

@cached_function
def complicated_cached(x, y, z):
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    sleep(0.10)
    return f, g
# or: complicated_cached = cached_function(complicated)

def get(fn):
    a = minimize_constrained(lambda x: fn(*x)[0],
                             lambda x: fn(*x)[1],
                             [1, 1, 1])
    return a
```

which can produce a significant speedup. (I put in the sleeps to make the one-call time longer; originally I did multiple runs, but the caching meant that I wasn't measuring what I wanted to.)

```
sage: time a = get(complicated)
Time: CPU 0.06 s, Wall: 7.67 s
sage: time b = get(complicated_cached)
Time: CPU 0.03 s, Wall: 3.83 s
```
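Why the cache roughly halves the wall time: the optimizer asks for the objective and the constraint through two separate lambdas, so without caching the expensive body runs twice per evaluation point. Here is a self-contained sketch of that effect in plain Python, using `functools.lru_cache` as a stand-in for Sage's `cached_function` (the counter dict is just instrumentation for the illustration):

```python
from functools import lru_cache

calls = {"n": 0}

def complicated(x, y, z):
    # stand-in for an expensive evaluation returning (objective, constraint)
    calls["n"] += 1
    f = (x*2 - y*z)**2
    g = x + y - 9*z + 4
    return f, g

cached = lru_cache(maxsize=None)(complicated)

# Uncached: asking for f and g separately runs the body twice.
_ = complicated(1, 1, 1)[0]
_ = complicated(1, 1, 1)[1]
print(calls["n"])  # 2

# Cached: the second lookup at the same point is a cache hit.
calls["n"] = 0
_ = cached(1, 1, 1)[0]
_ = cached(1, 1, 1)[1]
print(calls["n"])  # 1
```

This is exactly the pattern `get` sets up above, which is why the cached variant does roughly half the work.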

UPDATE:

When numpy gets involved, caching the function is a little trickier, because the cache needs a hashable key and numpy arrays are not hashable. We can get around the problem by coercing the argument to a tuple:

```
def func(p):
    f = -p[0] - p[1] + 50
    c_1 = p[0] - 45
    c_2 = p[1] - 5
    c_3 = -50*p[0] - 24*p[1] + 2400
    c_4 = -30*p[0] - 33*p[1] + 2100
    return f, c_1, c_2, c_3, c_4

func_cached = CachedFunction(func)
func_wrap = lambda x: func_cached(tuple(x))

def get2(fn):
    a = minimize_constrained(lambda x: fn(x)[0],
                             [lambda x: fn(x)[1], lambda x: fn(x)[2],
                              lambda x: fn(x)[3], lambda x: fn(x)[4]],
                             [2, 3])
    return a
```

which produces:

```
sage: time get2(func)
Time: CPU 0.10 s, Wall: 0.10 s
sage: time get2(func_wrap)
Time: CPU 0.02 s, Wall: 0.02 s
```
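The tuple coercion can be demonstrated without Sage or numpy at all. In this sketch `functools.lru_cache` stands in for `CachedFunction`, and a plain list stands in for the numpy array the optimizer would pass (lists are likewise unhashable, so the same trick applies):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def func_cached(p):
    # p must be hashable, so callers are expected to pass a tuple
    f = -p[0] - p[1] + 50
    c_1 = p[0] - 45
    c_2 = p[1] - 5
    c_3 = -50*p[0] - 24*p[1] + 2400
    c_4 = -30*p[0] - 33*p[1] + 2100
    return f, c_1, c_2, c_3, c_4

# The optimizer hands us a mutable, unhashable sequence; coercing it
# to a tuple turns it into a valid cache key.
func_wrap = lambda x: func_cached(tuple(x))

print(func_wrap([2, 3]))   # (45, -43, -2, 2228, 1941)

try:
    func_cached([2, 3])    # passing the raw sequence fails
except TypeError:
    print("unhashable key rejected")
```

So `func_wrap` is the only thing the optimizer ever sees, while every distinct point is still evaluated just once.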

Copyright Sage, 2010. Content on this site is licensed under a Creative Commons Attribution-ShareAlike 3.0 license.