
Non-linear optimization with no derivatives

asked 2022-01-29 03:50:27 +0200 by johng23

I'm trying to minimize a convex function which is a sum of max functions of linear expressions. Is there a Sage function that does this?

For example, I'd like to minimize

$$\max(r_0 + s_0,\, 0) + \max(r_0 + s_1,\, r_1 + s_0,\, 0) + \max(r_0 + s_2,\, r_1 + s_1,\, r_2 + s_0,\, 0) + \max(r_1 + s_2,\, r_2 + s_1,\, 0) + \max(r_2 + s_2,\, 0)$$

subject to some linear constraints in terms of r_i and s_i.

I saw minimize_constrained, but it requires the objective function to have a defined derivative, which a sum of max functions does not have.


1 Answer


answered 2022-01-29 11:39:21 +0200 by Max Alekseyev, updated 2022-01-29 11:45:52 +0200

Introduce a new variable $m_i$ for each $\max$, together with constraints forcing $m_i$ to be $\geq$ each argument of that $\max$. Then replace each $\max$ in the objective function with the corresponding $m_i$. For your example we get:

$$\begin{cases} m_1\geq r_0 + s_0\\ m_1\geq 0\\ m_2 \geq r_0 + s_1\\ m_2\geq r_1 + s_0\\ m_2\geq 0\\ \dots\\ m_1 + m_2 + m_3 + m_4 + m_5\ \rightarrow\ \min \end{cases}$$

Since all constraints here are linear, such a problem can be solved via MILP - https://doc.sagemath.org/html/en/refe...
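
For reference, here is a minimal sketch of this reformulation using Sage's MixedIntegerLinearProgram (no variable is actually integral, so this is a plain LP). The bounds $r_i, s_i \geq -1$ are placeholders standing in for your actual linear constraints:

```
# Sketch: epigraph reformulation of the sum-of-maxes objective as an LP.
p = MixedIntegerLinearProgram(maximization=False)
r = p.new_variable(real=True, nonnegative=False)
s = p.new_variable(real=True, nonnegative=False)
m = p.new_variable(real=True, nonnegative=False)  # one m_i per max()

# the (non-constant) arguments of each max() in the objective
maxes = [
    [r[0] + s[0]],
    [r[0] + s[1], r[1] + s[0]],
    [r[0] + s[2], r[1] + s[1], r[2] + s[0]],
    [r[1] + s[2], r[2] + s[1]],
    [r[2] + s[2]],
]
for i, args in enumerate(maxes):
    p.add_constraint(m[i] >= 0)       # the constant 0 argument of each max
    for a in args:
        p.add_constraint(m[i] >= a)   # m_i >= each argument of that max

# placeholder bounds -- replace with your actual linear constraints
for i in range(3):
    p.add_constraint(r[i] >= -1)
    p.add_constraint(s[i] >= -1)

p.set_objective(m[0] + m[1] + m[2] + m[3] + m[4])
opt = p.solve()
print(opt, p.get_values(r), p.get_values(s))
```

Under the placeholder bounds this should return an optimum of 0 (all $m_i$ can be driven to 0); with your real constraints the solver reports the minimum of the original sum-of-maxes objective.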

