
The particular case is readily solved with Sage:

sage: stationary_points = lambda f: solve([gi for gi in f.gradient()], f.variables())
sage: f(x,y) = -(x * log(x) + y * log(y))
sage: stationary_points(f)
[[x == e^(-1), y == e^(-1)]]
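
Since solve only returns stationary points, the second-order condition is worth checking as well. A quick sketch (the Hessian of this f is diagonal, so definiteness can be read straight off the diagonal; output shown as Sage prints it):

sage: f(x,y).hessian().subs(x=e^-1, y=e^-1)
[-e  0]
[ 0 -e]

Both diagonal entries are -1/x = -e < 0 at the stationary point, so the Hessian is negative definite there and the point is a maximum of this (concave) f.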

More generally, $\nabla f(x_1,\ldots,x_n) = 0$ is a necessary and sufficient condition for optimality provided that $f$ is twice continuously differentiable and convex (see e.g. Boyd & Vandenberghe, Convex Optimization, Ch. 9.1, page 457). In this setting, we can use stationary_points as above, but in general the solve function will fail to find explicit solutions. Indeed, from that book:

"In a few special cases, we can find an optimal solution by analytically solving the optimality equation, but usually the problem must be solved by an iterative algorithm. "
