2 | No.2 Revision |
One week after, we stille still don't know if this is homework or not. not.
Well... one week after, it might be stale. So here are a couple of of
answers to a question interestng interesting beyond the OP OP's potential homework.
The problem data can be set up in sage Sage as follows :follows:
# Problem data
# Arguments of the function to optimize and nullity constraints to satisfy.
satisfy
Args = var("w, x, y, z")
# Function tooptimize to optimize (here a Symbolic Expression)
symbolic expression)
F = (4*x + 4*y + 9*z - 2*w)
# Constraints (here, symbolic expressions to nullify)
G = (2*x + 2*y + z + w)
H = (x^2 + y^2 + z - 4)
# Problem setup
# Tuple setup: tuple of constraints :
# we no longer -- expressed in
# a way that does not depend of on the particular shape of the problem
C = (G, H)
The general general solution states that if (the$n$-dimensional (the $n$-dimensional vector) $\mathbf{x^}$ $\mathbf{x^{*}}$
maximizes (the scalar) $\phi(\mathbf{x})$ under the ($m$-vector) *equality constraint(s) $\mathbf{\psi(x)}=\mathbf{0}$, equality constraint
$\mathbf{\psi(x)} = \mathbf{0}$, then there exist an ($m$-dimensional) vector vector
$\mathbf{\lambda}$ (the Lagrange multipliers) such as all the components of the the
differential of the (scalar) function (callad Lagrangian) Lagrangian)
$\phi(\mathbf{x})+ \mathbf{\lambda}\cdot\mathbf{\psi(c)}$ \mathbf{\lambda} \cdot \mathbf{\psi(c)}$ are all null ; zero;
in other words, the maximization of a scalar functin of $n$ variables variables
under $m$ constraints is replaced by the maximization maximization
of a scalar function of $n+m$ $n + m$ variables.
This translates in Sage as :as:
# Solution
# (Vector) function of constraints
psi = vector(C).function(*Args)
# Vector of Lagrange multipliers
Lambda = vector([var("lambda_{}".format(u)) for u in range(len(C))])
# Function to optimize
phi = F.function(*Args)
# Lagrangian arguments
ArgsL = Args + tuple(Lambda)
# Lagrangian function
L = (phi + Lambda.dot_product(psi)).function(*ArgsL)
# Its differential (vector)
dL = L.diff()(*ArgsL)
# Solution(s) nullifying this differential :
Sol = solve(list(dL), ArgsL, solution_dict=True)
# Extremal value
Max = F.subs(Sol)
which gives :gives:
sage: Sol
[{w: -628/121, x: 4/11, y: 4/11, z: 452/121, lambda_0: 2, lambda_1: -11}]
sage: Max
516/11
Nota bene :bene: the Lagrange multipliers solution(s) solutions can be maxima or minima, minima,
but may also be saddle points ; points; this should be checked.
The idea is to use the $m$ constraints to express $m$ variables as functins of functions of
the other $n-m$ $n - m$ variables. (Here, we can express $w$ and $z$ as finctions functions
of $x$ and $y$). The problem is then to fonfd find the (unconstrained) maximum of a finction of $n-m$ maximum
of a function of $n - m$ variables, wich which is done by solving the components components
of its differential for tjhose variables :those variables:
# Eliminate two variables from the two constraints. Here, we choose w and z,
# since one constraint uses x^2 and y^2, opening the door to multiple solutions.
S1=solve([G, H],[w, solutions
S1 = solve([G, H], [w, z], solution_dict=True)
# Substitute this in F
F1 = F.subs(S1)
# Optimize F1, i. e. find the point(s) where both derivatives are null :
zero
Sol1 = solve([F1.diff(u) for u in (x, y)],[x, y)], [x, y], solution_dict=True)
# Substitute that in the [G, H] system, and solve for (numerical) w, z :
z
Sol2 = solve([u.subs(Sol1[0]) for u in (G, H)],[w, H)], [w, z], solution_dict=True)
# Cleanup :
# Cleanup: One dictionary solution :
solution
HSSol = copy(Sol1[0])
HSSol.update(Sol2[0])
# Get the (numerical) extremum :
extremum
HSMax = F.subs(HSSol)
which gives :gives:
sage: HSSol
{x: 4/11, y: 4/11, w: -628/121, z: 452/121}
sage: HSMax
516/11
Again, the nature (maximum, minimum or saddle-point) saddle-point)
of the extrema thus obtained must be checked.
One can note that Sage implements numerical methods of minimization minimization
of a real function subject to (more general inequality) constraints, inequality) constraints,
accessible via the minimize_constrained
function. Other methods methods
implemented in SciPy (which Scipy
is part of Sage depends on) can also be used.Sage
)
3 | No.3 Revision |
One week later, we still don't know if this is homework or not. Well... after a week, it may well be stale anyway. So here are a couple of answers to a question that is interesting beyond the OP's potential homework.
The problem data can be set up in Sage as follows:
# Problem data
# Arguments of the function to optimize and of the nullity constraints to satisfy
Args = var("w, x, y, z")
# Function to optimize (here a symbolic expression)
F = (4*x + 4*y + 9*z - 2*w)
# Constraints (here, symbolic expressions to nullify)
G = (2*x + 2*y + z + w)
H = (x^2 + y^2 + z - 4)
# Problem setup: tuple of constraints -- expressed in
# a way that does not depend on the particular problem
C = (G, H)
The general solution states that if (the $n$-dimensional vector) $\mathbf{x_0}$
maximizes (the scalar) $\phi(\mathbf{x})$ under the ($m$-vector) equality constraint
$\boldsymbol{\psi}(\mathbf{x}) = \mathbf{0}$, then there exists an ($m$-dimensional) vector
$\boldsymbol{\lambda}$ (the Lagrange multipliers) such that all the components of the
differential of the (scalar) function (called the Lagrangian)
$\phi(\mathbf{x}) + \boldsymbol{\lambda} \cdot \boldsymbol{\psi}(\mathbf{x})$ vanish at $\mathbf{x_0}$;
in other words, the maximization of a scalar function of $n$ variables
under $m$ constraints is replaced by the search for critical points
of a scalar function of $n + m$ variables.
This translates in Sage as:
# Solution
# (Vector) function of constraints
psi = vector(C).function(*Args)
# Vector of Lagrange multipliers
Lambda = vector([var("lambda_{}".format(u)) for u in range(len(C))])
# Function to optimize
phi = F.function(*Args)
# Lagrangian arguments
ArgsL = Args + tuple(Lambda)
# Lagrangian function
L = (phi + Lambda.dot_product(psi)).function(*ArgsL)
# Its differential (vector)
dL = L.diff()(*ArgsL)
# Solution(s) nullifying this differential
Sol = solve(list(dL), ArgsL, solution_dict=True)
# Extremal value
Max = F.subs(Sol)
which gives:
sage: Sol
[{w: -628/121, x: 4/11, y: 4/11, z: 452/121, lambda_0: 2, lambda_1: -11}]
sage: Max
516/11
Nota bene: the solutions obtained via Lagrange multipliers can be maxima or minima, but may also be saddle points; this should be checked.
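One possible way to perform this check in Sage is to restrict the Hessian of the Lagrangian to the tangent space of the constraints at the critical point and inspect the signs of its eigenvalues. This is only a sketch, reusing Sol, F, C, Args and Lambda defined above:
# Second-order check (sketch): reduced Hessian of the Lagrangian
sol = Sol[0]
# Lagrangian as a plain symbolic expression
Lag = F + sum(l*c for l, c in zip(Lambda, C))
# Hessian of the Lagrangian w.r.t. the original variables, at the critical point
HL = matrix(QQ, [[Lag.diff(u, v).subs(sol) for v in Args] for u in Args])
# Jacobian of the constraints at the critical point
J = matrix(QQ, [[c.diff(v).subs(sol) for v in Args] for c in C])
# Columns of T span the tangent space of the constraint set
T = J.right_kernel_matrix().transpose()
# All eigenvalues negative -> local maximum, all positive -> local minimum,
# mixed signs -> saddle point
(T.transpose() * HL * T).eigenvalues()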
Another, more elementary approach is to use the $m$ constraints to express $m$ variables as functions of
the other $n - m$ variables. (Here, we can express $w$ and $z$ as functions
of $x$ and $y$). The problem is then to find the (unconstrained) maximum
of a function of $n - m$ variables, which is done by solving the components
of its differential for those variables. This is called parameterization:
# Eliminate two variables from the two constraints. Here, we choose w and z,
# since one constraint uses x^2 and y^2, opening the door to multiple solutions
S1 = solve([G, H], [w, z], solution_dict=True)
# Substitute this in F
F1 = F.subs(S1)
# Optimize F1, i.e. find the point(s) where both derivatives are zero
Sol1 = solve([F1.diff(u) for u in (x, y)], [x, y], solution_dict=True)
# Substitute in the [G, H] system, and solve for (numerical) w, z
Sol2 = solve([u.subs(Sol1[0]) for u in (G, H)], [w, z], solution_dict=True)
# Cleanup: One dictionary solution
HSSol = copy(Sol1[0])
HSSol.update(Sol2[0])
# Get the (numerical) extremum
HSMax = F.subs(HSSol)
which gives:
sage: HSSol
{x: 4/11, y: 4/11, w: -628/121, z: 452/121}
sage: HSMax
516/11
Again, the nature (maximum, minimum or saddle-point) of the extrema thus obtained must be checked.
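For this second method the check is more elementary: F1 is an unconstrained function of the two remaining variables, so one can simply look at its Hessian at the critical point (negative definite means maximum, positive definite means minimum, indefinite means saddle point). A sketch, reusing F1 and Sol1 from above:
# Hessian of the reduced function F1 at the critical point
H1 = matrix(QQ, [[F1.diff(u, v).subs(Sol1[0]) for v in (x, y)] for u in (x, y)])
H1.eigenvalues()
Here both eigenvalues turn out negative, confirming a maximum.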
One can note that Sage implements numerical methods of minimization of a real function subject to (more general, inequality) constraints, accessible via the minimize_constrained function. Other methods implemented in SciPy (which Sage depends on) can also be used.
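For instance, a purely numerical cross-check could use scipy.optimize.minimize with the SLSQP method, which accepts equality constraints (a sketch; maximizing F amounts to minimizing -F, and the starting point is an arbitrary choice):
from scipy.optimize import minimize
# Variable order: v = (w, x, y, z)
f = lambda v: -(4*v[1] + 4*v[2] + 9*v[3] - 2*v[0])
cons = ({'type': 'eq', 'fun': lambda v: 2*v[1] + 2*v[2] + v[3] + v[0]},
        {'type': 'eq', 'fun': lambda v: v[1]**2 + v[2]**2 + v[3] - 4})
res = minimize(f, [0., 0., 0., 0.], method='SLSQP', constraints=cons)
# res.x should be close to (-628/121, 4/11, 4/11, 452/121), and -res.fun to 516/11
res.x, -res.fun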