Yes, SR matrices come in handy for this kind of calculation. This:
# data
Qf = 1000000*matrix([[0,0],[0,1]]);
Xf = matrix([[1],[1675]]);
# matrix with symbolic coefficients
X = matrix([[var('x1')], [var('x2')]]);
# quadratic function
f = ((X-Xf).transpose()*Qf*(X-Xf));
# see result
Qf, Xf, X, f
produces
$$ \newcommand{\Bold}[1]{\mathbf{#1}}\left(\left(\begin{array}{rr} 0 & 0 \\ 0 & 1000000 \end{array}\right), \left(\begin{array}{r} 1 \\ 1675 \end{array}\right), \left(\begin{array}{r} x_{1} \\ x_{2} \end{array}\right), \left(\begin{array}{r} 1000000 \, {\left(x_{2} - 1675\right)}^{2} \end{array}\right)\right). $$
To evaluate $f$, do f(x1=1,x2=1).
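If a plain symbolic value is wanted rather than a $1\times 1$ matrix, one can also substitute entrywise with subs and then extract the single entry. A minimal sketch, assuming subs on a symbolic matrix works entrywise as usual (the test point x1=1, x2=2 is arbitrary; x1 does not actually appear in $f$, since the corresponding entry of Qf is zero):
# substitute values into the symbolic 1x1 matrix, then extract its only entry
val = f.subs(x1=1, x2=2)[0, 0]
val   # 1000000*(2 - 1675)^2 == 2798929000000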
More generally, to define your $X$ it can be useful to do something like:
# create a coefficient matrix of m rows and n columns
m = 4; n = 2;
xij = [[var('x'+str(1+i)+str(1+j)) for j in range(n)] for i in range(m)]
X = matrix(SR, xij)
X
$$ \newcommand{\Bold}[1]{\mathbf{#1}}\left(\begin{array}{rr} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \\ x_{41} & x_{42} \end{array}\right). $$
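If one prefers not to write the nested list by hand, the same matrix can also be built by passing an entry function to the matrix constructor; this is just an equivalent sketch of the construction above:
# equivalent construction: entries generated from the (0-based) row and column indices
m = 4; n = 2
X = matrix(SR, m, n, lambda i, j: var('x' + str(i + 1) + str(j + 1)))
X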
The quadratic function $f$ defined above is a $1\times 1$ matrix (convince yourself, for instance, by reading the output of type(f)). To take the gradient, here is one possible way:
# passing from a 1x1 matrix to a scalar
f = f[0, 0]
grad_f = f.gradient([x1, x2])
# see result
grad_f
$$ \newcommand{\Bold}[1]{\mathbf{#1}}\left(0, 2000000 x_{2} - 3350000000\right). $$
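As a quick check, the gradient can be set to zero and solved; since x1 does not appear in $f$, it should stay free in the solution, and x2 should come out as 1675. A sketch (the exact form of the free parameter in the output may vary):
# stationary points: solve grad_f == 0 for (x1, x2)
solve([g == 0 for g in grad_f], [x1, x2])
# expected: x2 == 1675, with x1 left as a free parameter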