ASKSAGE: Sage Q&A Forum - Individual question feed
http://ask.sagemath.org/questions/
Q&A Forum for Sage (en). Copyright Sage, 2010. Some rights reserved under Creative Commons license.
Thu, 24 Oct 2013 00:00:44 -0500

How to quickly minimize $M*x-v$ (numerically)?
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/

Let $v$ be in $R^m$ and let $M$ be a matrix from $R^n$ to $R^m$, with $m>n$ both large.
I want to compute a vector $x$ in $R^n$ such that the norm of $M*x-v$ is minimal.
One way is to compute the projection $w$ of $v$ onto the image of $M$.
To do so, we can compute the projection matrix $p$ onto the image of $M$, as follows:
MTGS = M.transpose().gram_schmidt()[0]  # orthogonalization, not orthonormalization
l = MTGS.rank()
U = []
for i in range(l):
    r = MTGS[i]        # use a fresh name: reusing v here would overwrite the target vector
    u = r / r.norm()
    U.append(list(u))
N = matrix(l, m, U)    # l orthonormal rows of length m (not matrix(m, l, U))
p = N.transpose()*N    # m x m orthogonal projection onto im(M)
Then:
w=p*v
x=M.solve_right(w)
This vector $x$ minimizes the norm of $M*x-v$, but this method is very expensive in time, because it computes $p$ and $w$ when I only need $x$.
> Is there another, less expensive method for computing $x$?
**Remark**: I'm OK with numerical methods.
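The projection method above can be sketched in plain numpy to check the math. This is my addition, not the poster's code: the sizes and data are random, and numpy's QR factorization stands in for Sage's `gram_schmidt` (its `Q` has orthonormal columns spanning the image of `M`).

```python
import numpy as np

# Hypothetical sizes with m > n, as in the question.
m, n = 50, 10
rng = np.random.default_rng(0)
M = rng.standard_normal((m, n))
v = rng.standard_normal(m)

# Orthonormal basis of im(M) via QR (role of Gram-Schmidt in the question).
Q, _ = np.linalg.qr(M)
w = Q @ (Q.T @ v)                        # projection of v onto im(M)
x = np.linalg.lstsq(M, w, rcond=None)[0] # solve M x = w (w lies in im(M))

# x minimizes ||M x - v||: the residual is orthogonal to the columns of M,
# i.e. the normal equations M^T (M x - v) = 0 hold.
print(np.allclose(M.T @ (M @ x - v), 0))
```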

Mon, 21 Oct 2013 13:03:37 -0500
Answer by ppurka:
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/?answer=15588#post-id-15588
If I understand your question correctly: if you add the constraint that $x$ also have minimum norm, then you are asking for the Moore-Penrose pseudoinverse. In Sage you can explicitly call the pseudoinverse function of numpy:
sage: import numpy
sage: MN = M.numpy()
sage: x = matrix(numpy.linalg.pinv(MN)) * v
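As a quick sanity check in plain numpy (my addition, with random data) that the pseudoinverse really gives the least-squares minimizer: perturbing the pinv solution can only increase the residual.

```python
import numpy as np

# Hypothetical tall system (m > n), random data for illustration only.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 5))
v = rng.standard_normal(30)

x = np.linalg.pinv(M) @ v          # Moore-Penrose least-squares solution
base = np.linalg.norm(M @ x - v)   # minimal residual norm

# Any perturbation of x yields a residual at least as large.
for _ in range(100):
    y = x + 0.01 * rng.standard_normal(5)
    assert np.linalg.norm(M @ y - v) >= base
```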
*Edit*: Even the command
sage: x = M \ v
works for a nonsquare matrix M, but I am not sure what it actually does in that case; the documentation does not seem to explain, and the solution obtained is not the same as the one from the pseudoinverse.
Tue, 22 Oct 2013 06:06:36 -0500
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/?answer=15588#post-id-15588
Comment by Sébastien Palcoux:
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/?comment=16889#post-id-16889
Thank you very much, ppurka! Do you know if we can compute x directly, without computing the pseudoinverse first?
Tue, 22 Oct 2013 09:26:26 -0500
Comment by ppurka:
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/?comment=16887#post-id-16887
Well, that's what the Sage command `M \ v` does: it does not compute the inverse or the pseudoinverse. Except I am not sure exactly what it computes for nonsquare matrices. Maybe you can post a question on the sage-support mailing list.
Tue, 22 Oct 2013 15:27:10 -0500
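For the record (my addition, plain numpy rather than Sage): `numpy.linalg.lstsq` solves the least-squares problem directly via an SVD, without explicitly forming the pseudoinverse, and it agrees with the pinv solution.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((40, 8))
v = rng.standard_normal(40)

# lstsq minimizes ||M x - v|| directly (SVD-based), no explicit pinv(M).
x_direct = np.linalg.lstsq(M, v, rcond=None)[0]
x_pinv = np.linalg.pinv(M) @ v

# Both return the minimum-norm least-squares solution.
print(np.allclose(x_direct, x_pinv))
```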
Comment by Sébastien Palcoux:
http://ask.sagemath.org/question/10643/how-quickly-minimizing-mx-v-numerically/?comment=16869#post-id-16869
Thank you, ppurka. I see a difference between `x = matrix(numpy.linalg.pinv(MN))*v` and `x = M \ v`: if v is not in the image of M, then the first gives a least-squares pseudo-solution, while the second raises "ValueError: matrix equation has no solutions".
Thu, 24 Oct 2013 00:00:44 -0500
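The difference described in this comment can be reproduced in plain numpy (random data, my addition): the pseudoinverse always returns a pseudo-solution, while an exact solve exists only when v lies in im(M), i.e. when appending v to M does not raise the rank. That consistency test is what underlies the ValueError from an exact solver like Sage's `solve_right`.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 3))
v = rng.standard_normal(6)    # almost surely NOT in im(M)

# The pseudoinverse still returns a least-squares pseudo-solution...
x = np.linalg.pinv(M) @ v
print(np.linalg.norm(M @ x - v) > 1e-8)   # nonzero residual: v not in im(M)

# ...while the exact system M x = v is consistent only if augmenting M
# with v leaves the rank unchanged (generically false for random v).
consistent = (np.linalg.matrix_rank(np.column_stack([M, v]))
              == np.linalg.matrix_rank(M))
print(consistent)
```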