Revision history

Projection on the image of a matrix

Let $v$ be a vector in $R^m$ and let $J$ be a matrix from $R^n$ to $R^m$, where $m > n$ are both large.
I want to compute the projection $w$ of $v$ onto the image of $J$.

To compute the projection onto the image of $J$, we can proceed as follows:

    JTGS = J.transpose().gram_schmidt()[0]
    p = JTGS.transpose()*JTGS

Then:

    w = p*v

But this method is very expensive in time, because it computes $p$, while I only need $w$.
Is there another method, less expensive in time, for computing $w$?
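For concreteness, here is a small self-contained Sage sketch of this approach over RDF; the sizes and random data are only illustrative, and the basis rows are normalized explicitly, since over exact rings gram_schmidt only orthogonalizes:

    # Illustrative sketch: project v onto the column space of J.
    m, n = 8, 3
    J = random_matrix(RDF, m, n)
    v = random_vector(RDF, m)

    G = J.transpose().gram_schmidt()[0]            # rows span the image of J
    G = matrix([r / r.norm() for r in G.rows()])   # ensure the rows are orthonormal
    p = G.transpose()*G                            # m x m projection matrix
    w = p*v                                        # projection of v onto the image of J

    print((J.transpose()*(v - w)).norm())          # residual is orthogonal to the image: ~0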

No.4 Revision: I improve the question

How to quickly minimize $M*x-v$?

Let $v$ be a vector in $R^m$ and let $M$ be a matrix from $R^n$ to $R^m$, where $m > n$ are both large.
I want to compute a vector $x$ in $R^n$ such that the norm of $M*x-v$ is minimal.

One way is to compute the projection $w$ of $v$ onto the image of $M$. To do so, we can compute the projection matrix $p$ onto the image of $M$ as follows:

    MTGS = M.transpose().gram_schmidt()[0]
    p = MTGS.transpose()*MTGS

Then:

    w = p*v
    x = M.solve_right(w)

This vector $x$ minimizes the norm of $M*x-v$, but this method is very expensive in time, because it computes $p$ and $w$, while I only need $x$.

Is there another method, less expensive in time, for computing $x$?

Remark: I'm fine with numerical methods.
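Since numerical methods are acceptable, a minimal NumPy sketch (the sizes are illustrative) shows the usual shortcut: an SVD-based least-squares solver minimizes the norm of $M*x-v$ directly, without ever forming $p$ or $w$:

    # Sketch: least squares without a projection matrix.
    import numpy as np

    m, n = 2000, 500                  # illustrative sizes with m > n
    M = np.random.rand(m, n)
    v = np.random.rand(m)

    # lstsq minimizes ||M x - v|| directly via the SVD of M.
    x, residuals, rank, sv = np.linalg.lstsq(M, v, rcond=None)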

No.8 Revision: I improve the code, because gram_schmidt is orthogonalization, not orthonormalization

How to quickly minimize $M*x-v$ (numerically)?

Let $v$ be a vector in $R^m$ and let $M$ be a matrix from $R^n$ to $R^m$, where $m > n$ are both large.
I want to compute a vector $x$ in $R^n$ such that the norm of $M*x-v$ is minimal.

One way is to compute the projection $w$ of $v$ onto the image of $M$.
To do so, we can compute the projection matrix $p$ onto the image of $M$ as follows:

    MTGS = M.transpose().gram_schmidt()[0]   # gram_schmidt orthogonalizes, but does not normalize
    l = MTGS.rank()
    U = []
    for i in range(l):
        r = MTGS[i]              # a fresh name, so the vector v is not overwritten
        u = r/(r.norm())         # normalize each basis row
        U.append(list(u))
    N = matrix(l, m, U)          # l x m matrix with orthonormal rows
    p = N.transpose()*N          # m x m projection onto the image of M

Then:

    w = p*v
    x = M.solve_right(w)

This vector $x$ minimizes the norm of $M*x-v$, but this method is very expensive in time, because it computes $p$ and $w$, while I only need $x$.

Is there another method, less expensive in time, for computing $x$?

Remark: I'm fine with numerical methods.
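A related shortcut, sketched here under the assumption that $M$ has full column rank: the minimizer satisfies the normal equations $M^T M x = M^T v$, an $n$ by $n$ system, so neither $p$ nor $w$ ever needs to be formed:

    # Sketch: least squares via the normal equations (assumes full column rank).
    m, n = 2000, 500                 # illustrative sizes
    M = random_matrix(RDF, m, n)
    v = random_vector(RDF, m)

    A = M.transpose()*M              # n x n Gram matrix, much smaller than the m x m projection
    b = M.transpose()*v
    x = A.solve_right(b)             # solves (M^T M) x = M^T v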