
Revision history

Projection on the image of a matrix

Let v be in R^m and let J be a matrix from R^n to R^m, where m > n are large.
I want to compute the projection w of v onto the image of J.

To compute the projection onto the image of J, we can proceed as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v

But this method is very expensive in time, because it computes p, while I only need w.
Is there another, less expensive method for computing w?
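For reference, a minimal runnable sketch of this approach in Sage; the sizes and the random J and v are placeholders for illustration only:

    # Illustrative instance of the approach above (placeholder data)
    m, n = 6, 3
    J = random_matrix(QQ, m, n)
    v = random_vector(QQ, m)
    JTGS = J.transpose().gram_schmidt()[0]   # rows span the image of J
    p = JTGS.transpose()*JTGS                # m x m matrix built from those rows
    w = p*v

Caveat: gram_schmidt() orthogonalizes but does not orthonormalize, so p as written is not a true orthogonal projection unless the rows are normalized first (this is addressed in revision 8 below).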

No.2 Revision

Projection on the image of a matrix

Let v be in R^m and let J be a matrix from R^n to R^m, where m > n are large.
I want to compute the projection w of v onto the image of J.

To compute the projection onto the image of J, we can proceed as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v

But this method is very expensive in time, because it computes p, while I only need w.
Is there another, less expensive method for computing w?

No.3 Revision

updated 11 years ago by Shashank

Projection on the image of a matrix

Let v be in R^m and let J be a matrix from R^n to R^m, where m > n are large.
I want to compute the projection w of v onto the image of J.

To compute the projection onto the image of J, we can proceed as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v

But this method is very expensive in time, because it computes p, while I only need w.
Is there another, less expensive method for computing w?

I improved the question

How to quickly minimize M*x - v?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M. To do so, we can compute the projection matrix p onto the image of M as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v (w is the point of the image of M closest to v, so any x with M*x = w attains the minimum), but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.
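A quick sanity check of the solve_right step on a toy case (the matrix and vector below are placeholders): when w lies in the image of M, solve_right recovers an x with M*x = w exactly.

    # Toy check: solve_right inverts M on its image (placeholder data)
    M = matrix(QQ, [[1,0],[0,1],[1,1]])   # 3 x 2, full column rank
    xtrue = vector(QQ, [2,3])
    w = M*xtrue                           # w is in the image of M by construction
    x = M.solve_right(w)
    print(x == xtrue)                     # True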

No.5 Revision

How to quickly minimize Mx - v?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M. To do so, we can compute the projection matrix p onto the image of M as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v, but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.

No.6 Revision

How to quickly minimize Mx - v?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M.
To do so, we can compute the projection matrix p onto the image of M as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v, but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.

No.7 Revision

How to quickly minimize Mx - v (numerically)?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M.
To do so, we can compute the projection matrix p onto the image of M as follows:

    JTGS=J.transpose().gram_schmidt()[0]
    p=JTGS.transpose()*JTGS

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v, but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.

I improved it because gram_schmidt is orthogonalisation, not orthonormalisation

How to quickly minimize Mx - v (numerically)?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M.
To do so, we can compute the projection matrix p onto the image of M as follows:

    MTGS=J.transpose().gram_schmidt()[0]  # it's orthogonalization, not orthonormalization
    l=MTGS.rank()
    U=[]
    for i in range(l):
        v=M[i]
        u=v/(v.norm())
        L=list(u)
        U.append(L)
    N=matrix(m,l,U)
    p=N.transpose()*N

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v, but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.
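As an aside, the normalization loop above can be written more compactly. A sketch, assuming the rows to normalize are the first l rows of MTGS (note that the loop as written reads rows of M rather than MTGS, and reuses the name v, which clobbers the vector v defined at the top):

    # Compact equivalent of the intended normalization (sketch)
    N = matrix([r/r.norm() for r in MTGS.rows()[:l]])
    p = N.transpose()*N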

I fixed a mistake

How to quickly minimize Mx - v (numerically)?

Let v be in R^m and let M be a matrix from R^n to R^m, where m > n are large.
I want to compute a vector x in R^n such that the norm of Mx - v is minimal.

One way is to compute the projection w of v onto the image of M.
To do so, we can compute the projection matrix p onto the image of M as follows:

    MTGS=M.transpose().gram_schmidt()[0]  # it's orthogonalization, not orthonormalization
    l=MTGS.rank()
    U=[]
    for i in range(l):
        r=MTGS[i]          # fresh name: reusing v here would overwrite the vector v
        u=r/r.norm()       # normalize each basis row
        U.append(list(u))
    N=matrix(l,m,U)        # U holds l rows of length m, so the matrix is l x m
    p=N.transpose()*N      # orthogonal projection onto the image of M

Then:

    w=p*v
    x=M.solve_right(w)

This vector x minimizes the norm of Mx - v, but this method is very expensive in time, because it computes p and w, while I only need x.

Is there another, less expensive method for computing x?

Remark: I'm OK with numerical methods.
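Since the remark allows numerical methods, one standard and much cheaper route (my suggestion, not part of the post) is to call a dedicated least-squares solver and skip the projection matrix entirely. A sketch with NumPy on placeholder data:

    import numpy as np

    # Placeholder data: replace with the real M and v
    m, n = 2000, 500
    M = np.random.rand(m, n)
    v = np.random.rand(m)

    # x minimizing ||M x - v||, without forming p or w
    x, residuals, rank, sv = np.linalg.lstsq(M, v, rcond=None)

    # For full-rank M, the same x solves the normal equations M^T M x = M^T v,
    # which is only an n x n linear system
    x2 = np.linalg.solve(M.T @ M, M.T @ v)

Either way, the cost is dominated by an n x n system or a QR-type factorization of M, rather than by the m x m projection matrix p.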