
How to find the similarity matrix between two similar matrices

asked 2017-10-31 21:39:44 +0200

Anonymous

updated 2017-10-31 21:40:56 +0200

Suppose I have two matrices A = (1 0; 1 0) and B = (0 0; 1 1), where A and B are two-by-two matrices, and suppose we know that A and B are similar. Then there exists an invertible matrix P such that A = PBP^(-1). Now, how can one compute the matrix P in Sage?


Comments

See also https://trac.sagemath.org/ticket/22397 (which still needs work before it can be included in Sage).

FrédéricC (2017-11-01 21:57:02 +0200)

2 Answers


answered 2017-11-01 10:04:21 +0200

B r u n o

updated 2017-11-01 10:05:18 +0200

sage: A = matrix(QQ,[[1,0],[1,0]])
sage: B = matrix(QQ,[[0,0],[1,1]])
sage: b, P = A.is_similar(B, transformation=True)
sage: b
True
sage: P.inverse() * B * P == A
True

For more details, you can type A.is_similar? to get:

Docstring:     
   Return "True" if "self" and "other" are similar, i.e. related by a
   change-of-basis matrix.

   INPUT:

   * "other" -- a matrix, which should be square, and of the same
     size as "self".

   * "transformation" -- default: "False" - if "True", the output
     may include the change-of-basis matrix (also known as the
     similarity transformation). See below for an exact description.

   [...]

Comments

Thank you very much. Will you please look at this problem: A = matrix(QQbar,6,[2,-1,-I,0,0,0,-1,2,-1,0,0,0,I,-1,3,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,1]) and B = matrix(QQbar,6,[2,-1,0,-I,0,0,-1,2,-1,0,0,0,0,-1,2,-1,0,0,I,0,-1,3,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,1]). I have shown that these two matrices are similar, but I have not been able to find the invertible matrix P explicitly. Please have a look.

rewi (2017-11-01 13:34:27 +0200)

Right, with your example, the computation hangs. My investigation is as follows:

  • To compute the transformation matrix $P$, the function computes the transformation matrices $P_A$ and $P_B$ associated to the Jordan forms of $A$ and $B$.
  • Then, it returns $P_B\cdot P_A^{-1}$.
  • $P_A^{-1}$ is computed using the echelon form of $P_A|I$ ($P_A$ augmented with the identity matrix).
  • Given the echelon form, it checks whether the lower right coefficient of the left part is $1$ (to check whether the matrix is indeed invertible).
  • In your case, this coefficient is 1.000000000000000? + 0.?e-28*I. To check whether it is $1$, it checks whether the coefficient minus $1$ is $0$.
  • This coefficient minus $1$ is 0.?e-18 + 0.?e-28*I.
B r u n o (2017-11-02 12:00:59 +0200)
  • Now the "coefficient minus $1$", 0.?e-18 + 0.?e-28*I is stored as both an approximate value and an exact representation.
  • The approximate value is given as a ComplexIntervalFieldElement: it is a rectangle in which the element is guaranteed to be. In your case, it is a tiny rectangle whose center is almost on zero. SageMath first tries to prove the element nonzero by refining the rectangle and excluding zero from it. Of course here it does not work since the element is zero!
    • Now it uses the exact representation to prove the element is exactly zero.
B r u n o (2017-11-02 12:21:16 +0200)
  • The problem is that the element in your case is only known as a "binary expression", not by its minimal polynomial, for instance. That is, SageMath knows it as a formal expression built from other elements of QQbar using additions, multiplications and inversions, where the basic building blocks are algebraic numbers defined by some minimal polynomial. To prove the element to be zero, one needs to prove that this huge expression is $0$, exactly. This is very time consuming.
    • Actually what happens is that in the computation of (mainly) the echelon form, when two algebraic numbers had to be multiplied, the result was simply defined as "the product of the operands", and the same for additions and so on. The idea is that, this way, the echelon form is computed quite fast.
B r u n o (2017-11-02 12:32:33 +0200)
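To make this two-level representation concrete, here is a tiny, hypothetical example (an illustration, not the matrices from the question) of a QQbar element that is exactly zero but is only stored as a lazy expression:

sage: a = QQbar(2).sqrt(); i = QQbar(I)
sage: x = (a + i)*(a - i) - 3   # exactly zero, but stored as an expression tree
sage: x.interval(CIF)           # a tiny rectangle around 0; refining it can never exclude 0
sage: x == 0                    # only the exact computation on the expression proves this
True

For this small expression the exact computation is instantaneous; for the entries of a $6\times 6$ echelon form over QQbar it can take essentially forever.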

Now finally, you can do all the computations by hand, simply skipping the check that $P_A$ is indeed invertible:

sage: A = matrix(QQbar,6,[2,-1,-I,0,0,0,-1,2,-1,0,0,0,I,-1,3,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,1])
sage: B = matrix(QQbar,6,[2,-1,0,-I,0,0,-1,2,-1,0,0,0,0,-1,2,-1,0,0,I,0,-1,3,-1,0,0,0,0,-1,2,-1,0,0,0,0,-1,1])
sage: _, PA = A.jordan_form(transformation=True)
sage: _, PB = B.jordan_form(transformation=True)
sage: PAaug = PA.augment(identity_matrix(6))
sage: PAinv = PAaug.echelon_form()[:, 6:]
sage: P = PB * PAinv
B r u n o (2017-11-02 12:37:26 +0200)
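A quick sanity check for the P obtained this way (a sketch, not from the original comment): rather than the exact equality B*P == P*A, which again hangs in QQbar, one can look at a floating-point approximation of the residual:

sage: (B*P - P*A).change_ring(CDF).norm()   # expected to be of roundoff size, e.g. around 1e-15

This of course only gives numerical evidence, not a proof.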

answered 2017-11-02 11:19:37 +0200

dan_fulea

updated 2017-11-02 20:45:18 +0200

The answer of B r u n o is a quick answer to the question as it was originally stated.

The new question should be asked in a separate post, not as an obscure partial comment, since it comes with a complication of its own inside a "similar" problem.

Here is a solution "to the comment". It would not fit as a comment, so it is an answer.

First of all, let us write the two matrices so that a human can read them at a glance:

j = I
F = QQbar
A = matrix( F, 6, [ 2,-1,-j,0,0,0,
                    -1,2,-1,0,0,0,
                    j,-1,3,-1,0,0,
                    0,0,-1,2,-1,0,
                    0,0,0,-1,2,-1,
                    0,0,0,0,-1,1  ] )

B = matrix( F, 6, [ 2,-1,0,-j,0,0,
                    -1,2,-1,0,0,0,
                    0,-1,2,-1,0,0,
                    j,0,-1,3,-1,0,
                    0,0,0,-1,2,-1,
                    0,0,0,0,-1,1  ] )

They are of course similar, as we can quickly test:

sage: A.is_similar(B)
True

But the plain request

sage: A.is_similar(B, transformation=1)

takes too long to be executed. Why?! We will see soon.

Some useful information is as follows:

sage: A.is_diagonalizable()
True
sage: B.is_diagonalizable()
True
sage: A.minpoly() == A.charpoly()
True
sage: B.minpoly() == B.charpoly()
True

So a solution to the problem can be found by getting the Jordan normal forms, and the transformations. Let us see.

We compute $T_A,T_B$ so that we have: $$ T_A\; J = A\cdot T_A\ .$$ $$ T_B\; J = B\cdot T_B\ .$$

Code:

JA, TA = A.jordan_form( transformation=1 )
JB, TB = B.jordan_form( transformation=1 )

print "Is JA == JB?", JA==JB

This gives:

Is JA == JB? True

So let us define:

J = JA

Let us test the equalities:

print "TA*J == A*TA is", TA*J == A*TA
print "TB*J == B*TB is", TB*J == B*TB

which gives:

TA*J == A*TA is True
TB*J == B*TB is True

Then from $J=T_A^{-1}AT_A=T_B^{-1}BT_B$ we get immediately $$ A = (T_AT_B^{-1})\cdot B\cdot (T_BT_A^{-1})\ ,$$ so it seems that we only need to ask for one or both matrices

TB * TA.inverse()
TA * TB.inverse()

and we are done. Well, first of all, inverses of $6\times 6$ matrices with entries in QQbar are hard to compute. We simply type

TA.inverse()

and start to wait. So here is the problem. At this stage we can involve either mathematics or a better (programmatic) framework to solve the problem.
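As a side remark (a sketch, not part of the original answer): it is the exact QQbar arithmetic, not the matrix size, that is expensive. The same inverse over the floating-point complex field is instantaneous, at the price of being only an approximation:

sage: TA.change_ring(CDF).inverse()   # fast, but only numeric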

(1) Using mathematics

In order to avoid taking inverses, we use the following trick. We take the Jordan normal form for BB, the transpose of B. Let $B'$ be mathematically the transposed matrix of $B$, and write $T'$ for the transpose of a matrix $T$. Transposing the equality $$ T_{B'}\; J = B'\; T_{B'} $$ (and using that $J$ is diagonal, so $J' = J$) gives $$ T'_{B'}\; B = J\; T'_{B'}\ .$$ Here, $J=J_B=J_{B'}$ is the same Jordan normal form. Then from $$ J=T_A^{-1}AT_A=T'_{B'}B(T'_{B'})^{-1} $$ we see that the matrix $ S = T_A\cdot T'_{B'} $ is our friend. Code for this path:

BB = B.transpose()

JA , TA  = A .jordan_form( transformation=1 )
JBB, TBB = BB.jordan_form( transformation=1 )

print "Is JA == JBB?", JA == JBB    # yes...

# J = JA
S = TA * TBB.transpose()
print "A*S - S*B is the following matrix:"
print A*S - S*B

And we get:

Is JA == JBB? True
A*S - S*B is the following matrix:
[0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-11 + 0.?e-11*I 0.?e-12 + 0.?e-12*I]
[0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-11 + 0.?e-11*I 0.?e-12 + 0.?e-12*I]
[0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-11 + 0.?e-11*I 0.?e-12 + 0.?e-12*I]
[0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I]
[0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I]
[0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-12 + 0.?e-12*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I 0.?e-11 + 0.?e-11*I]

But we have to wait again too long for

print "Is A*S == S*B?", A*S == S*B    # looooooooong tie...

So even recognizing whether some numbers are zero takes too long...
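A cheap partial check is still possible (a sketch; it is only a necessary condition, not a proof): verify that an interval approximation of every entry of A*S - S*B contains zero, without forcing any exact computation.

sage: all(CIF(t).contains_zero() for t in (A*S - S*B).list())
True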

(2) Using linear algebra

Note that a base change matrix realizing the Jordan normal form is determined only up to multiplication by some invertible diagonal matrix. (The diagonal matrices form the centralizer of $J$: they commute with $J$, of course, since $J$ is diagonal with distinct eigenvalues.) This is so since its columns, eigenvectors for the corresponding eigenvalues in the Jordan matrix, are determined only up to a (non-zero) multiplicative constant.
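To spell this out (a short derivation added for clarity): if $T_A\,J = A\,T_A$ and $D$ is any invertible diagonal matrix, then $D$ commutes with the diagonal matrix $J$, hence $$ (T_A D)\,J = T_A\,J\,D = A\,(T_A D)\ , $$ so $T_A D$ is just as good a transformation matrix as $T_A$.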

Now let us imagine that $X$ is a $6\times 6$ matrix whose entries are unknowns. We want to solve the linear system $AX=XB$. It is natural to get a solution space of dimension $6$. That is exactly what the following code does.

A basis of this vector space is built out of six matrices, which the code prints. Each linear combination of them delivers a matrix intertwining $A$ and $B$. The code:

F.<J> = QuadraticField( -1 )
A = matrix( F, 6, [ 2,-1,-J,0,0,0,
                    -1,2,-1,0,0,0,
                    J,-1,3,-1,0,0,
                    0,0,-1,2,-1,0,
                    0,0,0,-1,2,-1,
                    0,0,0,0,-1,1  ] )

B = matrix( F, 6, [ 2,-1,0,-J,0,0,
                    -1,2,-1,0,0,0,
                    0,-1,2,-1,0,0,
                    J,0,-1,3,-1,0,
                    0,0,0,-1,2,-1,
                    0,0,0,0,-1,1  ] )

a = A.change_ring( QQbar )
b = B.change_ring( QQbar )

R = range(6)
xvars = var( ','.join( [ 'x%s%s' % (j,k) for j in R for k in R ] ) )
X = matrix( 6, 6, xvars )
eqs = [ ( A*X )[j,k] == (X*B)[j,k] for j in R for k in R ]
sol = solve( eqs, xvars, solution_dict=1 )[0]

S = matrix( 6, 6, [ sol[ xvar ] for xvar in xvars ] )
newvars = list( set( [ v for j in R for k in R for v in S[j,k].variables() ] ) )
# extract from S these new variables

RT = range(len(newvars)) # this is range(6)
T = [ matrix( 6, 6, [ QQbar( S[j,k].coefficient( newvars[t] ) ) for j in R for k in R ] )
      for t in RT ]

for t in RT:
    U = T[t]
    print ( "T[%s] is the following matrix:\n%s\ndet( T[%s] ) = %s\nThe relation A*T[%s] == T[%s]*B is %s\n"
            % ( t, U, t, U.det(), t, t, a*U == U*b ) )

And the results are:

T[0] is the following matrix:
[   I + 1        0    I + 1        1        0        I]
[      -I    I + 1        1        0    I + 1        0]
[       0   -I + 1        0    I + 2        0        1]
[  -I + 1     -2*I    I + 1        I        2        1]
[-2*I + 1    I + 1       -I        2    I + 1        2]
[       0   -I + 1        2        1        2    I + 2]
det( T[0] ) = 16*I + 112
The relation A*T[0] == T[0]*B is True

T[1] is the following matrix:
[     0      0      0      0      I  I + 1]
[     0      0      0      0      1  I + 1]
[     0      0      0      1      1      1]
[    -I      0      1      1      1      1]
[    -I -I + 1      1      1      1      1]
[-I + 1 -I + 1 -I + 1      1      1      1]
det( T[1] ) = 4*I - 4
The relation A*T[1] == T[1]*B is True

T[2] is the following matrix:
[    -1      0      I      0 -I + 1      0]
[     0  I - 1      0      1      0     -I]
[     0      0      0     -1      0     -I]
[     0 -I + 1     -2     -1      0     -I]
[     1     -2     -I      0 -I - 1      0]
[    -2      0      0     -I      0     -1]
det( T[2] ) = -32*I - 32
The relation A*T[2] == T[2]*B is True

T[3] is the following matrix:
[    I     1     I     0     0     0]
[    1     I     1     0     0     0]
[    0     1     0 I + 1     0     0]
[   -I     1     I     0 I + 1     0]
[    1    -1     1 I + 1     0 I + 1]
[    0     1     0     0 I + 1 I + 1]
det( T[3] ) = 8*I + 8
The relation A*T[3] == T[3]*B is True

T[4] is the following matrix:
[       2        0        1        0       -I        1]
[       0        2       -I        0        1       -I]
[       0       -I        1   -I + 1        0       -I]
[      -1   -I - 1        2        1 -2*I + 1        0]
[-2*I - 1    I + 1   -I - 1 -2*I + 1        1   -I + 1]
[       0 -2*I - 1       -I        0   -I + 1   -I + 2]
det( T[4] ) = 212*I + 280
The relation A*T[4] == T[4]*B is True

T[5] is the following matrix:
[     1      0      2      0     -1     -I]
[    -I      2      0      0     -I     -1]
[     1     -I      0 -I + 1      0     -1]
[     0 -I - 1      1      I     -I      0]
[-I - 1  I + 1     -1     -I      I -I + 1]
[     I     -1      0      0 -I + 1      1]
det( T[5] ) = -40*I + 36
The relation A*T[5] == T[5]*B is True

On the free market I would maybe be able to sell only T[1]; nobody would buy any other intertwiner, too complicated...


Comments

I have two questions:

  1. It seems that your trick of taking the transpose is general: could we modify is_similar to compute the transformation matrix using your trick? It would presumably be more efficient!
  2. Do you think that taking the inverse of $6\times 6$ matrices over QQbar has to be costly, or is it simply due to SageMath's current implementation?
B r u n o (2017-11-02 14:22:47 +0200)
  1. Yes, the trick should work in the generic case. (There should also be a check in between that the diagonals of the J-matrices match. The trick works only if J is diagonal; otherwise the standard-form blocks are switched to the other side of the diagonal...)

  2. It should be relatively simple - in principle - to invert a matrix: Gaussian elimination against the identity matrix, with $N$ elimination steps, each needing one of the $N$ pivot elements. So we need a quick check whether an element is zero. I think this is the drawback when working "uncontrolled" in QQbar. An algorithm to decide immediately whether an element is zero or not is needed, and that is all that is needed. All other computations seem to work well. I also tried to work in the Galois closure. No chance...

dan_fulea (2017-11-02 20:54:36 +0200)
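To summarize the trick from this thread in one place, here is a hypothetical helper (a sketch, not an existing Sage function; it assumes both matrices are diagonalizable and that the two Jordan forms come out with the same eigenvalue ordering, as they did above):

def similarity_transform(A, B):
    # Returns S with A*S == S*B, using the transpose trick from dan_fulea's answer.
    JA, TA = A.jordan_form(transformation=True)
    JB, TBt = B.transpose().jordan_form(transformation=True)
    if JA != JB:
        raise ValueError("Jordan forms do not match (different eigenvalue order?)")
    return TA * TBt.transpose()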
