Lecture 05

[[lecture-data]]

2024-09-06

Readings

1. Eigenvalues and Similarity

Last time, we talked about diagonalization $A = SDS^{-1}$: $A$ has the same eigenvalues as $D$, and those are easy to read off. Not only are the diagonal entries of $D$ the eigenvalues, but the columns of $S$ are the eigenvectors!

In particular, the change of basis from $A$ to $D$ is through eigenvectors. In the eigenvector basis given by the columns of $S$, $A$ is just a diagonal operator (it breaks a vector into individual coordinates and scales along each of those dimensions).

We can call diagonalization an eigenvalue-eigenvector decomposition.
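A quick numerical sanity check of this decomposition (a minimal sketch using NumPy; the matrix is a made-up example):

```python
import numpy as np

# A hypothetical 2x2 example; any diagonalizable matrix works.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, S = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Check A = S D S^{-1}: in the eigenvector basis,
# A acts as the diagonal operator D.
print(np.allclose(A, S @ D @ np.linalg.inv(S)))  # True
```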

Suppose we have a system of differential equations like $y' = Ay$, which might be tricky to solve since each derivative is in terms of the other functions. But if $A$ is diagonalizable, we can change variables to $z = S^{-1}y$, which decouples the system into $z' = Dz$; each equation $z_i' = \lambda_i z_i$ is much easier to deal with, as in the sketch below.
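A sketch of that change of variables, assuming $A$ is diagonalizable; the matrix and initial condition are made-up examples:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical system y' = Ay with a diagonalizable A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
y0 = np.array([1.0, 0.0])  # hypothetical initial condition

eigenvalues, S = np.linalg.eig(A)

def y(t):
    # Change basis: z = S^{-1} y decouples the system into
    # z_i' = lambda_i z_i, solved by z_i(t) = z_i(0) e^{lambda_i t}.
    z0 = np.linalg.solve(S, y0)
    z_t = z0 * np.exp(eigenvalues * t)
    # Change back to the original coordinates.
    return S @ z_t

# Spot-check against the matrix-exponential solution y(t) = e^{tA} y0.
print(np.allclose(y(1.0), expm(1.0 * A) @ y0))  # True
```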

$S_i$

Let $A \in M_n(\mathbb{C})$ have eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. For $i = 1, \ldots, n$, define

$$S_i := \sum_{\substack{\text{multisubsets } U \\ \text{of } i \text{ eigenvalues}}} \; \prod_{\lambda \in U} \lambda$$
Example

$$S_2 = \lambda_1\lambda_2 + \cdots + \lambda_1\lambda_n + \cdots + \lambda_{n-1}\lambda_n$$
$$S_3 = \lambda_1\lambda_2\lambda_3 + \lambda_1\lambda_2\lambda_4 + \cdots + \lambda_{n-2}\lambda_{n-1}\lambda_n$$
Take all $\binom{n}{i}$ ways to multiply $i$ of the eigenvalues together, and then add them up.
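A direct (if inefficient) way to compute the $S_i$ from a list of eigenvalues, as a sketch; the helper name `S` is just for illustration. Since the eigenvalues are listed with multiplicity, `itertools.combinations` over the list enumerates exactly the size-$i$ multisubsets:

```python
import numpy as np
from itertools import combinations
from math import prod

def S(eigenvalues, i):
    # Sum, over every size-i multisubset of the eigenvalue list
    # (repeated eigenvalues are used with multiplicity), of the
    # product of the chosen eigenvalues.
    return sum(prod(c) for c in combinations(eigenvalues, i))

eigs = np.linalg.eigvals(np.array([[4.0, 1.0],
                                   [2.0, 3.0]]))  # eigenvalues 5 and 2
print(S(eigs, 1))  # 7.0  (= 5 + 2)
print(S(eigs, 2))  # 10.0 (= 5 * 2)
```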

$E_i$ and Principal Submatrices

$$E_i := \sum_{\substack{\text{all } \binom{n}{i} \text{ principal} \\ i \times i \text{ submatrices } M}} \det M$$

Submatrices: delete some rows and columns. We call the result a principal submatrix if we delete the same rows as columns (whenever we delete the $i$th row, we also delete the $i$th column).

$$E_1 = a_{11} + \cdots + a_{nn} = \operatorname{Tr}(A)$$
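A matching sketch for the $E_i$, summing the principal minors directly; the helper name `E` is again just illustrative:

```python
import numpy as np
from itertools import combinations

def E(A, i):
    # Sum of the determinants of all (n choose i) principal
    # i x i submatrices: keep the same index set for rows and columns.
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in combinations(range(n), i))

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(E(A, 1))  # E_1 = Tr(A)  = 7.0
print(E(A, 2))  # E_2 = det(A) = 10.0
```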

Proposition

If $A \in M_n$, then

$$P_A = \lambda^n - S_1\lambda^{n-1} + S_2\lambda^{n-2} - \cdots \pm S_n\lambda^0 = \lambda^n - E_1\lambda^{n-1} + E_2\lambda^{n-2} - \cdots \pm E_n\lambda^0$$

It's easy to see why the first equality holds. If we factor the polynomial into $P_A = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n)$ and expand, we get exactly the $S_i$ (up to sign) as the coefficients. (When we select the $-\lambda_j$ factor from $i$ of the terms and $\lambda$ from the rest, we get a $\lambda^{n-i}$ term with coefficient exactly $(-1)^i S_i$.)
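We can sanity-check the proposition numerically on a made-up matrix, comparing NumPy's characteristic polynomial coefficients against the signed $S_i$:

```python
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(0)
A = rng.random((4, 4))
A = A + A.T  # symmetric, so the eigenvalues are real

# np.poly(A) gives the characteristic polynomial coefficients,
# highest power first: [1, -S_1, S_2, -S_3, S_4].
coeffs = np.poly(A)

eigs = np.linalg.eigvals(A)
S = [sum(prod(eigs[j] for j in idx) for idx in combinations(range(4), i))
     for i in range(1, 5)]

print(np.allclose(coeffs[1:], [(-1) ** i * S[i - 1] for i in range(1, 5)]))
# True
```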

Exercise

Prove the second equality via induction, using the Laplace expansion.

Consequences:
$$\det A = \prod_{\lambda \in \sigma(A)} \lambda$$
For a diagonalizable matrix, this is easy to see, but it is useful to know for matrices that are not diagonalizable too. Another perspective: when there is a $0$ eigenvalue, the determinant collapses to zero and the matrix is singular; when there is no $0$ eigenvalue, the matrix is invertible, which we saw last time.
(see determinant)

$$\operatorname{Tr} A = \sum_{i=1}^n \lambda_i$$
This implies that the traces of similar matrices are the same! So the trace is a property of the transformation itself, which is not necessarily obvious.
(see trace)
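Both consequences are easy to check numerically on a random (not necessarily diagonalizable) matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 5))  # a hypothetical test matrix
eigs = np.linalg.eigvals(A)

print(np.allclose(np.prod(eigs), np.linalg.det(A)))  # det A = product of eigenvalues
print(np.allclose(np.sum(eigs), np.trace(A)))        # Tr A  = sum of eigenvalues
```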

Coming up:

Multiplying partitioned matrices
Suppose $A$ is partitioned, not necessarily into equal-sized blocks. Suppose $B$ is partitioned conformally (its rows are partitioned in the same way as the columns of $A$).

The $(i,j)$th block $A_{ij}$ is an $m_i \times n_j$ submatrix
$B_{ij}$ is $n_i \times p_j$

$AB = C$, where $C_{ij}$ is $m_i \times p_j$ and $$C_{ij} = \sum_{k=1}^{s} A_{ik} B_{kj}$$ (here $s$ is the number of column blocks of $A$).
This is suspiciously like multiplying ordinary matrices with single entries! And each product $A_{ik}B_{kj}$ in the sum has size $m_i \times p_j$. So why does this work?

Say I want to take a single row and a single column and compute their inner product, as for a "normal" matrix with single entries. Block multiplication does the same thing, but chunk by chunk.
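A sketch of a conformal partition, checking the block formula against the ordinary product; the block sizes are a made-up example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical irregular partition: A's row blocks have heights (2, 1),
# its column blocks have widths (1, 2); B is partitioned conformally,
# with row blocks of heights (1, 2) and column blocks of widths (3, 1).
A11, A12 = rng.random((2, 1)), rng.random((2, 2))
A21, A22 = rng.random((1, 1)), rng.random((1, 2))
B11, B12 = rng.random((1, 3)), rng.random((1, 1))
B21, B22 = rng.random((2, 3)), rng.random((2, 1))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# Block formula: C_ij = sum_k A_ik B_kj, a block of size m_i x p_j.
C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])

print(np.allclose(C, A @ B))  # True: blockwise product = entrywise product
```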

Permutation Matrices

Permutation Matrix

A permutation matrix is a square matrix $P$ with exactly one $1$ in every row, exactly one $1$ in every column, and zeroes everywhere else.

(Multiplying $PA$ reorders the rows of $A$, and multiplying $BP^T$ reorders the columns of $B$, according to the pattern of the rows or columns of $P$ respectively.)

Note that $PP^T = I$, so $P^T = P^{-1}$. Also, $PDP^T$ reorders the diagonal entries of a diagonal matrix $D$.

Conjugation by $P$, i.e. $A \mapsto PAP^T = PAP^{-1}$, is also a similarity transformation.

(see permutation matrix)
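A small illustration of these facts (the particular permutation is made up):

```python
import numpy as np

# Hypothetical 3x3 permutation matrix: one 1 per row and column.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

A = np.arange(9).reshape(3, 3)

print(P @ A)    # rows of A rearranged according to the rows of P
print(A @ P.T)  # columns of A rearranged the same way

# P P^T = I, so P^T = P^{-1}.
print(np.array_equal(P @ P.T, np.eye(3, dtype=int)))  # True

# Conjugating a diagonal matrix reorders its diagonal entries.
D = np.diag([10, 20, 30])
print(P @ D @ P.T)  # diag(20, 30, 10)
```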

Suppose $C$ is block triangular (all the blocks on one side of the block diagonal are zero, so the diagonal blocks $C_{ii}$ are square). Then $$\sigma(C) = \bigcup_{i=1}^{k} \sigma(C_{ii})$$

This is because $$\det(\lambda I - C) = \prod_{i=1}^{k} \det(\lambda I - C_{ii}),$$ so the characteristic polynomial of $C$ is the product of those of the diagonal blocks.
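A quick check on a made-up block upper triangular matrix:

```python
import numpy as np

# Hypothetical block upper triangular C: the (2,1) block is zero.
C11 = np.array([[1.0, 2.0],
                [0.0, 3.0]])
C22 = np.array([[5.0, 0.0],
                [1.0, 4.0]])
C12 = np.ones((2, 2))  # the off-diagonal block does not affect the spectrum

C = np.block([[C11, C12],
              [np.zeros((2, 2)), C22]])

# sigma(C) is the union of the spectra of the diagonal blocks.
print(sorted(np.linalg.eigvals(C)))  # [1.0, 3.0, 4.0, 5.0]
print(sorted(np.linalg.eigvals(C11)), sorted(np.linalg.eigvals(C22)))
# [1.0, 3.0] and [4.0, 5.0]
```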

Next time: if $A$ and $B^T$ have the same dimensions (i.e. $A$ is $m \times n$ and $B$ is $n \times m$), then $AB$ and $BA$ have the same nonzero eigenvalues.