
Module 1-2: Linear Transformations
Linear Decomposition
Linear Control Systems (2019)
Ding Zhao
Assistant Professor, College of Engineering, School of Computer Science
Carnegie Mellon University
Ding Zhao (CMU) M1-2: Linear Transformations 1 / 37

Table of Contents
1 Limitations of Jordan Decomposition
2 Singular Value Decomposition
3 Functions of Matrices
4 Cayley Hamilton Theorem


Numerical Issues using Jordan Decomposition (OTB)
What is the rank of
A = \begin{bmatrix} 1 & 10^{10} \\ 0 & 1 \end{bmatrix} ⇒ rank = 2.
Now check the rank of A + E, where E = \begin{bmatrix} 0 & 0 \\ 10^{-10} & 0 \end{bmatrix} is a small numerical error:
A + E = \begin{bmatrix} 1 & 10^{10} \\ 10^{-10} & 1 \end{bmatrix} ⇒ rank = 1.
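The rank claims above are easy to check by hand: a 2×2 matrix has rank 2 exactly when its determinant is nonzero. A minimal sketch in plain Python (no linear-algebra library assumed):

```python
# Rank of a 2x2 matrix via its determinant: rank 2 iff det != 0.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1.0, 1e10],
     [0.0, 1.0]]
E = [[0.0,   0.0],
     [1e-10, 0.0]]
AE = [[A[i][j] + E[i][j] for j in range(2)] for i in range(2)]

print(det2(A))   # 1.0 -> rank 2
print(det2(AE))  # 1 - 1e10 * 1e-10 = 0.0 -> rank 1
```

The perturbation of size 10^{-10} moves the determinant from 1 to 0, which is exactly the rank drop described above.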

Cont.
Jordan decomposition of A: λ = 1, q = rank(A − λI) = 1, type II-2, J = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, rank(A) = 2.
v_1 = [1, 0]^T; Av_2 = λ v_2 + v_1:
\begin{bmatrix} 0 & 10^{10} \\ 0 & 0 \end{bmatrix} v_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} ⇒ v_2 = [0, 10^{-10}]^T
M = \begin{bmatrix} 1 & 0 \\ 0 & 10^{-10} \end{bmatrix}, \quad M^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 10^{10} \end{bmatrix}
MJM^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 10^{-10} \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 10^{10} \end{bmatrix} = \begin{bmatrix} 1 & 10^{10} \\ 0 & 1 \end{bmatrix} = A
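The factorization MJM^{-1} = A can be multiplied out directly; a minimal sketch (plain Python, 2×2 only):

```python
def mm(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M    = [[1.0, 0.0], [0.0, 1e-10]]
J    = [[1.0, 1.0], [0.0, 1.0]]
Minv = [[1.0, 0.0], [0.0, 1e10]]
A    = [[1.0, 1e10], [0.0, 1.0]]

MJMinv = mm(mm(M, J), Minv)  # should reproduce A
```

Note the scale spread inside M (entries 1 and 10^{-10}): this ill-conditioning of the eigenvector matrix is the root of the numerical fragility.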

Cont.
K = A + E, \quad K − λI = \begin{bmatrix} 1−λ & 10^{10} \\ 10^{-10} & 1−λ \end{bmatrix}
(1 − λ)^2 − 1 = λ^2 − 2λ = 0 ⇒ λ_1 = 2, λ_2 = 0; type I, det = λ_1 λ_2 = 0, rank = 1.
λ_1 = 2: (K − λ_1 I)v_1 = \begin{bmatrix} −1 & 10^{10} \\ 10^{-10} & −1 \end{bmatrix} v_1 = 0 ⇒ v_1 = \begin{bmatrix} 1 \\ 10^{-10} \end{bmatrix}
λ_2 = 0: (K − λ_2 I)v_2 = \begin{bmatrix} 1 & 10^{10} \\ 10^{-10} & 1 \end{bmatrix} v_2 = 0 ⇒ v_2 = \begin{bmatrix} 1 \\ −10^{-10} \end{bmatrix}
M = \begin{bmatrix} 1 & 1 \\ 10^{-10} & −10^{-10} \end{bmatrix}, \quad M^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2}\times 10^{10} \\ \tfrac{1}{2} & −\tfrac{1}{2}\times 10^{10} \end{bmatrix}, \quad J = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}
MJM^{-1} = \begin{bmatrix} 1 & 1 \\ 10^{-10} & −10^{-10} \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2}\times 10^{10} \\ \tfrac{1}{2} & −\tfrac{1}{2}\times 10^{10} \end{bmatrix} = \begin{bmatrix} 1 & 10^{10} \\ 10^{-10} & 1 \end{bmatrix} = A + E
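The jump from the repeated eigenvalue {1, 1} of A to {2, 0} for A + E can be reproduced from the 2×2 characteristic polynomial λ² − tr(K)λ + det(K); a sketch:

```python
import math

K = [[1.0, 1e10],
     [1e-10, 1.0]]

tr = K[0][0] + K[1][1]                        # trace = 2
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]   # det = 1 - 1 = 0
disc = math.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2   # 2.0
lam2 = (tr - disc) / 2   # 0.0
```

A perturbation of norm 10^{-10} moved both eigenvalues by 1: the eigenstructure of the defective matrix A is extremely sensitive.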

Cont.
Find M^{-1} by row-reducing [M \mid I]:
\left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 10^{-10} & −10^{-10} & 0 & 1 \end{array}\right]
⇒ \left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & −2\times 10^{-10} & −10^{-10} & 1 \end{array}\right] \quad (R_2 − 10^{-10} R_1 → R_2)
⇒ \left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & 1 & \tfrac{1}{2} & −\tfrac{1}{2}\times 10^{10} \end{array}\right] \quad (R_2 / (−2\times 10^{-10}) → R_2)
⇒ \left[\begin{array}{cc|cc} 1 & 0 & \tfrac{1}{2} & \tfrac{1}{2}\times 10^{10} \\ 0 & 1 & \tfrac{1}{2} & −\tfrac{1}{2}\times 10^{10} \end{array}\right] \quad (R_1 − R_2 → R_1)
⇒ M^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2}\times 10^{10} \\ \tfrac{1}{2} & −\tfrac{1}{2}\times 10^{10} \end{bmatrix}
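The row reduction above is ordinary Gauss–Jordan elimination on the augmented matrix [M | I]. A generic sketch with partial pivoting (plain Python; the function name is ours):

```python
def gauss_jordan_inverse(M):
    """Invert a square matrix by row-reducing the augmented matrix [M | I]."""
    n = len(M)
    aug = [list(M[i]) + [1.0 if j == i else 0.0 for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]      # normalize the pivot row
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]               # right half is M^{-1}

M = [[1.0, 1.0], [1e-10, -1e-10]]
Minv = gauss_jordan_inverse(M)
# Expected from the slide: [[1/2, 1e10/2], [1/2, -1e10/2]]
```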

Limitations of Jordan Decomposition
Recall: given (V, F) with U ⊆ V, let A : V → V be a linear transformation with matrix A in the basis (b_i)_{i=1}^n. U is A-invariant iff ∀v ∈ U, Av ∈ U.
In general, J is not real.
If real, then in general it is not diagonal.
If diagonal, then in general M is not orthogonal.


Symmetric Matrix
Given A real, it is a symmetric matrix iff A^T = A. Then:
All its eigenvalues are real.
A has n orthogonal eigenvectors.
⇒ A = M Â M^{-1}, where Â is diagonal with real values and the columns of M are orthogonal.
Real — why?
First: ⟨x_i, Ax_i⟩ = x_i^T A x_i = x_i^T A^T x_i = (Ax_i)^T x_i = ⟨Ax_i, x_i⟩
Second: Ax_i = λ_i x_i ⇒ Ax_i − λ_i x_i = 0
⇒ ⟨x_i, Ax_i − λ_i x_i⟩ = 0
⇒ ⟨x_i, Ax_i⟩ − ⟨x_i, λ_i x_i⟩ = 0
⇒ ⟨Ax_i, x_i⟩ − ⟨x_i, λ_i x_i⟩ = 0. Because Ax_i = λ_i x_i: ⟨λ_i x_i, x_i⟩ − ⟨x_i, λ_i x_i⟩ = 0
⇒ (λ̄_i − λ_i)⟨x_i, x_i⟩ = 0 ⇒ λ_i is real.
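For the 2×2 case the realness claim can also be seen without inner products: a symmetric [[a, b], [b, c]] has characteristic discriminant (a − c)² + 4b² ≥ 0, so both roots are real. A sketch (the example matrix is ours, not from the slides):

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    tr = a + c
    disc = (a - c) ** 2 + 4 * b * b   # never negative for symmetric input
    s = math.sqrt(disc)
    return (tr + s) / 2, (tr - s) / 2

lam1, lam2 = eig_sym2(2.0, 1.0, 2.0)  # [[2, 1], [1, 2]] -> eigenvalues 3 and 1
```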

Symmetric Matrix (cont.)
Orthogonal:
Ax_j = λ_j x_j. Apply the inner product with x_i:
⇒ ⟨x_i, Ax_j⟩ = ⟨x_i, λ_j x_j⟩
⇒ ⟨Ax_i, x_j⟩ = ⟨x_i, λ_j x_j⟩ (symmetric matrix)
⇒ ⟨λ_i x_i, x_j⟩ = ⟨x_i, λ_j x_j⟩ (Ax_i = λ_i x_i)
⇒ (λ̄_i − λ_j)⟨x_i, x_j⟩ = 0
⇒ (λ_i − λ_j)⟨x_i, x_j⟩ = 0 (real eigenvalues)
⇒ ⟨x_i, x_j⟩ = 0.
Note: this argument applies for distinct eigenvalues, but the conclusion also holds for repeated eigenvalues, i.e. we still have n orthogonal eigenvectors.
The same conclusion also applies to complex matrices; in that case a matrix with A = Ā^T is called a Hermitian matrix.

Symmetric Matrix (cont.)
Given A real, it is a symmetric matrix iff A^T = A. Then:
All its eigenvalues are real.
A has n orthogonal eigenvectors.
⇒ A = M Â M^{-1} = M Â M^T, where Â is diagonal with real values and the columns of M are orthogonal.

Positive Definiteness
A symmetric matrix P is positive definite if ∀x ≠ 0, x^T P x > 0. It is positive semidefinite if ∀x ≠ 0, x^T P x ≥ 0.
A symmetric matrix is positive definite (semidefinite) if either of the conditions below holds:
Every eigenvalue of P is positive (nonnegative):
x^T P x = x^T M P̂ M^T x = (M^T x)^T P̂ (M^T x) = z^T P̂ z = \sum_i p̂_{ii} z_i^2 ≥ 0, where z = M^T x.
There exists A such that P = A^T A; P is positive definite if P is nonsingular, otherwise it is positive semidefinite:
x^T P x = x^T A^T A x = (Ax)^T (Ax) = ⟨Ax, Ax⟩ ≥ 0.
In particular, A^T A is positive semidefinite and thus has nonnegative eigenvalues.
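The P = A^T A condition is easy to exercise numerically, since x^T(A^T A)x = ‖Ax‖². A sketch with a nonsingular A (random test vectors, seeded, so the check is repeatable):

```python
import random

A = [[1.0, 2.0],
     [0.0, 3.0]]
# P = A^T A, written out entrywise for the 2x2 case.
P = [[A[0][0]**2 + A[1][0]**2,            A[0][0]*A[0][1] + A[1][0]*A[1][1]],
     [A[0][0]*A[0][1] + A[1][0]*A[1][1],  A[0][1]**2 + A[1][1]**2]]

def quad_form(P, x):
    # x^T P x
    y0 = P[0][0]*x[0] + P[0][1]*x[1]
    y1 = P[1][0]*x[0] + P[1][1]*x[1]
    return x[0]*y0 + x[1]*y1

random.seed(0)
vals = [quad_form(P, [random.uniform(-1, 1), random.uniform(-1, 1)])
        for _ in range(1000)]
all_positive = all(v > 0 for v in vals)  # A nonsingular -> P positive definite
```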

Singular Value Decomposition
∀A ∈ R^{m×n} we can perform the singular value decomposition
A = UΣV^T = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix} \begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix}^T
where σ_1 ≥ σ_2 ≥ σ_3 ≥ ··· ≥ σ_r ≥ 0, U ∈ R^{m×m}, Σ ∈ R^{m×n}, V ∈ R^{n×n}. The u_i and v_i form orthonormal bases for R^m and R^n, i.e.
v_i^T v_j = 1 if i = j and 0 if i ≠ j; likewise u_i^T u_j = 1 if i = j and 0 if i ≠ j ⇒ U^T U = V^T V = I.

Derivation (OTB)

Derivation (cont.)
With A ∈ R^{m×n}, let v_i be the orthonormal eigenvectors of A^T A ∈ R^{n×n} with nonnegative eigenvalues σ_i^2, σ_i ≥ 0, such that A^T A v_i = σ_i^2 v_i for i = 1, …, n.
Without loss of generality, let v_1, …, v_r be the basis for the row space of A, where r is the rank of A. Define u_i by Av_i = σ_i u_i, where σ_i > 0, i = 1, …, r. Then the u_i form a basis of the column space: if v_1, v_2 were independent but u_1, u_2 dependent, then u_1 = k u_2 with k ≠ 0, so A(\tfrac{1}{\sigma_1} v_1 − \tfrac{k}{\sigma_2} v_2) = 0. The vector \tfrac{1}{\sigma_1} v_1 − \tfrac{k}{\sigma_2} v_2 is then in the null space but also in the row space, so \tfrac{1}{\sigma_1} v_1 − \tfrac{k}{\sigma_2} v_2 = 0, which contradicts v_1 and v_2 being independent.
Multiplying each side of A^T A v_i = σ_i^2 v_i by A gives A A^T A v_i = σ_i^2 A v_i; replacing Av_i with σ_i u_i yields A A^T u_i = σ_i^2 u_i, so the σ_i^2 and u_i are actually the eigenvalues and orthogonal eigenvectors of AA^T.
Multiplying each side by v_i^T gives v_i^T A^T A v_i = σ_i^2 v_i^T v_i. Because the v_i are orthonormal, v_i^T v_i = 1, so
v_i^T A^T A v_i = (Av_i)^T Av_i = ∥Av_i∥^2 = σ_i^2 ⇒ ∥Av_i∥ = σ_i. Hence u_i = Av_i/σ_i = Av_i/∥Av_i∥ turns out to be a normalized vector, and the u_i are orthonormal as well.
The whole construction is called the singular value decomposition (SVD). The equations Av_i = σ_i u_i mean that AV = UΣ; multiplication by V^T then gives A = UΣV^T:
AV = A\begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix} = UΣ
with σ_1 ≥ σ_2 ≥ σ_3 ≥ ··· ≥ σ_r. Moreover
AA^T = U \begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_r^2 & \\ & & & 0 \end{bmatrix} U^T, \qquad A^T A = V \begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_r^2 & \\ & & & 0 \end{bmatrix} V^T,
so the columns of U are eigenvectors of AA^T and the columns of V are eigenvectors of A^T A. Note: σ_1^2, …, σ_r^2 are the nonzero eigenvalues of both AA^T and A^T A.
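The derivation is constructive enough to implement verbatim for a full-rank 2×2 matrix: diagonalize A^T A for the v_i and σ_i², then set u_i = Av_i/σ_i. A sketch (assumes rank 2; the helper name is ours):

```python
import math

def svd2(A):
    """SVD of a full-rank 2x2 matrix, following the derivation literally."""
    (a, b), (c, d) = A
    # A^T A = [[p, q], [q, r]] is symmetric positive definite here.
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    tr = p + r
    det = (a*d - b*c) ** 2            # det(A^T A) = det(A)^2
    l1 = (tr + math.sqrt((p - r)**2 + 4*q*q)) / 2
    l2 = det / l1                     # stable form of the smaller eigenvalue
    sig = [math.sqrt(l1), math.sqrt(l2)]
    # v_1 from (A^T A - l1 I)v = 0; v_2 is its orthogonal complement.
    v1 = [q, l1 - p] if q != 0 else ([1.0, 0.0] if p >= r else [0.0, 1.0])
    n1 = math.hypot(v1[0], v1[1])
    v1 = [v1[0] / n1, v1[1] / n1]
    v2 = [-v1[1], v1[0]]
    V = [v1, v2]                      # rows are the v_i
    # u_i = A v_i / sigma_i
    U = [[(a*v[0] + b*v[1]) / s, (c*v[0] + d*v[1]) / s]
         for v, s in zip(V, sig)]     # rows are the u_i
    return U, sig, V

U, sig, V = svd2([[1.0, 2.0], [0.0, 3.0]])
```

For this example sig[0]·sig[1] equals |det A| = 3, and the rows of U come out orthonormal automatically, just as the derivation promises.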

How Do Singular Values Deal with Numerical Issues?
The rank of A = \begin{bmatrix} 1 & 10^{10} \\ 0 & 1 \end{bmatrix} is 2.
Calculate the rank of A + E = \begin{bmatrix} 1 & 10^{10} \\ 10^{-10} & 1 \end{bmatrix}, where E = \begin{bmatrix} 0 & 0 \\ 10^{-10} & 0 \end{bmatrix} ⇒ rank = 1.
Let σ_1 ≥ σ_2 ≥ ··· ≥ σ_r > 0 be the singular values of A. Here σ_1(A) ≈ 10^{10} and σ_2(A) ≈ 10^{-10}, which is comparable to σ_1(E), the largest singular value of E; the tiny σ_2 warns that the rank is numerically fragile. SVD is the most robust way of computing the rank of a matrix.
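The two singular values of A can be computed from A^T A. Note the guarded formula l2 = det/l1 below: the textbook quadratic formula would cancel catastrophically at this scale. A sketch:

```python
import math

A = [[1.0, 1e10],
     [0.0, 1.0]]
(a, b), (c, d) = A
p, r = a*a + c*c, b*b + d*d
tr = p + r                       # trace of A^T A, about 1e20
det = (a*d - b*c) ** 2           # det(A^T A) = det(A)^2 = 1, computed stably
l1 = (tr + math.sqrt(tr*tr - 4*det)) / 2
l2 = det / l1                    # avoids the cancellation in tr - sqrt(...)
sigma1, sigma2 = math.sqrt(l1), math.sqrt(l2)
# sigma1 ~ 1e10 and sigma2 ~ 1e-10: the perturbation E, whose largest
# singular value is 1e-10, is exactly big enough to wipe out sigma2,
# so the numerically meaningful rank of A is already borderline.
```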

Then why do we want to study Jordan Canonical Form?
We want to use invariant subspace to efficiently compute functions of matrices (dynamic systems).

Recap: Singular Value Decomposition
A real matrix is a symmetric matrix iff A^T = A. All its eigenvalues are real, and A has n orthogonal eigenvectors.
A symmetric matrix P is positive definite if ∀x ≠ 0, x^T P x > 0. It is positive semidefinite if x^T P x ≥ 0. A symmetric matrix is positive definite (semidefinite) if either of the conditions below holds:
Every eigenvalue of P is positive (nonnegative).
If ∃A such that P = A^T A, P is positive definite if P is nonsingular; otherwise it is positive semidefinite.
∀A ∈ R^{m×n}, the singular value decomposition decomposes
A = UΣV^T = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix} \begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix}^T
where σ_1 ≥ σ_2 ≥ σ_3 ≥ ··· ≥ σ_r ≥ 0, U ∈ R^{m×m}, Σ ∈ R^{m×n}, V ∈ R^{n×n}; the u_i and v_i form orthonormal bases for R^m and R^n.


Module 1-2: Linear Transformations
Functions of Matrices
Linear Control Systems (2019)
Ding Zhao
Assistant Professor, College of Engineering, School of Computer Science
Carnegie Mellon University


A^k
Often we are particularly interested in one computation: A^k. Diagonalization helps a lot here because
A = MΛM^{-1}
A^k = MΛM^{-1}\,MΛM^{-1} \cdots MΛM^{-1} = MΛ^k M^{-1}
Even the Jordan form helps a lot, because
A = MJM^{-1}
A^k = MJM^{-1}\,MJM^{-1} \cdots MJM^{-1} = MJ^k M^{-1}
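A quick check of A^k = MΛ^k M^{-1} against naive repeated multiplication, using A = [[1, 2], [0, 3]] (eigenvalues 1 and 3, eigenvectors [1, 0]^T and [1, 1]^T):

```python
def mm(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[1.0, 2.0], [0.0, 3.0]]
M    = [[1.0, 1.0], [0.0, 1.0]]   # eigenvectors as columns
Minv = [[1.0, -1.0], [0.0, 1.0]]

k = 10
Lk = [[1.0 ** k, 0.0], [0.0, 3.0 ** k]]  # Lambda^k: k-th powers on the diagonal
Ak_fast = mm(mm(M, Lk), Minv)

Ak_naive = [[1.0, 0.0], [0.0, 1.0]]       # k multiplications by A
for _ in range(k):
    Ak_naive = mm(Ak_naive, A)
```

The diagonalized route needs two fixed matrix products plus scalar powers, versus k matrix products; for large k (or for e^{At} later) that difference is the whole point.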


Cayley Hamilton Theorem
Given A ∈ R^{n×n} with characteristic polynomial
∆(λ) = det(λI − A) = λ^n + α_{n−1}λ^{n−1} + … + α_1 λ + α_0,
∆(A) = A^n + α_{n−1}A^{n−1} + … + α_1 A + α_0 I = 0.
That is, A satisfies its own characteristic equation, so
A^n = −α_{n−1}A^{n−1} − α_{n−2}A^{n−2} − … − α_1 A − α_0 I.

Application of Cayley Hamilton Theorem: A^{-1}
A quick application:
∆(A) = A^n + α_{n−1}A^{n−1} + … + α_1 A + α_0 I = 0
⇒ −α_0 I = A(A^{n−1} + α_{n−1}A^{n−2} + … + α_1 I)
If α_0 ≠ 0, we can compute
A^{-1} = −\tfrac{1}{α_0}(A^{n−1} + α_{n−1}A^{n−2} + … + α_1 I)
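For n = 2 the characteristic polynomial is λ² − tr(A)λ + det(A), so the recipe gives A^{-1} = (tr(A)·I − A)/det(A). A sketch with the example matrix used later in the slides:

```python
def mm(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [0.0, 3.0]]
tr = A[0][0] + A[1][1]                    # alpha_1 = -tr
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # alpha_0 = det
# Cayley-Hamilton: A^2 - tr*A + det*I = 0  =>  A^{-1} = (tr*I - A)/det
I2 = [[1.0, 0.0], [0.0, 1.0]]
Ainv = [[(tr * I2[i][j] - A[i][j]) / det for j in range(2)] for i in range(2)]

check = mm(A, Ainv)  # should be the identity
```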

Application of Cayley Hamilton Theorem: p(A)
Another application is to calculate polynomial functions of A ∈ R^{n×n}. Let p(A) = k_m A^m + … + k_1 A + k_0 I, where m is an arbitrary integer.
∆(A) = A^n + α_{n−1}A^{n−1} + … + α_1 A + α_0 I = 0 ⇒ A^n = −α_{n−1}A^{n−1} − α_{n−2}A^{n−2} − … − α_1 A − α_0 I
A^{n+1} = A\,A^n = −α_{n−1}A^n − α_{n−2}A^{n−1} − … − α_1 A^2 − α_0 A = −α_{n−1}(−α_{n−1}A^{n−1} − … − α_1 A − α_0 I) − α_{n−2}A^{n−1} − … − α_1 A^2 − α_0 A
This implies that any polynomial p(A), no matter its degree, can be written as
g(A) = β_{n−1}A^{n−1} + … + β_1 A + β_0 I

Application of Cayley Hamilton Theorem: p(A)
We can do the same thing for the characteristic equation:
∆(λ) = λ^n + α_{n−1}λ^{n−1} + … + α_1 λ + α_0 = 0 ⇒ λ^n = −α_{n−1}λ^{n−1} − … − α_1 λ − α_0
λ^{n+1} = λ\,λ^n = −α_{n−1}λ^n − α_{n−2}λ^{n−1} − … − α_1 λ^2 − α_0 λ = −α_{n−1}(−α_{n−1}λ^{n−1} − … − α_1 λ − α_0) − α_{n−2}λ^{n−1} − … − α_1 λ^2 − α_0 λ
⇒ p(λ) = g(λ) = β_{n−1}λ^{n−1} + … + β_1 λ + β_0
where p(λ) is a polynomial of arbitrary degree and g(λ) is an (n − 1)st order polynomial in λ.

Extend Cayley Hamilton Theorem to Any Analytic Function
Since we know how to compute p(A), we can actually compute any function f(x) that can be approximated by a convergent polynomial series. Typical examples include:
sin(A) = A − \tfrac{A^3}{3!} + \tfrac{A^5}{5!} − \tfrac{A^7}{7!} + …
cos(A) = I − \tfrac{A^2}{2!} + \tfrac{A^4}{4!} − \tfrac{A^6}{6!} + …
e^{At} = I + At + \tfrac{A^2 t^2}{2!} + \tfrac{A^3 t^3}{3!} + …

Application of Cayley Hamilton Theorem
When the eigenvalues λ_i are distinct, solve for the β_i from the n equations:
f(λ_1) = g(β_{0:n−1}, λ_1)
f(λ_2) = g(β_{0:n−1}, λ_2)
⋮
f(λ_n) = g(β_{0:n−1}, λ_n)
Then calculate f(A) = g(A) given the β_i.

Example (OTB)
Find the matrix e^{At}, where A = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}.
The characteristic polynomial is ∆(λ) = (λ − 1)(λ − 3). The eigenvalues are λ = 1, 3. Recall:
f(λ) = g(λ) = β_{n−1}λ^{n−1} + … + β_1 λ + β_0
f(λ_1) = e^{λ_1 t} = β_1 λ_1 + β_0 ⇒ e^t = β_1 + β_0
f(λ_2) = e^{λ_2 t} = β_1 λ_2 + β_0 ⇒ e^{3t} = 3β_1 + β_0
⇒ β_0 = \tfrac{3e^t − e^{3t}}{2}, \quad β_1 = \tfrac{e^{3t} − e^t}{2}
f(A) = e^{At} = β_1 A + β_0 I = \begin{bmatrix} e^t & e^{3t} − e^t \\ 0 & e^{3t} \end{bmatrix}
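The closed form can be sanity-checked against a truncated Taylor series for e^{At}; a sketch at t = 0.1:

```python
import math

def mm(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [0.0, 3.0]]
t = 0.1

# Closed form from Cayley-Hamilton with eigenvalues 1 and 3:
e1, e3 = math.exp(t), math.exp(3 * t)
beta1 = (e3 - e1) / 2
beta0 = (3 * e1 - e3) / 2
eAt_ch = [[beta1 * A[i][j] + (beta0 if i == j else 0.0) for j in range(2)]
          for i in range(2)]       # [[e^t, e^{3t}-e^t], [0, e^{3t}]]

# Truncated series I + At + (At)^2/2! + ...
At = [[x * t for x in row] for row in A]
eAt_series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 25):
    term = [[v / k for v in row] for row in mm(term, At)]
    eAt_series = [[eAt_series[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
```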

Example: Repeated Eigenvalues
Find the matrix e^{At}, where A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.
Repeated eigenvalues: the characteristic polynomial is ∆(λ) = (λ − 1)^2, so λ = 1. Recall:
f(λ) = e^{λt} = β_1 λ + β_0 ⇒ e^t = β_1 + β_0
\tfrac{df(λ)}{dλ} = \tfrac{dg(λ)}{dλ} ⇒ t e^{λt} = β_1 ⇒ t e^t = β_1
⇒ β_1 = t e^t, \quad β_0 = e^t − t e^t
f(A) = e^{At} = β_1 A + β_0 I = \begin{bmatrix} e^t & 2t e^t \\ 0 & e^t \end{bmatrix}
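Here the derivative trick can be cross-checked without any series: A = I + N with N = [[0, 2], [0, 0]] nilpotent (N² = 0), so e^{At} = e^t(I + Nt) exactly. A sketch at t = 0.1:

```python
import math

A = [[1.0, 2.0], [0.0, 1.0]]
t = 0.1

# Cayley-Hamilton with the derivative condition (repeated eigenvalue 1):
beta1 = t * math.exp(t)
beta0 = math.exp(t) - t * math.exp(t)
eAt_ch = [[beta1 * A[i][j] + (beta0 if i == j else 0.0) for j in range(2)]
          for i in range(2)]

# Independent closed form: e^{At} = e^t (I + N t) since N^2 = 0.
et = math.exp(t)
eAt_exact = [[et, 2 * t * et],
             [0.0, et]]
```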

Repeated Eigenvalues
When λ_i repeats m times, use the derivatives of f and g to populate the equations:
\tfrac{df(λ_i)}{dλ_i} = \tfrac{dg(λ_i)}{dλ_i}
\tfrac{d^2 f(λ_i)}{dλ_i^2} = \tfrac{d^2 g(λ_i)}{dλ_i^2}
⋮
\tfrac{d^{m−1} f(λ_i)}{dλ_i^{m−1}} = \tfrac{d^{m−1} g(λ_i)}{dλ_i^{m−1}}


Recap: Cayley Hamilton Theorem
Given A ∈ R^{n×n} with characteristic polynomial
∆(λ) = det(λI − A) = λ^n + α_{n−1}λ^{n−1} + … + α_1 λ + α_0,
∆(A) = A^n + α_{n−1}A^{n−1} + … + α_1 A + α_0 I = 0.
That is, A satisfies its own characteristic equation, so
A^n = −α_{n−1}A^{n−1} − α_{n−2}A^{n−2} − … − α_1 A − α_0 I.

Recap: Application of the Cayley Hamilton Theorem
Given A ∈ R^{n×n} with eigenvalues λ_i, let f(x) be an arbitrary scalar function and g(x) an (n − 1)st order polynomial. When the eigenvalues λ_i are distinct, solve for the β_i from the n equations:
f(λ_1) = g(β_{0:n−1}, λ_1)
f(λ_2) = g(β_{0:n−1}, λ_2)
⋮
f(λ_n) = g(β_{0:n−1}, λ_n)
When λ_i repeats m times, use the derivatives of f and g to populate the equations:
\tfrac{df(λ_i)}{dλ_i} = \tfrac{dg(λ_i)}{dλ_i}
\tfrac{d^2 f(λ_i)}{dλ_i^2} = \tfrac{d^2 g(λ_i)}{dλ_i^2}
⋮
\tfrac{d^{m−1} f(λ_i)}{dλ_i^{m−1}} = \tfrac{d^{m−1} g(λ_i)}{dλ_i^{m−1}}
Then calculate f(A) = g(A) given the β_i.