24-677 2019 Midterm 1 Cheat Sheet

Types of Systems
• Linear systems: f(αx + βy) = αf(x) + βf(y)
• Time-invariant systems: given y(t) = f(x(t)), y(t − τ) = f(x(t − τ))
• Causal systems: h(t) = 0 for t < 0

Normed Space Examples
Every inner product space (X, F, 〈·,·〉) generates a normed space, e.g. (ℝⁿ, ℝ) with x = (x1, …, xn)ᵀ:
• ∥x∥_2 = (Σ_{i=1}^n |xi|²)^{1/2} = 〈x, x〉^{1/2} (Euclidean norm – distance)
• ∥x∥_p = (Σ_{i=1}^n |xi|^p)^{1/p}
• ∥x∥_1 = Σ_{i=1}^n |xi|
• ∥x∥_∞ = max_i |xi|
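A quick numpy sanity check of these norms (a minimal sketch; the example vector is arbitrary):

    import numpy as np

    x = np.array([3.0, -4.0, 12.0])
    l2 = np.linalg.norm(x, 2)                  # (sum |x_i|^2)^(1/2)
    lp = np.sum(np.abs(x) ** 3) ** (1 / 3)     # p-norm with p = 3
    l1 = np.linalg.norm(x, 1)                  # sum |x_i|
    linf = np.linalg.norm(x, np.inf)           # max |x_i|
    # The 2-norm agrees with the inner-product form <x, x>^(1/2):
    assert np.isclose(l2, np.sqrt(np.dot(x, x)))
    print(l2, lp, l1, linf)                    # 13.0, ~12.2, 19.0, 12.0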
Linearization
Consider the nonlinear system:
ẋ = f(x, u),  f: ℝⁿ × ℝᵐ → ℝⁿ
x̄ ∈ ℝⁿ is an equilibrium point if ∃ū ∈ ℝᵐ s.t. f(x̄, ū) = 0.
Define the deviation variables:
δx = x − x̄,  δu = u − ū  ⇒  x = δx + x̄,  u = δu + ū
The deviations then evolve (to first order) as δẋ = A δx + B δu, where
A = ∂f/∂x |_{x=x̄, u=ū} ∈ ℝⁿˣⁿ,  B = ∂f/∂u |_{x=x̄, u=ū} ∈ ℝⁿˣᵐ
and ∂f/∂x is called the "Jacobian".
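A minimal numerical sketch of this recipe; the pendulum dynamics, the equilibrium, and the central-difference Jacobians are illustrative assumptions, not from the sheet:

    import numpy as np

    def f(x, u):                      # pendulum: x = [angle, rate], input torque u
        g, l = 9.81, 1.0
        return np.array([x[1], -(g / l) * np.sin(x[0]) + u[0]])

    def jacobians(f, x_bar, u_bar, eps=1e-6):
        n, m = len(x_bar), len(u_bar)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for j in range(n):            # A = df/dx at the equilibrium
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x_bar + dx, u_bar) - f(x_bar - dx, u_bar)) / (2 * eps)
        for j in range(m):            # B = df/du at the equilibrium
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x_bar, u_bar + du) - f(x_bar, u_bar - du)) / (2 * eps)
        return A, B

    x_bar, u_bar = np.array([0.0, 0.0]), np.array([0.0])
    assert np.allclose(f(x_bar, u_bar), 0)    # f(x_bar, u_bar) = 0
    A, B = jacobians(f, x_bar, u_bar)
    print(A, B)                               # A ~ [[0, 1], [-9.81, 0]], B ~ [[0], [1]]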
Orthogonal Complement Space
Let (V, F, 〈·,·〉) be an inner product space and let U ⊆ V be a subspace.
U⊥ = {v ∈ V : 〈u, v〉 = 0, ∀u ∈ U} is the orthogonal complement of U.

Sum of Spaces and Orthogonal Direct Sum
We can claim:
• U⊥ ⊆ V is a subspace
• V = U + U⊥ = U ⊕ U⊥, where + is called the sum of spaces and ⊕ is called the orthogonal direct sum, i.e.
∀x ∈ V, ∃x1 ∈ U, ∃x2 ∈ U⊥, x = x1 + x2, and dim(V) = dim(U) + dim(U⊥)

Gram-Schmidt Process
Given a basis {y1, …, yn}, find a new basis {v1, …, vn} that is orthonormal and is a basis for span{y1, …, yn}.
v1 = y1
v2 = y2 − a21 v1, and choose a21 such that 〈v1, v2〉 = 0 ⇒ a21 = 〈v1, y2〉 / ∥v1∥²
v3 = y3 − a31 v1 − a32 v2, and choose a31 and a32 such that
0 = 〈v1, v3〉 = 〈v1, y3〉 − a31〈v1, v1〉 − a32〈v1, v2〉 ⇒ a31 = 〈v1, y3〉 / ∥v1∥²
0 = 〈v2, v3〉 = 〈v2, y3〉 − a31〈v2, v1〉 − a32〈v2, v2〉 ⇒ a32 = 〈v2, y3〉 / ∥v2∥²
Orthogonal: vk = yk − Σ_{j=1}^{k−1} (〈vj, yk〉 / ∥vj∥²) vj
Orthonormal: normalize {v1, …, vn} as {v1/∥v1∥, v2/∥v2∥, …, vn/∥vn∥}
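The recursion transcribes directly into numpy (a sketch; the basis Y is an arbitrary example):

    import numpy as np

    def gram_schmidt(Y):
        """Columns of Y are the basis {y_1, ..., y_n}; returns orthonormal columns."""
        V = []
        for k in range(Y.shape[1]):
            v = Y[:, k].astype(float)
            for vj in V:                 # v_k = y_k - sum_j <v_j, y_k>/||v_j||^2 v_j
                v = v - (vj @ Y[:, k]) / (vj @ vj) * vj
            V.append(v)
        return np.column_stack([v / np.linalg.norm(v) for v in V])

    Y = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    Q = gram_schmidt(Y)
    assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal: <v_i, v_j> = delta_ij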
Fields
Let F be a set with at least 2 elements, and assume F has 2 operations:
"+": F × F → F (addition) and "·": F × F → F (multiplication). F is called a field if:
• A0: ∀α, β ∈ F, ∃α + β ∈ F ⇒ Closure under Addition
• A1: ∀α, β ∈ F, α + β = β + α ⇒ Commutativity
• A2: ∀α, β, γ ∈ F, (α + β) + γ = α + (β + γ) ⇒ Associativity
• A3: ∃0 ∈ F, ∀α ∈ F, α + 0 = α ⇒ Neutral
• A4: ∀α ∈ F, ∃(−α) ∈ F, α + (−α) = 0 ⇒ Inverse
• M0: ∀α, β ∈ F, ∃α · β ∈ F ⇒ Closure under Multiplication
• M1: ∀α, β ∈ F, α · β = β · α ⇒ Commutativity
• M2: ∀α, β, γ ∈ F, (α · β) · γ = α · (β · γ) ⇒ Associativity
• M3: ∃1 ∈ F, ∀α ∈ F, α · 1 = α ⇒ Neutral
• M4: ∀α ≠ 0, ∃α⁻¹ ∈ F, α · α⁻¹ = 1 ⇒ Inverse
• D: ∀α, β, γ ∈ F, α · (β + γ) = α · β + α · γ ⇒ Distributivity
Vector Space
Let F be a field and let V be a set with an "addition" operation "+": V × V → V. V is called a vector space over F if:
• A0: ∀x, y ∈ V, ∃x + y ∈ V ⇒ Closure under Addition
• A1: ∀x, y ∈ V, x + y = y + x ⇒ Commutativity
• A2: ∀x, y, z ∈ V, (x + y) + z = x + (y + z) ⇒ Associativity
• A3: ∃0 ∈ V, ∀x ∈ V, x + 0 = x ⇒ Neutral
• A4: ∀x ∈ V, ∃(−x) ∈ V, x + (−x) = 0 ⇒ Inverse
• SM0: ∀α ∈ F, ∀x ∈ V, ∃α · x ∈ V ⇒ Closure under Scalar Multiplication
• SM1: ∀α, β ∈ F, ∀x ∈ V, (α · β)x = α(β · x) ⇒ Scalar Associativity
• SM2: ∀α ∈ F, ∀x, y ∈ V, α(x + y) = αx + αy ⇒ Scalar-Vector Distributive
• SM3: ∀α, β ∈ F, ∀x ∈ V, (α + β)x = αx + βx ⇒ Vector-Scalar Distributive
• SM4: ∀x ∈ V, 1 · x = x ⇒ Neutral
Inner Products
Let (X, F) be a vector space with F = ℝ or ℂ. A function 〈·,·〉: X × X → F is an inner product if:
(1) Conjugate symmetry: 〈x, y〉 = 〈y, x〉* (where * denotes complex conjugation)
(2) Linearity in the first argument: 〈ax, y〉 = a〈x, y〉 and 〈x + y, z〉 = 〈x, z〉 + 〈y, z〉
(3) Positive definiteness: 〈x, x〉 ≥ 0, and 〈x, x〉 = 0 ⇔ x = 0
A vector space with a well-defined inner product is called an inner product (vector) space, denoted as (X, F, 〈·,·〉).
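A small numerical check of the three properties on ℂⁿ (a sketch; note np.vdot conjugates its first argument, so 〈u, v〉 below is vdot(v, u)):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3)]
    a = 2.0 - 1.5j
    ip = lambda u, v: np.vdot(v, u)   # <u, v>, linear in the first argument

    assert np.isclose(ip(x, y), np.conj(ip(y, x)))              # conjugate symmetry
    assert np.isclose(ip(a * x, y), a * ip(x, y))               # homogeneity in 1st arg
    assert np.isclose(ip(x + y, z), ip(x, z) + ip(y, z))        # additivity in 1st arg
    assert ip(x, x).real >= 0 and np.isclose(ip(x, x).imag, 0)  # positive definiteness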
Summary of Linear Space
[Concept-map figure: a field F builds a vector space (V, F) with scalar multiplication (SM0:4); subspaces (X, F) ⊂ (V, F); coordinates x = Σ_{i=1}^n βi vi; linear combinations Σ_{i=1}^n αi xi and independence (Σ_{i=1}^n αi xi = 0 → α = 0); inner product 〈x, y〉 (xᵀy for x, y ∈ ℝⁿ) and norm ∥x∥; orthogonal complement X⊥ ∩ X = {0}, X ⊕ X⊥ = V; vector projection x → u, u ∈ U; orthogonal/orthonormal basis via change of basis (Gram-Schmidt); change of space between (X1, F) and (X2, F): transformation.]

Highlights of Gaussian Elimination
Use elementary row operations to reduce the augmented matrix to a form such that:
• The first nonzero entry of each row is to the right of the first nonzero entry of the preceding row. Simply put, the coefficient part (corresponding to A) of the augmented matrix should form an n × n upper triangular matrix.
• Any zero row is at the bottom of the matrix.
Gaussian elimination uses the row-reduced form of the augmented matrix to compactly solve a given system of linear equations.
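A bare-bones numpy sketch of the reduction (partial pivoting is an extra assumption beyond the sheet), followed by a solve on the triangular part:

    import numpy as np

    def row_echelon(W):
        W = W.astype(float)
        m, n = W.shape
        row = 0
        for col in range(n):
            if row == m:
                break
            p = row + np.argmax(np.abs(W[row:, col]))     # partial pivoting
            if np.isclose(W[p, col], 0):
                continue                                  # no pivot in this column
            W[[row, p]] = W[[p, row]]                     # swap rows
            W[row + 1:] -= np.outer(W[row + 1:, col] / W[row, col], W[row])
            row += 1
        return W

    A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
    y = np.array([8.0, -11.0, -3.0])
    W = row_echelon(np.column_stack([A, y]))              # W = [A | y]
    x = np.linalg.solve(W[:, :3], W[:, 3])                # triangular solve
    print(x)                                              # [ 2.  3. -1.]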
Norm and Normed Space
A linear space (X, F) is called normed if a real-valued function of x ∈ X, denoted ∥x∥, can be defined with the following properties:
(1) Positive definiteness: ∥x∥ is non-negative for every x, and ∥x∥ = 0 if and only if x = 0
(2) Absolute homogeneity: ∥αx∥ = |α| ∥x∥ for any scalar α
(3) Triangle inequality: ∥x1 + x2∥ ≤ ∥x1∥ + ∥x2∥

Angle between vectors and Orthogonality
Inspired by geometric space, the angle between two vectors is defined as:
∠(x, y) = arccos( 〈x, y〉 / (∥x∥ ∥y∥) )
Two vectors are orthogonal if 〈x, y〉 = 0; a set {x1, …, xn} is orthonormal if, in addition, 〈xi, xi〉 = 1, 1 ≤ i ≤ n.

Determinant
The determinant is a function det: ℂⁿˣⁿ → ℂ.
Let Pn be the set of permutations of {1, 2, 3, …, n}, with |Pn| = n! = Π_{k=1}^n k arrays. Let pi be the ith array in Pn, and define its signature σ(pi) = (−1)^{t(pi)}, where t(pi) is the number of pairwise swaps taking (1, …, n) to pi. Define the determinant of A as
det(A) = Σ_{pi ∈ Pn} σ(pi) Π_{j=1}^n a_{j, pi(j)}
where a_{ij} is the entry in the ith row and jth column of A and pi(j) is the jth item in pi.
Note: The determinant determines whether A has full rank; thus it also determines whether Ax = y has a unique solution.

Properties of Determinant
Given A, B ∈ ℂⁿˣⁿ: det(A) ≠ 0 ⇐⇒ A is invertible ⇐⇒ A has full rank ⇐⇒ A is nonsingular ⇐⇒ the rows (or columns) are independent.
Properties:
• det(αA) = αⁿ det(A)
• det(Aᵀ) = det(A)
• det(I) = 1
• det(AB) = det(A) det(B)
• det(A⁻¹) = 1/det(A)
• If A is a triangular matrix, det(A) = Π diag(A)
Note: that is why |E_{p,q}(α)| = 1
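The permutation-sum definition transcribes directly (exponential cost, intuition only); a sketch checked against np.linalg.det and two of the properties:

    import itertools
    import numpy as np

    def det_by_permutations(A):
        n = A.shape[0]
        total = 0.0
        for p in itertools.permutations(range(n)):
            # signature = (-1)^(number of inversions in p)
            inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
            total += (-1) ** inv * np.prod([A[j, p[j]] for j in range(n)])
        return total

    A = np.random.default_rng(1).standard_normal((4, 4))
    assert np.isclose(det_by_permutations(A), np.linalg.det(A))
    assert np.isclose(np.linalg.det(3 * A), 3 ** 4 * np.linalg.det(A))  # det(aA) = a^n det(A)
    assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))             # det(A^T) = det(A)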
Invariant Subspaces
Let (V, F) be a vector space, let A: V → V be linear, and let U ⊆ V be a subspace. We say that U is invariant iff:
∀u ∈ U, A(u) ∈ U.
V is simple if its only invariant subspaces are V and {0}. V is simple iff V is a space spanned by one eigenvector of A.
V is semisimple if it is a direct sum of one-dimensional invariant subspaces, i.e.
V = span{v1} ⊕ span{v2} ⊕ ··· ⊕ span{vn} = span{v1, …, vn},
i.e. there are n eigenvectors that span the n-dimensional V.
Characteristic polynomial:
φ(λ) = det(λI − A) = |λI − A|
φ(λ) is monic and of degree n. Let λ1, λ2, …, λn be its roots, i.e.
φ(λ) = (λ − λ1)(λ − λ2)···(λ − λn)
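np.poly returns the coefficients of det(λI − A), whose roots are the eigenvalues (a sketch on an arbitrary matrix):

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 3.0]])
    coeffs = np.poly(A)                 # [1, -5, 6]  <=>  lambda^2 - 5*lambda + 6
    roots = np.roots(coeffs)            # 3, 2
    assert np.allclose(np.sort(roots.real), np.sort(np.linalg.eigvals(A).real))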
Solutions of Linear Algebraic Equations
Consider Ax = y, where A ∈ ℂᵐˣⁿ and y ∈ ℂᵐ are given and x ∈ ℂⁿ contains the unknowns. Let W = [A | y].
• Existence: a solution exists iff y ∈ R(A) ⇔ ρ(A) = ρ(W)
• Uniqueness: a unique solution exists iff ρ(A) = ρ(W) = n ⇔ the columns of A are linearly independent ⇔ A is injective
• General solution: x = xp + xh, with Axp = y and Axh = 0
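The two rank tests in numpy (a sketch; the example matrices are arbitrary):

    import numpy as np

    A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])   # m = 3, n = 2, rank 2
    y_in = A @ np.array([1.0, 1.0])                      # y in R(A)
    y_out = np.array([1.0, 0.0, 1.0])                    # y not in R(A)

    for y in (y_in, y_out):
        W = np.column_stack([A, y])                      # W = [A | y]
        rho_A = np.linalg.matrix_rank(A)
        rho_W = np.linalg.matrix_rank(W)
        exists = rho_A == rho_W                          # rho(A) = rho(W)
        unique = exists and rho_A == A.shape[1]          # ... = n
        print(exists, unique)                            # True True, then False False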
Geometric and algebraic multiplicities
Recall the characteristic polynomial of a matrix: ∆(λ) = det(λI − A).
Write ∆(λ) = (λ − λ1)^{m1} (λ − λ2)^{m2} ··· (λ − λp)^{mp}, where λ1, …, λp are distinct and m1 + m2 + ··· + mp = n. mi is called the algebraic multiplicity of λi. The geometric multiplicity of λi is
qi = nullity of (A − λi I) = dim N(A − λi I) = # of linearly independent eigenvectors associated with λi
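Both multiplicities for a defective example (a sketch):

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 2.0]])                # a 2x2 Jordan block
    lam = 2.0
    m = np.sum(np.isclose(np.linalg.eigvals(A), lam))     # algebraic multiplicity
    # geometric multiplicity: q = dim N(A - lam*I) = n - rank(A - lam*I)
    q = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
    print(m, q)                                           # 2 1, so A is not semisimple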
Summary of Linear Transformation
[Four-fundamental-subspaces figure for A: X → Y: the row space R(Aᵀ) (basis v1, …, vr) and the nullspace N(A) (basis vr+1, …, vn, dimension n − r) decompose X; the column space R(A) (basis u1, …, ur) and the left nullspace N(Aᵀ) (basis ur+1, …, um, dimension m − r) decompose Y. Any x splits as x = xr + xn with xr ∈ R(Aᵀ) and xn ∈ N(A), and Ax = Axr = y, Axn = 0.]
• T1: dim R(A) + dim N(A) = dim X
• T2: R(Aᵀ) ⊕ N(A) = X
• T3: dim R(Aᵀ) = dim R(A)
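Orthonormal bases for the four subspaces can be read off the SVD (a numpy sketch on a rank-1 example):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])            # m = 2, n = 3, rank 1
    U, s, Vt = np.linalg.svd(A)
    r = np.sum(s > 1e-10)

    col_space = U[:, :r]       # R(A),   dim r
    left_null = U[:, r:]       # N(A^T), dim m - r
    row_space = Vt[:r].T       # R(A^T), dim r
    null_space = Vt[r:].T      # N(A),   dim n - r

    assert np.allclose(A @ null_space, 0)              # A x_n = 0
    assert np.allclose(left_null.T @ A, 0)
    assert r + null_space.shape[1] == A.shape[1]       # T1: dim R(A) + dim N(A) = dim X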
Jordan Canonical Form
Theorem (Jordan Canonical Form): Let A be an n × n matrix with eigenvalues λ1, …, λp of algebraic multiplicities m1, …, mp and geometric multiplicities q1, …, qp. Then ∃ an invertible matrix M such that A = M Â M⁻¹, where
Â = blkdiag(Â1, Â2, …, Âp),  #blocks = #distinct e-values,
Âi = blkdiag(Âi1, Âi2, …, Âi,qi) ∈ ℂ^{mi×mi},  #blocks = #indep e-vectors assoc. with λi,
and each Jordan block Âij has λi on its diagonal, 1s on its superdiagonal, and 0s elsewhere. The dimension of each block Âij is not fixed, but the block dimensions sum to mi.

Change of Basis / Linear Transformation vs Change of Basis
[Two parallel diagrams:]
• Linear transformation: x, {v1:n} ∈ X → y, {u1:m} ∈ Y; taking coordinates [·]v and [·]u gives α ∈ Fⁿ →(A) β ∈ Fᵐ. The jth column of A contains the coordinates of A(vj) in {u1, …, um}.
• Change of basis: x, {v1:n} ∈ X → x̂, {v̂1:n} ∈ X; taking coordinates [·]v and [·]v̂ gives α ∈ Fⁿ →(Â) α̂ ∈ Fⁿ. The jth column of Â contains the coordinates of vj in {v̂1, …, v̂n}.

Singular Value Decomposition
Any A ∈ ℂᵐˣⁿ factors as A = U Σ V*, where U ∈ ℂᵐˣᵐ and V ∈ ℂⁿˣⁿ are unitary and Σ ∈ ℝᵐˣⁿ is diagonal with the singular values σ1 ≥ σ2 ≥ ··· ≥ 0 on its diagonal.
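A numpy check of the factorization (a sketch; the matrix is arbitrary):

    import numpy as np

    A = np.random.default_rng(2).standard_normal((3, 4))
    U, s, Vh = np.linalg.svd(A)                # full SVD: U is 3x3, Vh is 4x4
    Sigma = np.zeros(A.shape)
    Sigma[:len(s), :len(s)] = np.diag(s)
    assert np.allclose(A, U @ Sigma @ Vh)      # A = U Sigma V*
    assert np.allclose(U.T @ U, np.eye(3)) and np.allclose(Vh @ Vh.T, np.eye(4))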
Similar Matrices
Two square matrices A and Â are similar if there exists an invertible matrix P such that
A = P Â P⁻¹
[Diagram: A maps [x]{ui} → [y]{ui} and Â maps [x]{vi} → [y]{vi}; both represent the same map x ∈ ℂⁿ → y ∈ ℂⁿ, with the coordinate vectors connected by P⁻¹ and P, where
P = [ [v1]{ui}, …, [vn]{ui} ] ]
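Similar matrices represent the same map in different bases, so they share the characteristic polynomial and eigenvalues (a sketch; the matrices are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    A_hat = np.diag([1.0, 2.0, 3.0])
    P = rng.standard_normal((3, 3))                 # generically invertible
    A = P @ A_hat @ np.linalg.inv(P)                # A = P A_hat P^-1
    assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])
    assert np.allclose(np.poly(A), np.poly(A_hat))  # same characteristic polynomial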
Diagonalization
If A is semisimple, the matrix representation of A in the basis of its eigenvectors {v1, …, vn} is diagonal:
A = M Λ M⁻¹
[Diagram: A maps [x]{ei} → [y]{ei} and Λ maps [x]{vi} → [y]{vi}, connected by M⁻¹ and M, x ∈ ℂⁿ → y ∈ ℂⁿ.]
M = [ [v1]{ei}, …, [vn]{ei} ] = [v1, …, vn],  A vi = λi vi

Cayley Hamilton Theorem
Given A ∈ ℂⁿˣⁿ with characteristic polynomial
∆(λ) = det(λI − A) = λⁿ + α_{n−1} λⁿ⁻¹ + … + α1 λ + α0,
then ∆(A) = Aⁿ + α_{n−1} Aⁿ⁻¹ + … + α1 A + α0 I = 0.
That is, A satisfies its own characteristic equation.
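A numerical check of ∆(A) = 0, evaluating the characteristic polynomial at A by Horner's method (a sketch):

    import numpy as np

    A = np.random.default_rng(4).standard_normal((4, 4))
    coeffs = np.poly(A)                 # [1, alpha_{n-1}, ..., alpha_1, alpha_0]
    Delta_A = np.zeros_like(A)
    for c in coeffs:                    # Horner: Delta_A <- Delta_A @ A + c*I
        Delta_A = Delta_A @ A + c * np.eye(4)
    assert np.allclose(Delta_A, 0, atol=1e-8)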
Application of Cayley Hamilton Theorem
Aⁿ = −α_{n−1} Aⁿ⁻¹ − α_{n−2} Aⁿ⁻² − … − α1 A − α0 I
Given A ∈ ℂⁿˣⁿ with eigenvalues λi, let f(x) be an arbitrary scalar function and g(x) an (n−1)st-order polynomial with coefficients βi. When the eigenvalues λi are distinct, solve for the βi from the n equations:
f(λ1) = g(λ1)
f(λ2) = g(λ2)
⋮
f(λn) = g(λn)
When λi repeats mi times, use the derivatives of f and g to populate the equations:
df(λi)/dλ = dg(λi)/dλ,  d²f(λi)/dλ² = d²g(λi)/dλ², …, d^{mi−1}f(λi)/dλ^{mi−1} = d^{mi−1}g(λi)/dλ^{mi−1}
Then calculate f(A) = g(A) given the βi.
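A sketch of the distinct-eigenvalue case with f = exp, solving the Vandermonde system for the βi and checking against scipy.linalg.expm:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigenvalues -1, -2 (distinct)
    lam = np.linalg.eigvals(A)
    n = len(lam)
    V = np.vander(lam, n, increasing=True)       # rows [1, lam_i, lam_i^2, ...]
    beta = np.linalg.solve(V, np.exp(lam))       # enforce g(lam_i) = f(lam_i)
    f_A = sum(b * np.linalg.matrix_power(A, k) for k, b in enumerate(beta))
    assert np.allclose(f_A, expm(A))             # f(A) = g(A)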
Jordan Canonical Form
For a general A (not necessarily semisimple), the matrix representation of A in the basis of its eigenvectors and generalized eigenvectors {v1, …, vn} is the Jordan form J:
A = M J M⁻¹
[Diagram: A maps [x]{ei} → [y]{ei} and J maps [x]{vi} → [y]{vi}, connected by M⁻¹ and M, x ∈ ℂⁿ → y ∈ ℂⁿ.]
M = [ [v1]{ei}, …, [vn]{ei} ] = [v1, …, vn], where the vi are eigenvectors and generalized eigenvectors.
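sympy computes the pair (M, J) exactly; a sketch on a defective 4 × 4 example (the matrix is an arbitrary illustration):

    import sympy as sp

    A = sp.Matrix([[5, 4, 2, 1],
                   [0, 1, -1, -1],
                   [-1, -1, 3, 0],
                   [1, 1, -1, 2]])
    M, J = A.jordan_form()        # returns (M, J) with A = M * J * M**-1
    sp.pprint(J)                  # block-diagonal Jordan form
    assert A == M * J * M.inv()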