ECS130 Scientific Computation Handout F February 15, 2017
1. Singular Value Decomposition (SVD)
Let A be an m-by-n matrix with m ≥ n.¹ Then we can write
A = UΣV^T,
where U is an m-by-n matrix with orthonormal columns (U^T U = In), V is an n-by-n orthogonal matrix (V^T V = In), and Σ = diag(σ1, σ2, . . . , σn) with σ1 ≥ σ2 ≥ · · · ≥ σn ≥ 0.
σ1, σ2, . . . , σn are called the singular values of A. The columns {ui} of U are called the left singular vectors of A, and the columns {vi} of V are called the right singular vectors.
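As an illustrative sketch (in Python/NumPy rather than the handout's Matlab; the matrix A below is an arbitrary example), the factorization and the orthonormality conditions can be checked numerically:

```python
import numpy as np

# A small m-by-n example with m >= n; the entries are arbitrary.
A = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])  # m = 3, n = 2

# "Economy" SVD: U is m-by-n, S holds the n singular values, Vt = V^T.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

print(S)                                      # singular values, in descending order
print(np.allclose(A, U @ np.diag(S) @ Vt))    # A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(2)))        # U^T U = I_n
print(np.allclose(Vt @ Vt.T, np.eye(2)))      # V^T V = I_n
```

NumPy returns the singular values already sorted in descending order, matching the convention σ1 ≥ σ2 ≥ · · · ≥ σn above.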
2. Connection/difference between eigenvalues and singular values.
(a) The eigenvalues of A^T A are σi², i = 1, 2, . . . , n. The corresponding eigenvectors are the right singular vectors vi, i = 1, 2, . . . , n.
(b) The eigenvalues of AA^T are σi², i = 1, 2, . . . , n, together with m − n zeros. The left singular vectors ui, i = 1, 2, . . . , n, are the corresponding eigenvectors for the eigenvalues σi². For the zero eigenvalues, one can take any m − n mutually orthonormal vectors that are orthogonal to u1, u2, . . . , un.
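A minimal numerical check of this connection (same illustrative 3-by-2 matrix as before; Python/NumPy used for the sketch):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])  # m = 3, n = 2

sigma = np.linalg.svd(A, compute_uv=False)      # singular values, descending

# Eigenvalues of A^T A, sorted descending: they equal sigma_i^2.
eigs = np.linalg.eigvalsh(A.T @ A)[::-1]
print(np.allclose(eigs, sigma**2))

# A A^T has the same nonzero eigenvalues, plus m - n = 1 zero.
eigs_big = np.linalg.eigvalsh(A @ A.T)[::-1]
print(np.allclose(eigs_big[:2], sigma**2))
print(np.allclose(eigs_big[2:], 0.0))
```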
3. Suppose that A has full column rank. Then the pseudo-inverse can also be written as²
A+ ≡ (A^T A)^{-1} A^T = V Σ^{-1} U^T.
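The two expressions for A+ can be compared directly (illustrative Python/NumPy sketch; the example matrix has full column rank, so Σ^{-1} exists):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])   # full column rank, so all sigma_i > 0

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# A+ = V Sigma^{-1} U^T
A_pinv = Vt.T @ np.diag(1.0 / S) @ U.T

# Agrees with the normal-equations form (A^T A)^{-1} A^T ...
print(np.allclose(A_pinv, np.linalg.inv(A.T @ A) @ A.T))
# ... and with NumPy's built-in pseudo-inverse.
print(np.allclose(A_pinv, np.linalg.pinv(A)))
# A+ is a left inverse: A+ A = I_n.
print(np.allclose(A_pinv @ A, np.eye(2)))
```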
4. Suppose that
σ1 ≥ σ2 ≥ · · · ≥ σr > σr+1 = · · · = σn = 0.
Then
(a) the rank of A is r,
(b) the range of A is spanned by [u1, u2, · · · , ur].
(c) the nullspace of A is spanned by [vr+1, vr+2, . . . , vn].
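In floating-point arithmetic the "zero" singular values are only approximately zero, so the numerical rank is the number of singular values above a small tolerance. A sketch with a rank-one example (Python/NumPy; the tolerance 1e-12 is an illustrative choice):

```python
import numpy as np

# A rank-1 matrix: every column is a multiple of [1, 2, 3]^T.
A = np.outer([1.0, 2.0, 3.0], [1.0, 1.0])   # 3-by-2, rank r = 1

U, S, Vt = np.linalg.svd(A, full_matrices=True)

# Numerical rank: count singular values above a tolerance.
r = int(np.sum(S > 1e-12))
print(r)   # 1

# v_{r+1}, ..., v_n span the nullspace: A v = 0 for those columns of V.
null_vecs = Vt[r:].T
print(np.allclose(A @ null_vecs, 0.0))
```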
5. ‖A‖2 = σ1 = √(λmax(A^T A)).
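This identity is easy to verify numerically (illustrative Python/NumPy sketch, same example matrix as earlier):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])

sigma1 = np.linalg.svd(A, compute_uv=False)[0]

# ||A||_2 (the spectral norm) equals the largest singular value ...
print(np.isclose(np.linalg.norm(A, 2), sigma1))
# ... which equals sqrt of the largest eigenvalue of A^T A.
print(np.isclose(sigma1, np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))))
```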
6. Assume rank(A) = r, then the SVD of A can be rewritten as
A = E1 + E2 + · · ·+ Er
where Ek, for k = 1, 2, . . . , r, is a rank-one matrix of the form
Ek = σk uk vk^T,
and is referred to as the k-th component matrix.
¹If m < n, the SVD can be defined by considering A^T.
²If m < n, then A+ = A^T (AA^T)^{-1}.
Component matrices are orthogonal to each other, i.e., Ej Ek^T = 0 for j ≠ k. Furthermore, since ‖Ek‖2 = σk, we know that
‖E1‖2 ≥ ‖E2‖2 ≥ · · · ≥ ‖Er‖2.
This means that the contribution each Ek makes to reproducing A is determined by the size of the singular value σk.
7. Optimal rank-k approximation:
min { ‖A − B‖2 : B is m-by-n, rank(B) = k } = ‖A − Ak‖2 = σk+1,
where Ak = E1 + E2 + · · · + Ek. Note that Ak can be rewritten as
Ak = Uk Σk Vk^T,
where Σk = diag(σ1, σ2, . . . , σk), and Uk and Vk are the first k columns of U and V, respectively.
8. The technique of using the leading k components of A to analyze the data in the matrix A is called Principal Component Analysis (PCA).
9. An application of PCA is lossy data compression. Note that Ak can be represented by mk + k + nk = (m + n + 1)k elements; in contrast, A is represented by mn elements. Therefore, we have
compression ratio = (m + n + 1)k / (mn).
Matlab script: svd4image.m
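The rank-k approximation, its error σk+1, and the compression-ratio count can all be checked numerically. A sketch in Python/NumPy (the handout's svd4image.m applies the same idea to an image; here A is just a random matrix and k = 2 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))            # m = 8, n = 6
U, S, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# A_k = U_k Sigma_k V_k^T: the sum of the k leading component matrices.
Ak = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# The 2-norm error of the best rank-k approximation is sigma_{k+1}.
err = np.linalg.norm(A - Ak, 2)
print(np.isclose(err, S[k]))               # S[k] is sigma_{k+1} (0-based index)

# Storage: (m + n + 1) * k numbers instead of m * n.
m, n = A.shape
print((m + n + 1) * k / (m * n))           # compression ratio
```

Storing Uk, Σk's diagonal, and Vk costs mk + k + nk numbers, which is why the ratio (m + n + 1)k / (mn) is below 1 whenever k is small relative to the matrix dimensions.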