Chapter 1
Mathematical Background:
Linear Algebra
Multiple View Geometry
Summer 2021
Prof. Daniel Cremers
Chair for Computer Vision and Artificial Intelligence
Departments of Informatics & Mathematics
Technical University of Munich
Overview
1 Vector Spaces
2 Linear Transformations and Matrices
3 Properties of Matrices
4 Singular Value Decomposition
Vector Space (Vektorraum)
A set V is called a linear space or a vector space over the field
R if it is closed under vector summation
+ : V × V → V
and under scalar multiplication
· : R × V → V,
i.e. αv1 + βv2 ∈ V ∀v1, v2 ∈ V, ∀α, β ∈ R. With respect to
addition (+) it forms a commutative group (existence of neutral
element 0, inverse element −v ). Scalar multiplication respects
the structure of R: α(βu) = (αβ)u. Multiplication and addition
respect the distributive law:
(α + β)v = αv + βv and α(v + u) = αv + αu.
Example: V = Rn, v = (x1, . . . , xn)>.
A subset W ⊂ V of a vector space V is called a subspace if
0 ∈ W and W is closed under + and · (for all α ∈ R).
Linear Independence and Basis
The spanned subspace of a set of vectors
S = {v1, . . . , vk} ⊂ V is the subspace formed by all linear
combinations of these vectors:
span(S) = { v ∈ V | v = ∑_{i=1}^k αi vi }.
The set S is called linearly independent if:
∑_{i=1}^k αi vi = 0 ⇒ αi = 0 ∀i,
in other words: if none of the vectors can be expressed as a
linear combination of the remaining vectors. Otherwise the set
is called linearly dependent.
A set of vectors B = {v1, . . . , vn} is called a basis of V if it is
linearly independent and if it spans the vector space V . A basis
is a maximal set of linearly independent vectors.
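Numerically, linear independence of a finite set of vectors in Rn can be checked by comparing the rank of the matrix whose columns are the vectors with the number of vectors. A minimal NumPy sketch (the vectors are illustrative):

```python
import numpy as np

# Columns of S are the vectors v1, ..., vk in R^3.
S = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

k = S.shape[1]
# The set is linearly independent iff rank(S) equals the number of vectors.
independent = np.linalg.matrix_rank(S) == k
print(independent)   # False: the third column is the sum of the first two
```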
Properties of a Basis
Let B and B′ be two bases of a linear space V .
1 B and B′ contain the same number of vectors. This
number n is called the dimension of the space V .
2 Any vector v ∈ V can be uniquely expressed as a linear
combination of the basis vectors in B = {b1, . . . ,bn}:
v = ∑_{i=1}^n αi bi.
3 In particular, all vectors of B can be expressed as linear
combinations of the vectors of another basis b′i ∈ B′:
b′i = ∑_{j=1}^n αji bj.
The coefficients αji for this basis transform can be
combined in a matrix A. Setting B ≡ (b1, . . . , bn) and
B′ ≡ (b′1, . . . , b′n) as the matrices of basis vectors, we can
write: B′ = BA ⇔ B = B′A−1.
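As an illustration with two hypothetical bases of R2, the coefficient matrix A of the basis transform can be obtained by solving B A = B′, i.e. A = B−1B′. A NumPy sketch:

```python
import numpy as np

# Two (assumed) bases of R^2, stored as columns of B and Bp.
B  = np.array([[1.0, 0.0],
               [0.0, 1.0]])          # canonical basis
Bp = np.array([[1.0, 1.0],
               [0.0, 1.0]])          # another basis

# Coefficients of the basis transform: B' = B A  <=>  A = B^{-1} B'.
A = np.linalg.solve(B, Bp)

print(np.allclose(Bp, B @ A))                  # True: B' = B A
print(np.allclose(B, Bp @ np.linalg.inv(A)))   # True: B = B' A^{-1}
```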
Inner Product
On a vector space one can define an inner product (dot
product, dt.: Skalarprodukt ≠ skalare Multiplikation):
〈·, ·〉 : V × V → R
which is defined by three properties:
1 〈u, αv + βw〉 = α〈u, v〉+ β〈u,w〉 (linear)
2 〈u, v〉 = 〈v ,u〉 (symmetric)
3 〈v , v〉 ≥ 0 and 〈v , v〉 = 0⇔ v = 0 (positive definite)
The scalar product induces a norm
|·| : V → R, |v| = √〈v, v〉
and a metric
d : V × V → R, d(v, w) = |v − w| = √〈v − w, v − w〉
for measuring lengths and distances, making (V, d) a metric
space. Since the metric is induced by a scalar product, V is
called a Hilbert space.
Canonical and Induced Inner Product
On V = Rn, one can define the canonical inner product for the
canonical basis B = In as
〈x, y〉 = x>y = ∑_{i=1}^n xi yi
which induces the standard L2-norm or Euclidean norm
|x|2 = √(x>x) = √(x1² + · · · + xn²).
With a basis transform A to the new basis B′ given by
I = B′A−1, the canonical inner product in the new coordinates
x′, y′ is given by:
〈x, y〉 = x>y = (A−1x′)>(A−1y′) = x′> A−>A−1 y′ ≡ 〈x′, y′〉_{A−>A−1}
The latter product is called the induced inner product from the
matrix A.
Two vectors v and w are orthogonal iff 〈v ,w〉 = 0.
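A quick NumPy check of the identity above, with an arbitrary (assumed invertible) basis transform A and new coordinates x′ = Ax, y′ = Ay:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))      # assumed invertible basis transform
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Coordinates with respect to the transformed basis.
xp, yp = A @ x, A @ y

Ainv = np.linalg.inv(A)
lhs = x @ y                          # canonical inner product <x, y>
rhs = xp @ (Ainv.T @ Ainv) @ yp      # induced inner product <x', y'>_{A^{-T}A^{-1}}
print(np.allclose(lhs, rhs))         # True
```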
Kronecker Product and Stack of a Matrix
Given two matrices A ∈ Rm×n and B ∈ Rk×l , one can define
their Kronecker product A⊗ B by:
A ⊗ B ≡ [a11B · · · a1nB ; . . . ; am1B · · · amnB] ∈ Rmk×nl
(the block matrix whose (i, j) block is aijB).
In Matlab this can be implemented by C=kron(A,B).
Given a matrix A ∈ Rm×n, its stack As is obtained by stacking
its n column vectors a1, . . . ,an ∈ Rm:
As ≡ (a1; . . . ; an) ∈ Rmn.
These notations allow us to rewrite algebraic expressions, for
example:
u> A v = (v ⊗ u)> As.
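A short NumPy check of this identity (sizes chosen arbitrarily); note that the stack As corresponds to column-major flattening of A:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))
u = rng.standard_normal(m)
v = rng.standard_normal(n)

As = A.flatten(order='F')          # stack of A: columns a1, ..., an on top of each other

lhs = u @ A @ v
rhs = np.kron(v, u) @ As           # (v ⊗ u)^T A^s
print(np.allclose(lhs, rhs))       # True
```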
Linear Transformations and Matrices
Linear algebra studies the properties of linear transformations
between linear spaces. Since these can be represented by
matrices, linear algebra studies the properties of matrices.
A linear transformation L between two linear spaces V and W
is a map L : V →W such that:
• L(x + y) = L(x) + L(y) ∀x , y ∈ V
• L(αx) = αL(x) ∀x ∈ V , α ∈ R.
Due to the linearity, the action of L on the space V is uniquely
defined by its action on the basis vectors of V . In the canonical
basis {e1, . . . ,en} we have:
L(x) = Ax ∀x ∈ V ,
where
A = (L(e1), . . . ,L(en)) ∈ Rm×n.
The set of all real m × n-matrices is denoted by M(m,n). In
the case that m = n, the set M(m,n) ≡ M(n) forms a ring
over the field R, i.e. it is closed under matrix multiplication and
summation.
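As a sketch with a hypothetical linear map L, the matrix A can be assembled column by column from the images of the canonical basis vectors:

```python
import numpy as np

# An example linear map L : R^3 -> R^2 (hypothetical, for illustration).
def L(x):
    return np.array([2.0 * x[0] - x[2], x[1] + 3.0 * x[2]])

# Assemble A = (L(e1), ..., L(en)) column by column.
n = 3
A = np.column_stack([L(e) for e in np.eye(n)])

x = np.array([1.0, -2.0, 0.5])
print(np.allclose(L(x), A @ x))    # True: L(x) = A x for all x
```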
The Linear Groups GL(n) and SL(n)
There exist certain sets of linear transformations which form a
group.
A group is a set G with an operation ◦ : G ×G→ G such that:
1 closed: g1 ◦ g2 ∈ G ∀g1,g2 ∈ G,
2 assoc.: (g1 ◦ g2) ◦ g3 = g1 ◦ (g2 ◦ g3) ∀g1,g2,g3 ∈ G,
3 neutral: ∃e ∈ G : e ◦ g = g ◦ e = g ∀g ∈ G,
4 inverse: ∃g−1 ∈ G : g ◦ g−1 = g−1 ◦ g = e ∀g ∈ G.
Example: All invertible (non-singular) real n × n-matrices form
a group with respect to matrix multiplication. This group is
called the general linear group GL(n). It consists of all
A ∈M(n) for which
det(A) ≠ 0.
All matrices A ∈ GL(n) for which det(A) = 1 form a group
called the special linear group SL(n). The inverse of A is also
in this group, since det(A−1) = det(A)−1 = 1.
Matrix Representation of Groups
A group G has a matrix representation (dt.: Darstellung) or can
be realized as a matrix group if there exists an injective
transformation:
R : G→ GL(n)
which preserves the group structure of G, that is inverse and
composition are preserved by the map:
R(e) = In×n, R(g ◦ h) = R(g)R(h) ∀g,h ∈ G
Such a map R is called a group homomorphism (dt.
Homomorphismus).
The idea of matrix representations of a group is that they allow
one to analyze more abstract groups by looking at the properties of
the respective matrix group. Example: The rotations of an
object form a group, as there exists a neutral element (no
rotation) and an inverse (the inverse rotation) and any
concatenation of rotations is again a rotation (around a
different axis). Studying the properties of the rotation group is
easier if rotations are represented by respective matrices.
The Affine Group A(n)
An affine transformation L : Rn → Rn is defined by a matrix
A ∈ GL(n) and a vector b ∈ Rn such that:
L(x) = Ax + b
The set of all such affine transformations is called the affine
group of dimension n, denoted by A(n).
L defined above is not a linear map unless b = 0. By
introducing homogeneous coordinates to represent x ∈ Rn by
(x; 1) ∈ Rn+1, L becomes a linear mapping
L : Rn+1 → Rn+1; (x; 1) ↦ [A b; 0 1] (x; 1).
A matrix [A b; 0 1] with A ∈ GL(n) and b ∈ Rn is called an affine
matrix. It is an element of GL(n + 1). The affine matrices form
a subgroup of GL(n + 1). Why?
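A small NumPy sketch (with arbitrary A and b, and a hypothetical helper affine_matrix) of the homogeneous embedding; it also illustrates that the product of two affine matrices is again affine:

```python
import numpy as np

def affine_matrix(A, b):
    """Embed x -> Ax + b as an (n+1)x(n+1) matrix in homogeneous coordinates."""
    n = A.shape[0]
    M = np.eye(n + 1)
    M[:n, :n] = A
    M[:n, n] = b
    return M

A1, b1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, -1.0])
A2, b2 = np.array([[0.0, -1.0], [1.0, 0.0]]), np.array([3.0, 0.5])

M1, M2 = affine_matrix(A1, b1), affine_matrix(A2, b2)

x = np.array([1.0, 2.0])
xh = np.append(x, 1.0)                            # homogeneous coordinates (x; 1)

print(np.allclose((M1 @ xh)[:2], A1 @ x + b1))    # True: same as L(x) = A1 x + b1

# The composition is again affine: M1 M2 = [A1 A2, A1 b2 + b1; 0 1].
M12 = M1 @ M2
print(np.allclose(M12[:2, :2], A1 @ A2))          # True
print(np.allclose(M12[:2, 2], A1 @ b2 + b1))      # True
```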
The Orthogonal Group O(n)
A matrix A ∈M(n) is called orthogonal if it preserves the inner
product, i.e.:
〈Ax ,Ay〉 = 〈x , y〉, ∀x , y ∈ Rn.
The set of all orthogonal matrices forms the orthogonal group
O(n), which is a subgroup of GL(n). For an orthogonal matrix
R we have
〈Rx ,Ry〉 = x>R>Ry = x>y , ∀x , y ∈ Rn
Therefore we must have R>R = RR> = I, in other words:
O(n) = {R ∈ GL(n) | R>R = I}
The above identity shows that for any orthogonal matrix R, we
have det(R>R) = (det(R))² = det(I) = 1, such that
det(R) ∈ {±1}.
The subgroup of O(n) with det(R) = +1 is called the special
orthogonal group SO(n). SO(n) = O(n) ∩ SL(n). In particular,
SO(3) is the group of all 3-dimensional rotation matrices.
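As an illustration, a rotation about the z-axis (angle chosen arbitrarily) satisfies both defining properties and preserves the inner product:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
# Rotation about the z-axis, an element of SO(3).
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

print(np.allclose(R.T @ R, np.eye(3)))    # True: R^T R = I, so R ∈ O(3)
print(np.isclose(np.linalg.det(R), 1.0))  # True: det(R) = +1, so R ∈ SO(3)

# Inner products (and hence lengths and angles) are preserved.
x, y = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
print(np.isclose(x @ y, (R @ x) @ (R @ y)))   # True
```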
The Euclidean Group E(n)
A Euclidean transformation L from Rn to Rn is defined by an
orthogonal matrix R ∈ O(n) and a vector T ∈ Rn:
L : Rn → Rn; x ↦ Rx + T.
The set of all such transformations is called the Euclidean
group E(n). It is a subgroup of the affine group A(n).
Embedded by homogeneous coordinates, we get:
E(n) = { [R T; 0 1] | R ∈ O(n), T ∈ Rn }.
If R ∈ SO(n) (i.e. det(R) = 1), then we have the special
Euclidean group SE(n). In particular, SE(3) represents the
rigid-body motions (dt.: Starrkörpertransformationen) in R3.
In summary:
SO(n) ⊂ O(n) ⊂ GL(n), SE(n) ⊂ E(n) ⊂ A(n) ⊂ GL(n + 1).
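A brief NumPy sketch (reusing a z-axis rotation and an arbitrary translation) of a rigid-body motion in homogeneous coordinates; its inverse [R> −R>T; 0 1] is again an element of SE(3):

```python
import numpy as np

theta = 0.5
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])      # R ∈ SO(3)
T = np.array([1.0, 2.0, 3.0])        # translation

g = np.eye(4)                        # g ∈ SE(3) in homogeneous coordinates
g[:3, :3], g[:3, 3] = R, T

g_inv = np.eye(4)
g_inv[:3, :3], g_inv[:3, 3] = R.T, -R.T @ T

print(np.allclose(g @ g_inv, np.eye(4)))   # True: the inverse is again a rigid-body motion
```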
Range, Span, Null Space and Kernel
Let A ∈ Rm×n be a matrix defining a linear map from Rn to Rm.
The range or span of A (dt.: Bild) is defined as the subspace of
Rm which can be ‘reached’ by A:
range(A) = { y ∈ Rm | ∃x ∈ Rn : Ax = y }.
The range of a matrix A is given by the span of its column
vectors.
The null space or kernel of a matrix A (dt.: Kern) is given by the
subset of vectors x ∈ Rn which are mapped to zero:
null(A) ≡ ker(A) = { x ∈ Rn | Ax = 0 }.
The null space of a matrix A is given by the vectors orthogonal
to its row vectors. Matlab: Z=null(A).
The concepts of range and null space are useful when studying
the solution of linear equations. The system Ax = b will have a
solution x ∈ Rn if and only if b ∈ range(A). Moreover, this
solution will be unique only if ker(A) = {0}. Indeed, if xs is a
solution of Ax = b and xo ∈ ker(A), then xs + xo is also a
solution: A(xs + xo) = Axs + Axo = b.
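A NumPy sketch (with a deliberately rank-deficient example matrix) that extracts a null-space basis from the SVD and checks whether a given b lies in range(A); the helper solvable is just for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1: second row = 2 * first row

U, S, Vt = np.linalg.svd(A)
r = np.sum(S > 1e-12)                   # numerical rank
null_basis = Vt[r:].T                   # columns span ker(A)
print(np.allclose(A @ null_basis, 0))   # True

b_in  = np.array([1.0, 2.0])            # in range(A) (proportional to the columns)
b_out = np.array([1.0, 0.0])            # not in range(A)

def solvable(A, b, tol=1e-10):
    # Ax = b has a solution iff b ∈ range(A), i.e. rank([A | b]) = rank(A).
    return np.linalg.matrix_rank(np.column_stack([A, b]), tol) == np.linalg.matrix_rank(A, tol)

print(solvable(A, b_in))    # True
print(solvable(A, b_out))   # False
```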
Rank of a Matrix
The rank of a matrix (dt. Rang) is the dimension of its range:
rank(A) = dim (range(A)) .
The rank of a matrix A ∈ Rm×n has the following properties:
1 rank(A) = n − dim(ker(A)).
2 0 ≤ rank(A) ≤ min{m,n}.
3 rank(A) is equal to the maximum number of linearly
independent row (or column) vectors of A.
4 rank(A) is the highest order of a nonzero minor of A,
where a minor of order k is the determinant of a k × k
submatrix of A.
5 Sylvester’s inequality: Let B ∈ Rn×k .
Then AB ∈ Rm×k and
rank(A) + rank(B)− n ≤ rank(AB) ≤ min{rank(A), rank(B)}.
6 For any nonsingular matrices C ∈ Rm×m and D ∈ Rn×n,
we have: rank(A) = rank(CAD). Matlab: d=rank(A).
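A quick numerical check of properties 5 and 6 on random matrices (NumPy's matrix_rank plays the role of Matlab's rank):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 4, 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, k))

rA, rB, rAB = (np.linalg.matrix_rank(M) for M in (A, B, A @ B))

# Sylvester's inequality: rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B)).
print(rA + rB - n <= rAB <= min(rA, rB))        # True

# Multiplication by nonsingular matrices does not change the rank.
C = rng.standard_normal((m, m))
D = rng.standard_normal((n, n))
print(np.linalg.matrix_rank(C @ A @ D) == rA)   # True (random C, D are nonsingular w.h.p.)
```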
Eigenvalues and Eigenvectors
Let A ∈ Cn×n be a complex matrix. A non-zero vector v ∈ Cn is
called a (right) eigenvector of A if:
Av = λv , with λ ∈ C.
λ is called an eigenvalue of A. Similarly v is called a left
eigenvector of A, if v>A = λv> for some λ ∈ C.
The spectrum σ(A) of a matrix A is the set of all its eigenvalues.
Matlab:
[V,D]=eig(A);
where D is a diagonal matrix containing the eigenvalues and V
is a matrix whose columns are the corresponding eigenvectors,
such that AV=VD.
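The NumPy counterpart is np.linalg.eig, which returns the eigenvalues and the matrix of right eigenvectors; a sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of V are the (right) eigenvectors, lam holds the eigenvalues.
lam, V = np.linalg.eig(A)
D = np.diag(lam)

print(lam)                          # eigenvalues 3 and 1 (order not guaranteed)
print(np.allclose(A @ V, V @ D))    # True: A V = V D
```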
Properties of Eigenvalues and Eigenvectors
Let A ∈ Rn×n be a square matrix. Then:
1 If Av = λv for some λ ∈ R, then there also exists a
left-eigenvector η ∈ Rn : η>A = λη>. Hence
σ(A) = σ(A>).
2 The eigenvectors of a matrix A associated with different
eigenvalues are linearly independent.
3 All eigenvalues σ(A) are the roots of the characteristic
polynomial det(λI − A) = 0. Therefore det(A) is equal to
the product of all eigenvalues (some of which may appear
multiple times).
4 If B = PAP−1 for some nonsingular matrix P, then
σ(B) = σ(A).
5 If λ ∈ C is an eigenvalue, then its complex conjugate λ̄ is also an
eigenvalue, i.e. σ(A) = {λ̄ | λ ∈ σ(A)} for real matrices A.
Symmetric Matrices
A matrix S ∈ Rn×n is called symmetric if S> = S. A symmetric
matrix S is called positive semi-definite (denoted by S ≥ 0 or
S ⪰ 0) if x>Sx ≥ 0 ∀x. S is called positive definite (denoted by
S > 0 or S ≻ 0) if x>Sx > 0 ∀x ≠ 0.
Let S ∈ Rn×n be a real symmetric matrix. Then:
1 All eigenvalues of S are real, i.e. σ(S) ⊂ R.
2 Eigenvectors vi and vj of S corresponding to distinct
eigenvalues λi 6= λj are orthogonal.
3 There always exist n orthonormal eigenvectors of S which
form a basis of Rn. Let V = (v1, . . . , vn) ∈ O(n) be the
orthogonal matrix of these eigenvectors, and
Λ = diag{λ1, . . . , λn} the diagonal matrix of eigenvalues.
Then we have S = V Λ V>.
4 S is positive (semi-)definite, if all eigenvalues are positive
(nonnegative).
5 Let S be positive semi-definite and λ1, λn the largest and
smallest eigenvalue. Then λ1 = max_{|x|=1} 〈x, Sx〉 and
λn = min_{|x|=1} 〈x, Sx〉.
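A NumPy sketch using np.linalg.eigh (intended for symmetric matrices) on a matrix that is positive semi-definite by construction:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
S = B.T @ B                       # symmetric and positive semi-definite by construction

lam, V = np.linalg.eigh(S)        # real eigenvalues (ascending), orthonormal eigenvectors
Lam = np.diag(lam)

print(np.all(lam >= -1e-12))                  # True: eigenvalues are nonnegative
print(np.allclose(V.T @ V, np.eye(4)))        # True: V is orthogonal
print(np.allclose(S, V @ Lam @ V.T))          # True: S = V Λ V^T

# Extremal eigenvalues as extrema of <x, Sx> over unit vectors (property 5),
# checked here on the corresponding eigenvectors.
print(np.isclose(V[:, -1] @ S @ V[:, -1], lam[-1]))   # largest eigenvalue
print(np.isclose(V[:, 0] @ S @ V[:, 0], lam[0]))      # smallest eigenvalue
```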
Norms of Matrices
There are many ways to define norms on the space of matrices
A ∈ Rm×n. They can be defined based on norms on the
domain or codomain spaces on which A operates. In particular,
the induced 2-norm of a matrix A is defined as
||A||2 ≡ max_{|x|2=1} |Ax|2 = max_{|x|2=1} √〈x, A>Ax〉.
Alternatively, one can define the Frobenius norm of A as:
||A||f ≡ √(∑i,j aij²) = √(trace(A>A)).
Note that these norms are in general not the same. Since the
matrix A>A is symmetric and pos. semi-definite, we can
diagonalize it as A>A = V diag{σ1², . . . , σn²} V> with
σ1² ≥ σi² ≥ 0. This leads to:
||A||2 = σ1, and ||A||f = √(trace(A>A)) = √(σ1² + . . . + σn²).
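A quick NumPy comparison of the two norms on a random matrix; both can be read off from the singular values:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))

sigma = np.linalg.svd(A, compute_uv=False)    # singular values, descending

two_norm  = np.linalg.norm(A, 2)              # induced 2-norm
frob_norm = np.linalg.norm(A, 'fro')          # Frobenius norm

print(np.isclose(two_norm, sigma[0]))                      # True: ||A||_2 = sigma_1
print(np.isclose(frob_norm, np.sqrt(np.sum(sigma**2))))    # True: ||A||_f = sqrt(sum sigma_i^2)
print(two_norm <= frob_norm)                               # True in general
```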
Skew-symmetric Matrices
A matrix A ∈ Rn×n is called skew-symmetric or anti-symmetric
(dt. schiefsymmetrisch) if A> = −A.
If A is a real skew-symmetric matrix, then:
1 All eigenvalues of A are either zero or purely imaginary,
i.e. of the form iω with i² = −1, ω ∈ R.
2 There exists an orthogonal matrix V such that
A = V Λ V>,
where Λ is a block-diagonal matrix
Λ = diag{A1, . . . ,Am,0, . . . ,0}, with real skew-symmetric
matrices Ai of the form:
Ai = [0 ai; −ai 0] ∈ R2×2, i = 1, . . . , m.
In particular, the rank of any skew-symmetric matrix is even.
Examples of Skew-symmetric Matrices
In Computer Vision, a common skew-symmetric matrix is the one
given by the hat operator of a vector u ∈ R3:
û = [0 −u3 u2; u3 0 −u1; −u2 u1 0] ∈ R3×3.
This is a linear operator from the space of vectors R3 to the
space of skew-symmetric matrices in R3×3.
In particular, the matrix û has the property that
ûv = u × v ,
where × denotes the standard vector cross product in R3. For
u ≠ 0, we have rank(û) = 2 and the null space of û is spanned
by u, because ûu = u>û = 0.
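A NumPy sketch of the hat operator (the function name hat is just a label used here) verifying ûv = u × v and the rank and null-space properties:

```python
import numpy as np

def hat(u):
    """Map u in R^3 to the skew-symmetric matrix û with û v = u x v."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
U = hat(u)

print(np.allclose(U, -U.T))                    # True: û is skew-symmetric
print(np.allclose(U @ v, np.cross(u, v)))      # True: û v = u × v
print(np.linalg.matrix_rank(U))                # 2 (for u ≠ 0)
print(np.allclose(U @ u, 0))                   # True: u spans the null space of û
```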
The Singular Value Decomposition (SVD)
On the previous slides, we have studied many properties of matrices,
such as rank, range, null space, and induced norms of
matrices. Many of these properties can be captured by the
so-called singular value decomposition (SVD).
SVD can be seen as a generalization of eigenvalues and
eigenvectors to non-square matrices. The computation of SVD
is numerically well-conditioned. It is very useful for solving
linear-algebraic problems such as matrix inversion, rank
computation, linear least-squares estimation, projections, and
fixed-rank approximations.
In practice, both singular value decomposition and eigenvalue
decompositions are used quite extensively.
Algebraic Derivation of SVD
Let A ∈ Rm×n with m ≥ n be a matrix of rank(A) = p. Then
there exist
• U ∈ Rm×p whose columns are orthonormal
• V ∈ Rn×p whose columns are orthonormal, and
• Σ ∈ Rp×p,Σ = diag{σ1, . . . , σp}, with σ1 ≥ . . . ≥ σp,
such that
A = U ΣV>.
Note that this generalizes the eigenvalue decomposition. While
the latter decomposes a symmetric square matrix A with an
orthogonal transformation V as:
A = V Λ V>, with V ∈ O(n), Λ = diag{λ1, . . . , λn},
SVD allows us to decompose an arbitrary (non-square) matrix A
of rank p with two transformations U and V with orthonormal
columns as shown above. Nevertheless, we will see that SVD
is based on the eigenvalue decomposition of symmetric square
matrices.
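A NumPy sketch of the decomposition for a non-square example; np.linalg.svd with full_matrices=False returns the thin factors described above (here the rank p equals n):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))           # m = 5 >= n = 3, full rank p = 3 w.h.p.

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD: U is 5x3, Vt is 3x3
Sigma = np.diag(sigma)

print(np.allclose(U.T @ U, np.eye(3)))    # True: columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # True: columns of V are orthonormal
print(np.all(np.diff(sigma) <= 0))        # True: sigma_1 >= ... >= sigma_p
print(np.allclose(A, U @ Sigma @ Vt))     # True: A = U Σ V^T
```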
Proof of SVD Decomposition 1
Given a matrix A ∈ Rm×n with m ≥ n and rank(A) = p, the
matrix
A>A ∈ Rn×n
is symmetric and positive semi-definite. Therefore it can be
decomposed with non-negative eigenvalues σ1² ≥ . . . ≥ σn² ≥ 0
with orthonormal eigenvectors v1, . . . , vn. The σi are called
singular values. Since
ker(A>A) = ker(A) and range(A>A) = range(A>),
we have span{v1, . . . , vp} = range(A>) and
span{vp+1, . . . , vn} = ker(A). Let
ui ≡ (1/σi) Avi ⇔ Avi = σi ui, i = 1, . . . , p,
then the ui ∈ Rm are orthonormal:
〈ui, uj〉 = 1/(σiσj) 〈Avi, Avj〉 = 1/(σiσj) 〈vi, A>Avj〉 = δij.
Proof of SVD Decomposition 2
Complete the set {u1, . . . , up} to a basis {u1, . . . , um} of Rm.
Since Avi = σi ui, we have
A (v1, . . . , vn) = (u1, . . . , um) Σ̃, where Σ̃ ∈ Rm×n carries the
singular values σ1, . . . , σp on its diagonal and zeros everywhere else,
which is of the form AṼ = ŨΣ̃, thus
A = Ũ Σ̃ Ṽ>.
Now simply delete all columns of Ũ and the rows of Ṽ> which
are multiplied by zero singular values and we obtain the form
A = U Σ V>, with U ∈ Rm×p and V ∈ Rn×p.
In Matlab: [U,S,V] = svd(A).
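A NumPy illustration of this last step on a rank-deficient example: compute the decomposition, then keep only the columns and rows belonging to non-zero singular values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                  # m = 3, n = 2, rank p = 1

U_full, s, Vt_full = np.linalg.svd(A)       # U_full is 3x3, Vt_full is 2x2
p = np.sum(s > 1e-12)                       # numerical rank

# Keep only the parts belonging to non-zero singular values.
U, Sigma, Vt = U_full[:, :p], np.diag(s[:p]), Vt_full[:p, :]

print(p)                                    # 1
print(np.allclose(A, U @ Sigma @ Vt))       # True: A = U Σ V^T with U ∈ R^{3x1}, V ∈ R^{2x1}
```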
A Geometric Interpretation of SVD
For A ∈ Rn×n, the singular value decomposition A = U Σ V> is
such that the columns U = (u1, . . . ,un) and V = (v1, . . . , vn)
form orthonormal bases of Rn. If a point x ∈ Rn is mapped to a
point y ∈ Rn by the transformation A, then the coordinates of y
in basis U are related to the coordinates of x in basis V by the
diagonal matrix Σ: each coordinate is merely scaled by the
corresponding singular value:
y = Ax = U Σ V>x ⇔ U>y = Σ V>x .
The matrix A maps the unit sphere into an ellipsoid with
semi-axes σiui . To see this, we call α ≡ V>x the coefficients of
the point x in the basis V and those of y in basis U shall be
called β ≡ U>y. All points of the unit sphere fulfill |x|2² = ∑i αi² = 1.
The above statement says that βi = σi αi. Thus for the points
on the sphere we have
∑i αi² = ∑i βi²/σi² = 1,
which states that the transformed points lie on an ellipsoid
oriented along the axes of the basis U.
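A small NumPy check in 2D (with an arbitrary nonsingular A): points on the unit circle are mapped to points that satisfy the ellipse equation ∑i βi²/σi² = 1 in the basis U:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])                   # arbitrary nonsingular 2x2 matrix

U, sigma, Vt = np.linalg.svd(A)

t = np.linspace(0.0, 2.0 * np.pi, 100)
X = np.vstack([np.cos(t), np.sin(t)])        # points x on the unit circle (columns)
Y = A @ X                                    # mapped points y = A x

beta = U.T @ Y                               # coordinates of y in the basis U
ellipse = np.sum(beta**2 / sigma[:, None]**2, axis=0)
print(np.allclose(ellipse, 1.0))             # True: the image is an ellipse with semi-axes sigma_i u_i
```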
The Generalized (Moore-Penrose) Inverse
For certain square matrices one can define an inverse
matrix, if det(A) ≠ 0. The set of all invertible matrices forms the
group GL(n). One can also define a (generalized) inverse (also
called pseudo inverse) (dt.: Pseudoinverse) for an arbitrary
(non-square) matrix A ∈ Rm×n. If its SVD is A = U Σ V>, the
pseudo inverse is defined as:
A† = V Σ† U>, where Σ† = [Σ1−1 0; 0 0] ∈ Rn×m,
where Σ1 is the diagonal matrix of non-zero singular values. In
Matlab: X=pinv(A). In particular, the pseudo inverse can be
employed in a similar fashion as the inverse of square
invertible matrices:
AA†A = A, A†AA† = A†.
The linear system Ax = b with A ∈ Rm×n of rank r ≤ min(m, n)
can have multiple or no solutions. xmin = A†b is, among all
minimizers of |Ax − b|2, the one with the smallest norm |x|.
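A NumPy sketch (np.linalg.pinv plays the role of Matlab's pinv) on an underdetermined example, checking the two identities above and the minimum-norm property:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])              # m = 2 < n = 3: infinitely many solutions
b = np.array([1.0, 2.0])

A_pinv = np.linalg.pinv(A)
x_min = A_pinv @ b                           # minimum-norm solution of Ax = b

print(np.allclose(A @ A_pinv @ A, A))             # True: A A† A = A
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))   # True: A† A A† = A†
print(np.allclose(A @ x_min, b))                  # True: x_min solves Ax = b (b ∈ range(A))

# Any other solution x_min + x0 with x0 ∈ ker(A) has a larger norm.
x0 = np.array([2.0, -1.0, 1.0])              # a kernel vector: A x0 = 0
print(np.allclose(A @ x0, 0))                                # True
print(np.linalg.norm(x_min) < np.linalg.norm(x_min + x0))    # True
```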