Linear Algebra Module III: Eigenvalues and Eigenvectors

Summer Term 2021

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 1 / 68

Table of Contents

1 The determinant
The Determinant – Overview
Cofactor Expansion
Combinatorial Formulation
Properties of the Determinant

2 Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors
The Characteristic Polynomial
Direct Sum Decomposition
Similarity and Diagonalization
Complex Eigenvalues and Eigenvectors
Multiplicity
Schur’s Theorem and Normal Matrices
Generalized Eigenvectors

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 2 / 68

The Determinant – Overview

The determinant of a matrix exists whenever the matrix is square, and
whenever entries can be added and multiplied, assuming the multiplication
is commutative.

Thus determinants exist for more than just matrices of numbers; for
example, a square matrix whose entries are real-valued functions has a
well-defined determinant.

There are several ways to define the determinant. We present two, and record
some of its basic properties.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 4 / 68

Cofactor Expansion

Let A be an n × n matrix, with n > 1.

Definition of (i , j)th minor

The (i , j)th minor of A – denoted Mij(A) – is the (n− 1)× (n− 1) matrix
derived from A by deleting the i th row and jth column.

Examples

Suppose A =

[  2   3  −1 ]
[ −2  −4   5 ]
[  0  −3   7 ]

Then M23(A) is the 2 × 2 matrix obtained by deleting the second row and
third column:

M23(A) =

[ 2   3 ]
[ 0  −3 ]

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 5 / 68
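The definition above can be sketched in code. A minimal illustration (not from the slides; the function name minor and the 1-based indexing are choices made to match the notation Mij(A)):

```python
# Compute the (i, j)th minor of a square matrix stored as a list of rows.
# i and j are 1-based, matching the notation M_ij(A) in the text.

def minor(A, i, j):
    """Return M_ij(A): the matrix A with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A[r])) if c != j - 1]
            for r in range(len(A)) if r != i - 1]

A = [[2, 3, -1],
     [-2, -4, 5],
     [0, -3, 7]]

print(minor(A, 2, 3))  # [[2, 3], [0, -3]]
```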

Cofactor Expansion
To define the determinant in the framework of cofactors, one uses an
inductive (recursive) definition. Note that if we are given an n × n matrix
A, all of its minors are of dimensions (n − 1) × (n − 1), for which we
can assume the determinant has already been defined.

Definition of (i , j)th cofactor

For indices 1 ≤ i, j ≤ n, define the (i, j)th cofactor of A to be

Aij = (−1)^(i+j) Det(Mij(A))

Then

Definition of the Determinant using cofactors

If A = [a] is a 1 × 1 matrix, then Det(A) = a.
For n > 1, Det(A) = Σ_{j=1}^{n} A(1, j) A1j

This is sometimes also referred to as cofactor expansion along the first row.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 6 / 68
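The recursive definition translates directly into code. A minimal sketch (the helper names minor and det are illustrative, not from the slides):

```python
# Determinant by cofactor expansion along the first row, as defined above:
# Det([a]) = a, and for n > 1,
# Det(A) = sum_{j=1}^{n} A(1, j) * (-1)^(1+j) * Det(M_1j(A)).

def minor(A, i, j):
    # delete row i and column j (1-based indices)
    return [[A[r][c] for c in range(len(A)) if c != j - 1]
            for r in range(len(A)) if r != i - 1]

def det(A):
    n = len(A)
    if n == 1:                      # base case: 1 x 1 matrix
        return A[0][0]
    return sum(A[0][j - 1] * (-1) ** (1 + j) * det(minor(A, 1, j))
               for j in range(1, n + 1))

print(det([[2, 3], [1, -4]]))  # -11
```

Each recursive call works on an (n − 1) × (n − 1) minor, so the running time grows like n!; this is fine for illustration but Gaussian elimination is preferred for numerical work.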

Cofactor Expansion

Examples

Suppose A =

[ 2   3 ]
[ 1  −4 ]

Then M11(A) = [−4], M12(A) = [1], so
A11 = (−1)^(1+1)(−4) = −4 and A12 = (−1)^(1+2)(1) = −1. So
Det(A) = A(1, 1)A11 + A(1, 2)A12 = (2)(−4) + (3)(−1) = −11.

Cofactor expansion applies to an arbitrary 2 × 2 matrix.

Examples

Suppose A =

[ a11  a12 ]
[ a21  a22 ]

Then M11(A) = [a22], M12(A) = [a21], so
A11 = (−1)^(1+1)(a22) = a22 and A12 = (−1)^(1+2)(a21) = −a21. So

Det(A) = A(1, 1)A11 + A(1, 2)A12 = a11a22 − a12a21

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 7 / 68

Cofactor Expansion

The following gives an example of how one would use the definition above
to compute the determinant of a 3× 3 matrix.

Examples

Suppose A =

[  2   3  −1 ]
[ −2  −4   5 ]
[  0  −3   7 ]

Then

M11(A) =
[ −4  5 ]
[ −3  7 ]

M12(A) =
[ −2  5 ]
[  0  7 ]

M13(A) =
[ −2  −4 ]
[  0  −3 ]

The corresponding cofactors are
A11 = (−1)^(1+1)((−4) ∗ 7 − 5 ∗ (−3)) = −13,
A12 = (−1)^(1+2)((−2) ∗ 7 − 5 ∗ 0) = 14,
A13 = (−1)^(1+3)((−2) ∗ (−3) − (−4) ∗ 0) = 6.

So for this matrix
Det(A) = A(1, 1)A11 + A(1, 2)A12 + A(1, 3)A13
       = 2 ∗ (−13) + 3 ∗ 14 + (−1) ∗ 6 = −26 + 42 − 6 = 10.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 8 / 68
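A quick numerical check of this worked example (a self-contained sketch; the compact det below expands along the first row as defined earlier, written with 0-based slicing):

```python
# Recursive first-row cofactor expansion; with 0-based column index j,
# the sign (-1)^(1+j) from the definition becomes (-1)^j.

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum(A[0][j] * (-1) ** j *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

A = [[2, 3, -1],
     [-2, -4, 5],
     [0, -3, 7]]

print(det(A))  # 10, matching the computation above
```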

Combinatorial definition of the determinant

There is also a combinatorial approach to the computation of the
determinant.

Let Sn = {1, 2, . . . , n} be the set consisting of the integers between 1 and
n inclusive. Denote by Σn the set of maps σ : Sn → Sn of Sn to itself
which are bijections, meaning that they are 1-1 and onto. An element
σ ∈ Σn should be thought of as a reordering of the elements of Sn, with
Σn consisting of all such reorderings. Elements of Σn can be multiplied
(with multiplication given by composition), and with respect to that
multiplication every element σ ∈ Σn has an inverse σ−1 ∈ Σn satisfying
σσ−1 = Id. A set satisfying these properties is called a group, and Σn is
typically referred to as the permutation group on n letters.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 9 / 68

Combinatorial definition of the determinant

Among the elements of Σn are permutations of a particularly simple type,
called transpositions. The permutation which switches two numbers i and
j , while leaving all others fixed, is labeled τij .

It is not hard to see that any permutation σ ∈ Σn can be written as a
product of transpositions:

σ = τi1,j1τi2,j2 . . . τim,jm

The way of doing so is far from unique, but it turns out that – given σ –
the number of transpositions used to rewrite σ in this fashion is always
even or always odd. This allows for the definition of the sign of a
permutation:

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 10 / 68
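The parity claim can be illustrated computationally. A sketch (not from the slides): sort the permutation back to the identity using transpositions and count the swaps; any such decomposition uses a swap count of the same parity.

```python
# Sign of a permutation by counting transpositions. sigma is a tuple
# giving (sigma(1), ..., sigma(n)) in one-line notation.

def sgn(sigma):
    perm = list(sigma)
    swaps = 0
    for i in range(len(perm)):
        while perm[i] != i + 1:                  # value i+1 belongs at position i
            j = perm[i] - 1
            perm[i], perm[j] = perm[j], perm[i]  # apply one transposition
            swaps += 1
    return (-1) ** swaps

print(sgn((2, 1, 3)))  # -1: a single transposition
print(sgn((2, 3, 1)))  # 1: a 3-cycle is a product of two transpositions
```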

Combinatorial definition of the determinant

The sign of a permutation

For σ ∈ Σn, the sign of σ, denoted sgn(σ), is given by

sgn(σ) := (−1)^m if σ = τi1,j1 τi2,j2 . . . τim,jm

where the product on the right is a product of m transpositions.

With this, the determinant may alternatively be defined as

Combinatorial definition of Det(A)

For an n × n matrix A,

Det(A) = Σ_{σ∈Σn} sgn(σ) A(1, σ(1)) A(2, σ(2)) . . . A(n, σ(n))

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 11 / 68
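The formula can be evaluated directly for small n. A sketch using itertools.permutations (indices here are 0-based, and the sign is computed by counting inversions, which has the same parity as any transposition decomposition):

```python
from itertools import permutations
from math import prod

def sgn(sigma):
    # the inversion count has the same parity as the transposition count
    n = len(sigma)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                     if sigma[a] > sigma[b])
    return (-1) ** inversions

def det(A):
    # Det(A) = sum over sigma of sgn(sigma) * A(1, sigma(1)) ... A(n, sigma(n))
    n = len(A)
    return sum(sgn(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

A = [[2, 3, -1],
     [-2, -4, 5],
     [0, -3, 7]]

print(det(A))  # 10, agreeing with the cofactor expansion
```

The sum has n! terms, so this is practical only for small n, but it makes properties such as Det(A) = Det(A^T) transparent.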

Properties of the determinant

Basic properties of the determinant

(Determinants commute with products) If A, B are two square
matrices of the same dimensions, then Det(A ∗ B) = Det(A)Det(B).

(Test for singularity) If A is a numerical square matrix, then A is
singular iff Det(A) = 0. Alternatively, A is invertible as a matrix iff
Det(A) is invertible as a number.

(Determinant of triangular matrices) If A is triangular (either upper or
lower), then Det(A) = A(1, 1)A(2, 2) . . . A(n, n) = the product of the
diagonal entries.

(Invariance under transposition) Det(A) = Det(A^T).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 12 / 68
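These properties can be spot-checked numerically. A self-contained sketch (the permutation-formula det and the helper matmul are illustrative choices, not from the slides):

```python
from itertools import permutations
from math import prod

def det(A):
    # combinatorial formula from the previous slide
    n = len(A)
    def sgn(s):
        return (-1) ** sum(s[a] > s[b] for a in range(n) for b in range(a + 1, n))
    return sum(sgn(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 3], [1, -4]]
B = [[0, 1], [5, 2]]

assert det(matmul(A, B)) == det(A) * det(B)      # multiplicativity
assert det([[2, 7], [0, 3]]) == 2 * 3            # triangular: product of diagonal
assert det(list(map(list, zip(*A)))) == det(A)   # invariance under transposition
print("all checks passed")
```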

Properties of the determinant

Determinants of elementary matrices

For any type I elementary matrix Pij, Det(Pij) = −1.
For any type II elementary matrix Di(r), Det(Di(r)) = r.
For any type III elementary matrix Eij(a), Det(Eij(a)) = 1.

Cofactor expansions along rows or columns
If A is an n × n matrix, then

(Rows) For any 1 ≤ i ≤ n, Det(A) = ∑_{j=1}^n A(i, j)A_ij. Moreover, if
1 ≤ i ≠ k ≤ n, then ∑_{j=1}^n A(i, j)A_kj = 0.

(Columns) For any 1 ≤ j ≤ n, Det(A) = ∑_{i=1}^n A(i, j)A_ij. Moreover, if
1 ≤ j ≠ k ≤ n, then ∑_{i=1}^n A(i, j)A_ik = 0.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    13 / 68
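A minimal recursive sketch of cofactor expansion along the first row, in illustrative Python; it assumes the cofactor A_ij carries the sign (−1)^{i+j}, which for the first row (i = 0 in 0-based indexing) reduces to (−1)^j:

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row:
    Det(A) = sum_j A[0][j] * (-1)**j * Det(minor(A, 0, j))."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # The minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total
```

Like the combinatorial sum, this runs in roughly n! time, so it is mainly of theoretical interest for large matrices.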

Properties of the determinant

These properties can be used to produce algorithms for computing the
determinant of a numerical matrix that are considerably more efficient
than direct application of the definition (either using cofactor expansion
along the first row, or the combinatorial sum over all permutations).

We will say that two matrices of the same dimension are type III row
equivalent if one can be derived from the other by using only type III row
operations, and are type I-III row equivalent if one can be derived from the
other by using only type I and type III row operations.

Theorem
If A is an n × n numerical matrix, then A is I-III row equivalent to an
upper triangular matrix.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    14 / 68

Properties of the determinant
A useful consequence of this theorem is the following algorithm for
computing the determinant of a purely numerical matrix A:

Using type I and III row operations, convert A into an upper
triangular matrix B.
Compute Det(B) as the product of its diagonal entries.
Count the number of type I operations used in going from A to B.
If it was even, then Det(A) = Det(B). If it was odd, then
Det(A) = −Det(B).

In practice this procedure computes Det(A) much more quickly than
brute-force application of either definition, and is used in many
applications. However, it is only useful when A is purely numerical
(all of its entries are real or complex numbers). For more general
matrices with non-numerical entries (for example, real-valued
functions), the theorem above no longer holds, and more sophisticated
methods are needed to compute the determinant effectively.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    15 / 68
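The three steps above can be sketched in Python as follows; the partial-pivoting choice and the tolerance tol are implementation details added for numerical stability, not part of the slides:

```python
def det_by_elimination(A, tol=1e-12):
    """Determinant via type I (row swap) and type III (row replacement)
    operations: reduce to upper triangular form, multiply the diagonal
    entries, and flip the sign once for each swap."""
    M = [row[:] for row in A]  # work on a copy
    n = len(M)
    swaps = 0
    for k in range(n):
        # Choose the largest pivot in column k (partial pivoting).
        pivot = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[pivot][k]) < tol:
            return 0.0  # column has no usable pivot: A is singular
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]  # type I operation
            swaps += 1
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= factor * M[k][c]  # type III operation
    product = 1.0
    for i in range(n):
        product *= M[i][i]
    return -product if swaps % 2 else product
```

This runs in O(n³) time, versus the O(n!) cost of either definition applied directly.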

Table of Contents

1 The determinant
The Determinant – Overview
Cofactor Expansion
Combinatorial Formulation
Properties of the Determinant

2 Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors
The Characteristic Polynomial
Direct Sum Decomposition
Similarity and Diagonalization
Complex Eigenvalues and Eigenvectors
Multiplicity
Schur’s Theorem and Normal Matrices
Generalized Eigenvectors

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    16 / 68

Eigenvalues and Eigenvectors – definition

Eigenvalues and eigenvectors of a matrix – Definition

If A is an m × n matrix, v an n × 1 non-zero vector, we say that v is an
eigenvector of A with eigenvalue λ if one has the equality

A ∗ v = λv

In other words, if multiplying v on the left by the matrix A has the same
effect as multiplying it by the scalar λ. We note first that, in order for this
to be at all possible, A ∗ v must also be an n × 1 vector; in other words, A
must be a square matrix.

Given a square matrix, then, the eigenvalue problem is to find a complete
description of the eigenvalues and associated eigenvectors for that matrix.
In this section we will present a systematic way of doing this.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    17 / 68
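A quick numerical illustration of the definition (the matrix and vector below are illustrative; assumes NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

# A * v = (3, 3)^T = 3 v, so v is an eigenvector of A with eigenvalue 3.
assert np.allclose(A @ v, 3 * v)
```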

Eigenvalues and Eigenvectors – definition

Before proceeding with examples, we note that

Theorem
If v is an eigenvector of a matrix A, the eigenvalue associated with it is
unique. (Indeed, if A ∗ v = λv = µv, then (λ − µ)v = 0, and since v ≠ 0,
it follows that λ = µ.)

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    18 / 68

Eigenvalues and Eigenvectors – Examples

Examples
Let A = I_{n×n} be the n × n identity matrix. Then for any v ∈ R^n, one has
A ∗ v = I ∗ v = v = 1v. So in this case, we see that every non-zero vector
in R^n is an eigenvector of A with corresponding eigenvalue 1.

Examples

Let B = [ 2 0 ]
        [ 0 3 ]

Then B ∗ e1 = 2e1, and B ∗ e2 = 3e2. Here the two
standard basis vectors of R^2 are eigenvectors of B, but with different
eigenvalues. Note that, unlike the matrix A in the previous example, not
every vector in R^2 is an eigenvector of B. In particular, if we let
v = e1 + e2 = (1, 1)ᵀ, then B ∗ v = (2, 3)ᵀ, which cannot be written as a
scalar multiple of v.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    19 / 68
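The second example can be reproduced numerically; note that NumPy may report the eigenvalues in either order, so the check below sorts them:

```python
import numpy as np

B = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# Eigenvalues of a diagonal matrix are its diagonal entries.
eigenvalues, eigenvectors = np.linalg.eig(B)
assert np.allclose(np.sort(eigenvalues), [2.0, 3.0])

# v = e1 + e2 is NOT an eigenvector: B * v = (2, 3)^T is not a
# scalar multiple of (1, 1)^T.
v = np.array([1.0, 1.0])
assert np.allclose(B @ v, [2.0, 3.0])
```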

Eigenvalues and Eigenvectors – Alternative Description
So how does one go about finding the eigenvalues of a matrix? In order to
understand more clearly what it is we are looking for, we consider a
reformulation of the defining equation above.

First, we note that the
scalar product λv can be rewritten as a matrix product

λv = (λI) ∗ v

From this we have the following equivalent statements:

A ∗ v = λv ⇔ A ∗ v = (λI) ∗ v ⇔ (A − λI) ∗ v = 0 ⇔ v ∈ N(A − λI)

Thus

Observation

A non-zero vector v is an eigenvector of A with eigenvalue λ if and only if
it lies in the nullspace of the matrix A − λI. In particular, in order for such
a vector to exist, the matrix A − λI must be singular.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    20 / 68
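The observation can be illustrated symbolically; the matrix and candidate eigenvalue below are hypothetical examples, and SymPy's nullspace method returns a basis of N(A − λI):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 3]])
lam = 3  # candidate eigenvalue

# A - lam*I must be singular for lam to be an eigenvalue.
M = A - lam * sp.eye(2)
assert M.det() == 0

# Every non-zero vector in N(A - lam*I) is an eigenvector for lam.
basis = M.nullspace()
v = basis[0]
assert A * v == lam * v
```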

Eigenvalues and Eigenvectors – Alternative Description

This suggests that in order to solve the eigenvalue problem for a matrix A,
we should first determine the values λ for which A − λI is singular; then,
for each such λ, determine a basis for the nullspace of A − λI.

In other words, eigenvalues first, eigenvectors second. And to identify the
values λ for which A − λI is singular, we will use the determinant.

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    21 / 68
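The two-step procedure, eigenvalues first and eigenvectors second, can be sketched with SymPy (the 2 × 2 matrix below is an illustrative example):

```python
import sympy as sp

# An illustrative matrix with characteristic polynomial (lam - 1)(lam - 2).
A = sp.Matrix([[0, 1],
               [-2, 3]])
lam = sp.symbols('lam')

# Step 1 (eigenvalues first): solve Det(A - lam*I) = 0.
char_poly = (A - lam * sp.eye(2)).det()
eigenvalues = sp.solve(char_poly, lam)

# Step 2 (eigenvectors second): a basis of N(A - lam*I) for each root.
eigenvectors = {ev: (A - ev * sp.eye(2)).nullspace() for ev in eigenvalues}

# Each basis vector really does satisfy A * v = lam * v.
for ev, basis in eigenvectors.items():
    for v in basis:
        assert A * v == ev * v
```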

Eigenvalues and Eigenvectors – Definition for Linear
Transformations

The above discussion for computing eigenvalues of square matrices can be
easily extended to linear transformations.

Definition
If L : V → V is a linear transformation, we say 0 ≠ v ∈ V is an
eigenvector of L with eigenvalue λ iff L(v) = λv.

Note that this definition is basis-free; it does not require the use of a
particular basis. However, suppose we are given a basis S = {v1, . . . , vn}
for V . Let A = [L]_S be the matrix representation of L with respect to S in
both the domain and range, so that for all v ∈ V one has

[L(v)]_S = A ∗ [v]_S

Then

Linear Algebra Module III: Eigenvalues and Eigenvectors    Summer Term 2021    22 / 68
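The coordinate identity above can be illustrated numerically. In this sketch the standard-coordinate matrix M, the basis matrix P (whose columns are the vectors of S), and the change-of-basis formula A = P⁻¹ M P are choices of the example, not notation from the slides:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # L in standard coordinates: L(x) = M x
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns of P form the basis S
A = np.linalg.inv(P) @ M @ P      # matrix of L with respect to S

v = np.array([2.0, 5.0])
coords_v = np.linalg.solve(P, v)       # [v]_S
coords_Lv = np.linalg.solve(P, M @ v)  # [L(v)]_S

# The coordinate identity [L(v)]_S = A * [v]_S holds.
assert np.allclose(coords_Lv, A @ coords_v)
```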

Eigenvalues and Eigenvectors – Definition for Linear
Transformations

The above discussion for computing eigenvalues of square matrices can be
easily extended to linear transformations.

Definition
If L : V → V is a linear transformation, we say 0 6= v ∈ V is an
eigenvector of L with eigenvalue λ iff L(v) = λv.

Note that this definition is basis-free; it does not require the use of a
particular basis. However, suppose we are given a basis S = {v1, . . . , vn}
for V . Let A = SLS be the matrix representation of L with respect to S in
both the domain and range, so that for all v ∈ V one has

SL(v) = A ∗ Sv

Then

Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 22 / 68

Eigenvalues and Eigenvectors – Definition for Linear
Transformations

The above discussion for computing eigenvalues of square matrices can be
easily extended to linear transformations.

Definition
If L : V → V is a linear transformation, we say 0 6= v ∈ V is an
eigenvector of L with eigenvalue λ iff L(v) = λv.

Note that this definition is basis-free; it does not require the use of a
particular basis. However, suppose we are given a basis S = {v1, . . . , vn}
for V . Let A = SLS be the matrix representation of L with respect to S in
both the domain and range, so that for all v ∈ V one has

SL(v) = A ∗ Sv

Then

Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 22 / 68

Eigenvalues and Eigenvectors – Connection

Proposition
v is an eigenvector for L with eigenvalue λ precisely when the coordinate
vector Sv is an eigenvector for A with eigenvalue λ.

In this way, fixing a basis for V allows us to determine the eigenvalues and
eigenvectors for L by doing the same for the matrix representation A of L.

We will investigate this further in the following sections.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 23 / 68

Eigenspaces – Definition

Let A be a real n × n matrix. As we saw above, λ is an eigenvalue of A iff
N(A − λI) ≠ {0}, with the non-zero vectors in this nullspace comprising the
set of eigenvectors of A with eigenvalue λ.

Eigenspace of a matrix – definition
The eigenspace of A corresponding to an eigenvalue λ is
Eλ(A) := N(A− λI) ⊂ Rn.

Note that the dimension of the eigenspace corresponding to a given
eigenvalue must be at least 1, since eigenspaces must contain non-zero
vectors by definition.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 24 / 68

Eigenspaces – Definition

More generally,

Eigenspace of a linear transformation – definition
If L : V → V is a linear transformation, and λ is an eigenvalue of L, then
the eigenspace of L corresponding to λ is

Eλ(L) := ker(L− λIdV ) ⊂ V

Theorem
For each eigenvalue λ of L, Eλ(L) is a subspace of V . Moreover, if S is a
basis for V and A = SLS is the representation of L with respect to S (in
both domain and range), then v ∈ Eλ(L) iff Sv ∈ Eλ(A).

In this way, fixing a basis for V identifies the eigenspaces of L (in V ) with
the eigenspaces of the matrix representation A of L (in Rn).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 25 / 68
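As an illustration of this correspondence (our own example, not from the slides): take L = d/dx on the space of polynomials of degree at most 2, with basis S = {1, x, x²}. The sketch below builds the matrix representation and checks that a constant polynomial, an eigenvector of L with eigenvalue 0, has a coordinate vector that is an eigenvector of A with eigenvalue 0.

```python
import numpy as np

# Matrix representation A = S[L]S of L = d/dx in the basis S = {1, x, x^2}:
# L(1) = 0, L(x) = 1, L(x^2) = 2x, so the columns are their S-coordinates.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# The constant polynomial p(x) = 5 satisfies L(p) = 0 = 0*p, and its
# coordinate vector Sp = (5, 0, 0) satisfies A * Sp = 0 = 0 * Sp.
Sp = np.array([5.0, 0.0, 0.0])
print(A @ Sp)  # [0. 0. 0.]
```

Since A is strictly upper triangular, λ = 0 is its only eigenvalue, matching the fact that only constants are sent to a scalar multiple of themselves by d/dx on this space.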

The Characteristic Polynomial

Definition
For an n × n matrix A, the characteristic polynomial of A is given by

pA(t) := Det(A− tI)

Note the matrix here is not strictly numerical. Precisely,

(A − tI)(i, j) = A(i, j) if i ≠ j,  and  (A − tI)(i, i) = A(i, i) − t.

However, the determinant of such a matrix (using either one of the two
equivalent definitions given above) is still well-defined.

We saw previously that λ is an eigenvalue of A iff A− λI is singular. By
the properties of the determinant listed above, we see that A− λI is
singular iff its determinant is equal to zero. In other words,

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 26 / 68

The Characteristic Polynomial

Theorem
λ is an eigenvalue of A iff pA(λ) = 0; that is, λ is a root of the
characteristic polynomial pA(t) of A.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 27 / 68
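A quick numerical check of this theorem, using the matrix from the worked example that follows. Note that numpy's `np.poly` returns the coefficients of det(tI − A); for a 2 × 2 matrix this agrees with pA(t) = Det(A − tI), since (−1)² = 1.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [4.0, -5.0]])

coeffs = np.poly(A)              # coefficients of t^2 + 3t - 18
roots = np.roots(coeffs)         # roots of the characteristic polynomial
eigvals = np.linalg.eigvals(A)   # eigenvalues computed directly

print(coeffs)                    # [  1.   3. -18.]
print(sorted(roots.real))        # approximately [-6.0, 3.0]
```

The roots of pA(t) and the directly computed eigenvalues coincide, as the theorem asserts.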

The Characteristic Polynomial

Examples

Consider the matrix

A = [ 2   2 ]
    [ 4  −5 ].

Then

pA(t) = Det( [ 2 − t     2    ]
             [   4    −5 − t  ] )
      = (2 − t)(−5 − t) − 8 = t² + 3t − 18 = (t + 6)(t − 3),

indicating that there are two eigenvalues: λ = −6 and λ = 3. Moreover, for
each of these eigenvalues, we can find a basis for the corresponding
eigenspace by computing a basis for the nullspace N(A − λId).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 28 / 68

The Characteristic Polynomial

Examples

For λ = −6:

E−6(A) = N(A + 6Id) = N( [ 8  2 ]
                         [ 4  1 ] ) = Span{ [−1, 4]ᵀ }.

For λ = 3:

E3(A) = N(A − 3Id) = N( [ −1   2 ]
                        [  4  −8 ] ) = Span{ [2, 1]ᵀ }.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 29 / 68
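The eigenspace bases just computed can be verified directly: each basis vector v must satisfy Av = λv for its eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [4.0, -5.0]])

v1 = np.array([-1.0, 4.0])   # basis vector for E_{-6}(A)
v2 = np.array([2.0, 1.0])    # basis vector for E_3(A)

print(A @ v1, -6 * v1)       # both [  6. -24.]
print(A @ v2, 3 * v2)        # both [6. 3.]
```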

The Characteristic Polynomial

Examples

Consider the matrix

A = [ 2  −4 ]
    [ 1   3 ].

Then

pA(t) = Det( [ 2 − t   −4    ]
             [   1    3 − t  ] ) = (2 − t)(3 − t) + 4 = t² − 5t + 10.

In this case the quadratic formula shows that there are no real roots.
Instead, it shows there are two complex roots which are conjugate:

λ = (5 ± √(25 − 40)) / 2 = 5/2 ± (√15 / 2) i

Because A is a real matrix with complex eigenvalues, any eigenvectors
will also be complex. Complex eigenvalues and eigenvectors will be
discussed in more detail below.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 30 / 68
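numpy confirms the complex-conjugate pair computed above; for a real matrix with no real eigenvalues, `np.linalg.eigvals` returns complex values.

```python
import numpy as np

A = np.array([[2.0, -4.0],
              [1.0, 3.0]])

eigvals = np.linalg.eigvals(A)
print(eigvals)  # approximately [2.5+1.9365j, 2.5-1.9365j]
```

Both eigenvalues have real part 5/2, and their imaginary parts are ±√15/2 ≈ ±1.9365, matching the quadratic formula.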

The Characteristic Polynomial

The following is an immediate consequence of the definition of pA(t), and
basic properties of polynomials.

Theorem
If A is an n × n matrix, then pA(t) is a polynomial of degree n.
Consequently, A can have at most n distinct eigenvalues (real or complex).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 31 / 68

The Characteristic Polynomial

Finally, suppose we are given a linear transformation L : V → V with
Dim(V ) = n. We would like to define the characteristic polynomial of L.
The natural thing to do is to fix a basis S of V and set

pL(t) := pA(t), A = SLS

In order for this definition to be valid, we will want to know that the
resulting polynomial does not depend on the particular choice of basis.

Theorem
If S,S ′ are two bases for V , and A = SLS , A′ = S′LS′ , then
pA(t) = pA′(t). Thus the expression for pL(t) given above is independent
of the particular choice of basis for V .

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 32 / 68
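A numerical sketch of this theorem: two matrix representations of the same L are related by A′ = P⁻¹AP for some change-of-basis matrix P, and both yield the same characteristic polynomial. Here P is an arbitrary invertible matrix chosen for illustration, and `np.poly` returns the coefficients of the characteristic polynomial.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [4.0, -5.0]])
P = np.array([[1.0, 1.0],       # an invertible change-of-basis matrix
              [0.0, 1.0]])

Aprime = np.linalg.inv(P) @ A @ P   # representation in the "new" basis

# Both characteristic polynomials have coefficients [1, 3, -18].
print(np.poly(A))
print(np.poly(Aprime))
```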

Direct Sum Decomposition

Direct sum of two subspaces – definition
A vector space V is a sum of subspaces W1, W2 – written as
V = W1 + W2 – if every vector v ∈ V can be written as v = w1 + w2
with wi ∈ Wi (that is, V = Span{W1,W2}).

The vector space V is a direct sum of two subspaces W1,W2 if
V = W1 + W2;
W1 ∩W2 = {0}.

Another way of expressing this is to say that every vector in V can be
written uniquely as a sum of i) a vector in W1 and ii) a vector in W2.

If V is a direct sum of W1 and W2, we write V = W1 ⊕W2.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 33 / 68

Direct Sum Decomposition

A similar description applies more generally to an m-fold direct sum.

Direct sum of a collection of subspaces – definition

V is a direct sum of subspaces Wi , 1 ≤ i ≤ m, if

Every v ∈ V can be written as v = w1 + · · · + wm with wi ∈ Wi ;
Wi ∩ ( ∑_{j ≠ i} Wj ) = {0}.

Theorem
V = W1 ⊕ W2 ⊕ · · · ⊕ Wm if and only if every vector v ∈ V can be
expressed uniquely as v = w1 + · · · + wm with wi ∈ Wi , 1 ≤ i ≤ m.

When V is a direct sum of the subspaces {Wi} we indicate this by writing
either V = ⊕_{i=1}^{m} Wi , or V = W1 ⊕ W2 ⊕ · · · ⊕ Wm.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 34 / 68
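The uniqueness in this theorem can be made concrete with a small sketch (our own example, not from the slides): take R² = W1 ⊕ W2 with W1 = Span{(1, 0)} and W2 = Span{(1, 1)}. Solving one linear system produces the unique coefficients of the decomposition, because the two spanning vectors are linearly independent.

```python
import numpy as np

u1 = np.array([1.0, 0.0])   # spans W1
u2 = np.array([1.0, 1.0])   # spans W2
v = np.array([3.0, 5.0])    # an arbitrary vector to decompose

# Solve [u1 | u2] c = v for the unique coefficients c.
c = np.linalg.solve(np.column_stack([u1, u2]), v)
w1, w2 = c[0] * u1, c[1] * u2
print(w1, w2)  # [-2.  0.] and [5. 5.], with w1 + w2 = v
```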

Direct Sum Decomposition
For a real n × n matrix A, define ER(A) ⊂ Rn to be the subspace spanned by
the real eigenvectors of A. As we have seen, the number of distinct
possible eigenvalues of A is finite.

Theorem
Given a real n × n matrix A, let λ1, . . . , λm be the distinct real eigenvalues
of A. Then

ER(A) = Eλ1(A) ⊕ Eλ2(A) ⊕ · · · ⊕ Eλm(A)

The same is true over C. Let EC(A) ⊂ Cn be the subspace of Cn
spanned by the complex eigenvectors of A ∈ Cn×n.

Theorem
Given a complex n × n matrix A, let λ1, . . . , λm be the distinct complex
eigenvalues of A. Then

EC(A) = Eλ1(A) ⊕ Eλ2(A) ⊕ · · · ⊕ Eλm(A)

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 35 / 68
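The decomposition theorems above can be checked numerically. Here is a small NumPy sketch (illustrative only; the matrix is chosen for convenience) for a 2 × 2 symmetric matrix with two distinct real eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric; eigenvalues 1 and 3

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs span the eigenspaces

# Two distinct eigenvalues give linearly independent eigenvectors, so the
# eigenvector matrix has full rank and R^2 = E_1(A) ⊕ E_3(A).
assert np.linalg.matrix_rank(eigvecs) == 2
assert np.allclose(np.sort(eigvals), [1.0, 3.0])
```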


Direct Sum Decomposition

More generally, let L : V → V be a linear self-map on V (as before).
Without appeal to bases, we write E (L) for the linear span of the
eigenvectors of L in V . Then by exactly the same argument one has

Theorem
If λ1, . . . , λm are the distinct real eigenvalues of L, then

E (L) = Eλ1(L)⊕ Eλ2(L)⊕ · · · ⊕ Eλm (L)

In the case E (L) = V , this yields a direct sum decomposition of V into
eigenspaces of L, which is a very useful thing to know (in the cases it
occurs). We will investigate this property in more detail next.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 36 / 68


Similarity and Diagonalization

Similarity – definition
Two square matrices A and B are said to be similar if there exists an
invertible matrix S such that

B = S ∗ A ∗ S−1

The matrix S appearing in this equation is referred to as a similarity
matrix. Similarity is an important equivalence relation on the set of
square matrices of a given size; in particular, A is
similar to B iff B is similar to A. A number of important properties of a
square matrix remain unchanged under this relation.

Theorem
If A and B are similar, then pA(t) = pB(t). In particular,
Det(A) = Det(B).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 37 / 68
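This invariance is easy to verify numerically. The following NumPy sketch (illustrative, not from the slides; the matrices A and S are arbitrary choices) compares the characteristic polynomials of A and S ∗ A ∗ S−1:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # any invertible matrix works here
B = S @ A @ np.linalg.inv(S)         # B = S * A * S^{-1}

pA = np.poly(A)                      # coefficients of det(tI - A)
pB = np.poly(B)

assert np.allclose(pA, pB)           # p_A(t) = p_B(t)
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
```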


Similarity and Diagonalization

Thus similar matrices have the same eigenvalues, occurring with the same
multiplicity. Moreover, their eigenvectors are related.

Theorem
If B = S ∗ A ∗ S−1, and v is an eigenvector of A with eigenvalue λ, then
S ∗ v is an eigenvector of B with eigenvalue λ. More generally, if
LS : Rn → Rn is the linear transformation LS(w) = S ∗ w, then LS induces
an isomorphism on eigenspaces LS : Eλ(A) ∼= Eλ(B) for all eigenvalues λ.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 38 / 68
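A quick NumPy check of the theorem (illustrative; A, S and v are chosen for convenience): transporting an eigenvector of A by S yields an eigenvector of B = S ∗ A ∗ S−1 with the same eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
S = np.array([[1.0, 2.0],
              [3.0, 7.0]])           # invertible: det(S) = 1
B = S @ A @ np.linalg.inv(S)

v = np.array([1.0, 0.0])             # eigenvector of A with eigenvalue 2
w = S @ v                            # its image under L_S

assert np.allclose(B @ w, 2.0 * w)   # S*v is an eigenvector of B for λ = 2
```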


Similarity and Diagonalization

For a real n × n matrix A, when does Rn decompose as a direct sum of
eigenspaces of A?

To answer this, we consider first the case when A = D is
a diagonal matrix with D(i, i) = λi, 1 ≤ i ≤ n. For such a matrix,
each standard basis vector ei is an eigenvector, as D ∗ ei = λi ei for each
1 ≤ i ≤ n. So for such a matrix one has an evident direct sum
decomposition

Rn = ⊕λ Eλ(A), Eλ(A) = Span{ei | λi = λ}

If A ∈ Rn×n, we will say that A is real-diagonalizable if A is similar to a
diagonal matrix, where the similarity matrix is also in Rn×n (this is one of
the places where one has to be careful whether one is working over the
real or complex numbers).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 39 / 68
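The observation about diagonal matrices can be verified directly. A minimal NumPy sketch (illustrative; the diagonal entries are an arbitrary choice):

```python
import numpy as np

D = np.diag([2.0, 2.0, 5.0])         # λ1 = λ2 = 2, λ3 = 5
I = np.eye(3)                        # columns are the standard basis e1, e2, e3

# Each standard basis vector is an eigenvector: D * e_i = λ_i * e_i.
for i in range(3):
    assert np.allclose(D @ I[:, i], D[i, i] * I[:, i])

# E_2(D) = Span{e1, e2} and E_5(D) = Span{e3}, so dim 2 + dim 1 = 3 = n.
```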


Similarity and Diagonalization

Theorem
Given A ∈ Rn×n, the following statements are equivalent:

1 Rn has a basis consisting of eigenvectors of A.
2 Rn can be written as a direct sum of eigenspaces of A.
3 A is real-diagonalizable.

Many matrices are diagonalizable, but there are also many that are not.
Moreover, a real matrix might be diagonalizable, but not
real-diagonalizable.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 40 / 68


Similarity and Diagonalization – Relation to Base-Change
By the above theorem, a real n × n matrix A is real-diagonalizable
iff there is a matrix V with V−1 ∗ A ∗ V = D. In other words, V is
non-singular and A ∗ V = V ∗ D.

The columns of the non-singular matrix V are eigenvectors of the matrix
A; more precisely, they are the coordinate vectors of those eigenvectors in
the standard basis. If we denote this basis of eigenvectors by
S = {v1, . . . , vn}, then the matrix V can be expressed as

V = [ ev1 ev2 . . . evn ]

from which we see that V is itself a base-transition matrix; precisely,
V = eTS, the transition matrix from S to e. Therefore, if we view A as the
matrix representation of a linear transformation L, one has

D = SLS = STe ∗ eLe ∗ eTS = V−1 ∗ A ∗ V

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 41 / 68
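The identity D = V−1 ∗ A ∗ V can be reproduced numerically. In the sketch below (NumPy, illustrative; the matrix A is an arbitrary symmetric example) the eigenvector matrix returned by np.linalg.eig plays the role of V:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])           # eigenvalues 3 and -1

eigvals, V = np.linalg.eig(A)        # columns of V: eigenvector coordinates
D = np.linalg.inv(V) @ A @ V         # D = V^{-1} * A * V

assert np.allclose(D, np.diag(eigvals))   # conjugation by V diagonalizes A
```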


Similarity and Diagonalization – Real-diagonalizable vs
Diagonalizable

Closely related to the difference between real-diagonalizable and
diagonalizable is the fact that real matrices need not have real eigenvalues.
In fact, some real matrices have no eigenvalues over the real numbers.

Examples

Let A =
[ 0  1 ]
[ −1 0 ]
Then the characteristic polynomial of A is pA(t) = t2 + 1,
which is an irreducible quadratic polynomial with no real roots. Thus A
has no eigenvectors in R2.

What should be done with such matrices? The answer is that even if one
is primarily concerned with real matrices and working over the real
numbers, there are cases where one needs to enlarge the set of scalars to
C. This is one of those cases.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 42 / 68
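This example can be confirmed numerically: NumPy computes eigenvalues over C, and for the rotation matrix above it returns the roots ±i of t2 + 1 (illustrative sketch, not part of the lecture):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # the rotation matrix from the example

eigvals = np.linalg.eigvals(A)       # eigenvalues computed over C

# p_A(t) = t^2 + 1 has no real roots; the eigenvalues are the pair ±i.
assert np.allclose(np.sort_complex(eigvals), [-1j, 1j])
```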


Complex Eigenvalues and Eigenvectors

All of the constructions we have done so far over R extend naturally to C,
with some slight adjustment for the case of inner products (we will discuss
this in more detail below). For now, the main reason for considering
complex numbers has to do with the factorization of polynomials. The key
result one wants to know (whose proof involves techniques well beyond the
scope of linear algebra) is

The Fundamental Theorem of Algebra
Any non-constant polynomial p(z) with complex coefficients has a
complex root. Consequently, any non-constant polynomial with real or
complex coefficients can be factored over C into a product of linear terms

p(z) = c(z − r1)(z − r2) . . . (z − rn), c, r1, r2, . . . , rn ∈ C

Moreover, this factorization of p is unique up to reordering of the terms.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 43 / 68


Complex Eigenvalues and Eigenvectors
In particular, the characteristic polynomial of a real or complex matrix will
always factor completely into linear terms over the complex numbers. If we
restrict to working over the real numbers we have the following

Theorem
If p = p(x) is a polynomial of degree n with real coefficients, then p can
be factored as a product

p(x) = c(x − r1)(x − r2) . . . (x − rk)q1(x)q2(x) . . . ql (x)

where
ri ∈ R, 1 ≤ i ≤ k are the real roots of p;
each qj(x) = x2 + bjx + cj is an irreducible quadratic that over C
factors as qj(x) = (x − zj)(x − zj);
n = k + 2l .

Moreover, this factorization of p is unique up to reordering of the terms.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 44 / 68
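The conjugate-pair structure in this factorization shows up numerically. The sketch below (NumPy, illustrative; the polynomial is an arbitrary example) factors p(x) = (x − 1)(x2 + 1), for which k = 1 and l = 1:

```python
import numpy as np

coeffs = [1.0, -1.0, 1.0, -1.0]      # p(x) = x^3 - x^2 + x - 1
roots = np.roots(coeffs)

real_roots = roots[np.isclose(roots.imag, 0.0)]
complex_roots = roots[~np.isclose(roots.imag, 0.0)]

# k = 1 real root (r1 = 1) and one irreducible quadratic x^2 + 1, whose
# roots are the conjugate pair ±i; n = k + 2l = 1 + 2 = 3.
assert np.allclose(np.sort(real_roots.real), [1.0])
assert np.allclose(np.sort_complex(complex_roots), [-1j, 1j])
```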


Complex Eigenvalues and Eigenvectors

With these theorems in mind, let’s take a closer look at the example from
the previous section.

Examples

The matrix
A =
[  0  1 ]
[ −1  0 ]
has characteristic polynomial pA(t) = t^2 + 1, which is irreducible over R
(it has no real roots). Consequently, A has no real eigenvectors in R2.
It is easy to see why, geometrically: the action of left-multiplication by
A corresponds to clockwise rotation by 90◦.

Factoring pA over C, we get pA(t) = (t − i)(t + i), where i = √−1. Because
an eigenspace must have dimension greater than or equal to 1, and the
dimension of C2 (as a vector space over C) is 2, we can conclude that
both Ei (A) and E−i (A) must be 1-dimensional vector spaces over C.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 45 / 68
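The claim that ±i are the eigenvalues of this rotation matrix can be checked directly with Python's built-in complex numbers. This is a minimal sketch; the helper `mat_vec` and the variable names are our own, not part of the course material.

```python
# Verify that A = [[0, 1], [-1, 0]] has eigenvalues +/- i, and that
# (1, i) and (1, -i) are corresponding eigenvectors, using Python's
# built-in complex arithmetic.

A = [[0, 1], [-1, 0]]

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# Both roots of the characteristic polynomial p_A(t) = t^2 + 1:
for lam in (1j, -1j):
    assert lam**2 + 1 == 0

# v = (1, i) is an eigenvector for lambda = i: A v = (i, -1) = i * (1, i).
v = [1, 1j]
assert mat_vec(A, v) == [1j*v[0], 1j*v[1]]

# Likewise w = (1, -i) is an eigenvector for lambda = -i.
w = [1, -1j]
assert mat_vec(A, w) == [-1j*w[0], -1j*w[1]]
```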


Complex Eigenvalues and Eigenvectors

Examples

A vector v = (z1, z2)^T is an eigenvector of A corresponding to the
eigenvalue i precisely when iz1 = z2. Similarly, it is an eigenvector of A
corresponding to the eigenvalue −i precisely when −iz1 = z2. Thus

Ei (A) = span{ (1, i)^T }, and E−i (A) = span{ (1, −i)^T }.

The matrix A is an example of a real matrix which is not
real-diagonalizable, but is diagonalizable over C. If we set

S =
[ 1   1 ]
[ i  −i ]

then

S−1 ∗ A ∗ S = D =
[ i   0 ]
[ 0  −i ]

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 46 / 68
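The similarity S−1 ∗ A ∗ S = D can be verified by hand-rolled 2×2 complex arithmetic. The sketch below (helper names `mat_mul` and `inv2` are our own) uses the adjugate formula for the 2×2 inverse:

```python
# Verify S^{-1} A S = diag(i, -i) for the rotation matrix A and the
# eigenvector matrix S from the slide.

def mat_mul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[r][k]*Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    return [[ d/det, -b/det],
            [-c/det,  a/det]]

A = [[0, 1], [-1, 0]]
S = [[1, 1], [1j, -1j]]      # columns are the eigenvectors (1, i), (1, -i)

D = mat_mul(inv2(S), mat_mul(A, S))
assert D == [[1j, 0], [0, -1j]]
```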


Complex Eigenvalues and Eigenvectors

In general, we will say A is diagonalizable if it is so over C; this
property can be expressed in various equivalent ways, just as before in
the real case.

Theorem
Given A ∈ Cn×n, the following statements are equivalent:

1 Cn has a basis consisting of eigenvectors of A.
2 Cn can be written as a direct sum of eigenspaces of A.
3 A is diagonalizable.

For example, with the matrix
A =
[  0  1 ]
[ −1  0 ]
examined above, the two eigenspaces combine to give a direct sum
decomposition

C2 = Ei (A) ⊕ E−i (A)

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 47 / 68
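The direct sum C2 = Ei(A) ⊕ E−i(A) means every vector (z1, z2) splits uniquely as a·(1, i) + b·(1, −i). Solving that 2×2 system gives a = (z1 − i·z2)/2 and b = (z1 + i·z2)/2 (our own algebra, shown here as an illustrative sketch):

```python
# Split a vector of C^2 into its components in E_i(A) and E_{-i}(A),
# where the eigenspaces are spanned by (1, i) and (1, -i) respectively.

def split(z1, z2):
    a = (z1 - 1j*z2) / 2   # coefficient of (1, i), the E_i(A) component
    b = (z1 + 1j*z2) / 2   # coefficient of (1, -i), the E_{-i}(A) component
    return a, b

z1, z2 = 2, 4
a, b = split(z1, z2)

# Recombining a*(1, i) + b*(1, -i) reproduces (z1, z2), so the two
# eigenspaces really do span C^2.
assert a + b == z1
assert a*1j + b*(-1j) == z2
```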


Complex Eigenvalues and Eigenvectors

On the other hand, for the matrix
B =
[ 1  1 ]
[ 0  1 ]
with characteristic polynomial pB(t) = (1 − t)^2 = (t − 1)^2, the only
eigenvalue is t = 1, and working over C instead of R doesn’t change the
picture in terms of diagonalizability.

In order to better understand the conditions that can result in
non-diagonalizable matrices, we need to discuss multiplicity. This is done
next.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 48 / 68
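Why B fails to be diagonalizable even over C can be seen by solving (B − I)v = 0 by hand: the eigenspace for t = 1 is only one-dimensional. A small sketch (helper names are our own):

```python
# For B = [[1, 1], [0, 1]], the eigenvalue 1 has algebraic multiplicity 2,
# but (B - I)(v1, v2) = (v2, 0) vanishes only when v2 = 0, so the
# eigenspace E_1(B) is spanned by (1, 0) alone.

B = [[1, 1], [0, 1]]
I2 = [[1, 0], [0, 1]]

M = [[B[r][c] - I2[r][c] for c in range(2)] for r in range(2)]
assert M == [[0, 1], [0, 0]]     # M = B - I

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

assert apply(M, [1, 0]) == [0, 0]    # (1, 0) is an eigenvector
assert apply(M, [0, 1]) != [0, 0]    # (0, 1) is not, so dim E_1(B) = 1 < 2
```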


Algebraic vs Geometric Multiplicity

The identity matrix
I =
[ 1  0 ]
[ 0  1 ]
is obviously diagonalizable (it is diagonal to start with), and R2 has a
basis consisting of eigenvectors of I, namely the standard basis {e1, e2}.

On the other hand, we have seen that
[ 1  1 ]
[ 0  1 ]
is not diagonalizable, even though it has the same characteristic
polynomial as I. This example tells us, among other things, that the
characteristic polynomial alone does not determine whether or not a given
matrix is diagonalizable.

As it turns out, this problem can be studied one eigenvalue at a time.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 49 / 68


Algebraic vs Geometric Multiplicity

Let A be an arbitrary n × n matrix, and λ an eigenvalue of A. The
geometric multiplicity of λ is defined as

mg (λ) := Dim(Eλ(A))

while its algebraic multiplicity ma(λ) is the multiplicity of λ viewed as
a root of pA(t) (by definition, this is the highest power of (t − λ) that
divides pA(t) with zero remainder).

Theorem
For all square matrices A and eigenvalues λ, mg (λ) ≤ ma(λ). Moreover,
this holds over both R and C (in other words, both for real matrices with
real eigenvalues, and more generally for complex matrices with complex
eigenvalues).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 50 / 68


Algebraic vs Geometric Multiplicity

This theorem has an important consequence. Let λ1, . . . , λk denote the
(distinct) eigenvalues of the n × n matrix A. Then (working over C if
needed) we can write the characteristic polynomial of A as

pA(t) = (−1)^n (t − λ1)^m1 (t − λ2)^m2 . . . (t − λk)^mk

where mi = ma(λi ) is the algebraic multiplicity of λi .

From this factorization, we see that the sum of the algebraic
multiplicities must equal the degree of pA(t), which equals n:

∑_{i=1}^{k} ma(λi ) = ∑_{i=1}^{k} mi = n

By the above theorem we have

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 51 / 68
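For a triangular matrix this bookkeeping is especially concrete: the characteristic polynomial is the product of (t − d) over the diagonal entries d, so the algebraic multiplicities are just the counts of repeated diagonal entries. A small sketch with a made-up 6×6 diagonal:

```python
# Tally algebraic multiplicities from the diagonal of an upper triangular
# matrix and check that they sum to n (the sample diagonal is our own).
from collections import Counter

diag = [2, 2, 5, 5, 5, 7]    # diagonal of a hypothetical 6x6 triangular matrix
ma = Counter(diag)           # eigenvalue -> algebraic multiplicity

assert ma[2] == 2 and ma[5] == 3 and ma[7] == 1
assert sum(ma.values()) == len(diag)   # sum of multiplicities equals n = 6
```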


Algebraic vs Geometric Multiplicity

Theorem
Over C the matrix A is diagonalizable iff for each eigenvalue λ one has
mg (λ) = ma(λ). If A is a real matrix and pA(t) factors over R into a
product of linear terms, then the same holds over R.

This leads to one of the central new ideas of this Module.

Defective matrices – definition
An n × n matrix A is said to be defective if the sum of the geometric
multiplicities over C is strictly less than n.

Our last theorem shows that this happens iff there is some eigenvalue
which is defective; that is, an eigenvalue λ of A for which mg (λ) < ma(λ). The defectiveness of a particular eigenvalue λ is then represented by the difference ma(λ)−mg (λ). Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 52 / 68 Algebraic vs Geometric Multiplicity Theorem Over C the matrix A is diagonalizable iff for each eigenvalue λ one has mg (λ) = ma(λ). If A is a real matrix and pA(t) factors over R into a product of linear terms, then the same holds over R. This leads to one of the central new ideas of this Module. Defective matrices - definition An n × n matrix A is said to be defective if the sum of the geometric multiplicities over C is strictly less than n. Our last theorem shows that this happens iff there is some eigenvalue which is defective; that is, an eigenvalue λ of A for which mg (λ) < ma(λ). The defectiveness of a particular eigenvalue λ is then represented by the difference ma(λ)−mg (λ). Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 52 / 68 Algebraic vs Geometric Multiplicity Theorem Over C the matrix A is diagonalizable iff for each eigenvalue λ one has mg (λ) = ma(λ). If A is a real matrix and pA(t) factors over R into a product of linear terms, then the same holds over R. This leads to one of the central new ideas of this Module. Defective matrices - definition An n × n matrix A is said to be defective if the sum of the geometric multiplicities over C is strictly less than n. Our last theorem shows that this happens iff there is some eigenvalue which is defective; that is, an eigenvalue λ of A for which mg (λ) < ma(λ). The defectiveness of a particular eigenvalue λ is then represented by the difference ma(λ)−mg (λ). Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 52 / 68 Algebraic vs Geometric Multiplicity Theorem Over C the matrix A is diagonalizable iff for each eigenvalue λ one has mg (λ) = ma(λ). 
If A is a real matrix and pA(t) factors over R into a product of linear terms, then the same holds over R. This leads to one of the central new ideas of this Module. Defective matrices - definition An n × n matrix A is said to be defective if the sum of the geometric multiplicities over C is strictly less than n. Our last theorem shows that this happens iff there is some eigenvalue which is defective; that is, an eigenvalue λ of A for which mg (λ) < ma(λ). The defectiveness of a particular eigenvalue λ is then represented by the difference ma(λ)−mg (λ). Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 52 / 68 Algebraic vs Geometric Multiplicity Theorem Over C the matrix A is diagonalizable iff for each eigenvalue λ one has mg (λ) = ma(λ). If A is a real matrix and pA(t) factors over R into a product of linear terms, then the same holds over R. This leads to one of the central new ideas of this Module. Defective matrices - definition An n × n matrix A is said to be defective if the sum of the geometric multiplicities over C is strictly less than n. Our last theorem shows that this happens iff there is some eigenvalue which is defective; that is, an eigenvalue λ of A for which mg (λ) < ma(λ). The defectiveness of a particular eigenvalue λ is then represented by the difference ma(λ)−mg (λ). Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 52 / 68 Algebraic vs Geometric Multiplicity On the other hand, for almost all matrices (from a statistical point of view), factorization of pA(t) into linear terms leads to n distinct eigenvalues, which therefore must each have multiplicity equal to 1. Since the geometric multiplicity of a given eigenvalue must be at least 1 (as the corresponding eigenspace must be non-zero), we see that any eigenvalue with algebraic multiplicity 1 cannot be defective. In this case we must have ∑ i mg (λi ) = n, and so Theorem Any n × n matrix with n distinct eigenvalues is diagonalizable over C. 
If the matrix and its eigenvalues are all real, the same statement is true over R. Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 53 / 68 Algebraic vs Geometric Multiplicity On the other hand, for almost all matrices (from a statistical point of view), factorization of pA(t) into linear terms leads to n distinct eigenvalues, which therefore must each have multiplicity equal to 1. Since the geometric multiplicity of a given eigenvalue must be at least 1 (as the corresponding eigenspace must be non-zero), we see that any eigenvalue with algebraic multiplicity 1 cannot be defective. In this case we must have ∑ i mg (λi ) = n, and so Theorem Any n × n matrix with n distinct eigenvalues is diagonalizable over C. If the matrix and its eigenvalues are all real, the same statement is true over R. Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 53 / 68 Algebraic vs Geometric Multiplicity On the other hand, for almost all matrices (from a statistical point of view), factorization of pA(t) into linear terms leads to n distinct eigenvalues, which therefore must each have multiplicity equal to 1. Since the geometric multiplicity of a given eigenvalue must be at least 1 (as the corresponding eigenspace must be non-zero), we see that any eigenvalue with algebraic multiplicity 1 cannot be defective. In this case we must have ∑ i mg (λi ) = n, and so Theorem Any n × n matrix with n distinct eigenvalues is diagonalizable over C. If the matrix and its eigenvalues are all real, the same statement is true over R. Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 53 / 68 Algebraic vs Geometric Multiplicity On the other hand, for almost all matrices (from a statistical point of view), factorization of pA(t) into linear terms leads to n distinct eigenvalues, which therefore must each have multiplicity equal to 1. 
Since the geometric multiplicity of a given eigenvalue must be at least 1 (as the corresponding eigenspace must be non-zero), we see that any eigenvalue with algebraic multiplicity 1 cannot be defective. In this case we must have ∑ i mg (λi ) = n, and so Theorem Any n × n matrix with n distinct eigenvalues is diagonalizable over C. If the matrix and its eigenvalues are all real, the same statement is true over R. Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 53 / 68 Shur’s Theorem If A ∈ Cm×n is a complex m × n matrix, the conjugate-transpose of A is defined entrywise by A∗(i , j) := A(j , i) For complex matrices, this is the analogue of the transpose. A complex n × n matrix U is said to be unitary if U∗ = U−1, and Hermitian if U∗ = U (these are the complex analogues of orthogonal and symmetric, respectively). The proof of the following theorem involves definitions and methods developed in the next module; it is included here for completeness. The theorem will allow us to determine necessary and sufficient conditions for diagonalizability. Shur’s Theorem Let A be a complex n× n matrix. Then there is an n× n unitary matrix U and an n × n upper triangular matrix T such that U∗ ∗ A ∗ U = T Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 54 / 68 Shur’s Theorem If A ∈ Cm×n is a complex m × n matrix, the conjugate-transpose of A is defined entrywise by A∗(i , j) := A(j , i) For complex matrices, this is the analogue of the transpose. A complex n × n matrix U is said to be unitary if U∗ = U−1, and Hermitian if U∗ = U (these are the complex analogues of orthogonal and symmetric, respectively). The proof of the following theorem involves definitions and methods developed in the next module; it is included here for completeness. The theorem will allow us to determine necessary and sufficient conditions for diagonalizability. Shur’s Theorem Let A be a complex n× n matrix. 
Then there is an n× n unitary matrix U and an n × n upper triangular matrix T such that U∗ ∗ A ∗ U = T Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 54 / 68 Shur’s Theorem If A ∈ Cm×n is a complex m × n matrix, the conjugate-transpose of A is defined entrywise by A∗(i , j) := A(j , i) For complex matrices, this is the analogue of the transpose. A complex n × n matrix U is said to be unitary if U∗ = U−1, and Hermitian if U∗ = U (these are the complex analogues of orthogonal and symmetric, respectively). The proof of the following theorem involves definitions and methods developed in the next module; it is included here for completeness. The theorem will allow us to determine necessary and sufficient conditions for diagonalizability. Shur’s Theorem Let A be a complex n× n matrix. Then there is an n× n unitary matrix U and an n × n upper triangular matrix T such that U∗ ∗ A ∗ U = T Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 54 / 68 Shur’s Theorem If A ∈ Cm×n is a complex m × n matrix, the conjugate-transpose of A is defined entrywise by A∗(i , j) := A(j , i) For complex matrices, this is the analogue of the transpose. A complex n × n matrix U is said to be unitary if U∗ = U−1, and Hermitian if U∗ = U (these are the complex analogues of orthogonal and symmetric, respectively). The proof of the following theorem involves definitions and methods developed in the next module; it is included here for completeness. The theorem will allow us to determine necessary and sufficient conditions for diagonalizability. Shur’s Theorem Let A be a complex n× n matrix. 
Then there is an n× n unitary matrix U and an n × n upper triangular matrix T such that U∗ ∗ A ∗ U = T Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 54 / 68 Shur’s Theorem If A ∈ Cm×n is a complex m × n matrix, the conjugate-transpose of A is defined entrywise by A∗(i , j) := A(j , i) For complex matrices, this is the analogue of the transpose. A complex n × n matrix U is said to be unitary if U∗ = U−1, and Hermitian if U∗ = U (these are the complex analogues of orthogonal and symmetric, respectively). The proof of the following theorem involves definitions and methods developed in the next module; it is included here for completeness. The theorem will allow us to determine necessary and sufficient conditions for diagonalizability. Shur’s Theorem Let A be a complex n× n matrix. Then there is an n× n unitary matrix U and an n × n upper triangular matrix T such that U∗ ∗ A ∗ U = T Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 54 / 68 Shur’s Theorem The main corollary to Shur’s theorem is Theorem If A is an n × n Hermitian matrix, then A is diagonalizable (i.e., Cn has a basis consisting of eigenvectors of A). Moreover, all of the eigenvalues of A are real. As a special case, when A is a real n × n matrix, then it is Hermitian iff it is symmetric. Thus, viewing it as a complex matrix with purely real entries, we have Theorem If A is an n × n real symmetric matrix, then it is diagonalizable and all of its eigenvalues are real. Linear Algebra Module III: Eigenvalues and EigenvectorsSummer Term 2021 55 / 68 Shur’s Theorem The main corollary to Shur’s theorem is Theorem If A is an n × n Hermitian matrix, then A is diagonalizable (i.e., Cn has a basis consisting of eigenvectors of A). Moreover, all of the eigenvalues of A are real. As a special case, when A is a real n × n matrix, then it is Hermitian iff it is symmetric. 
Thus, viewing it as a complex matrix with purely real entries, we have

Theorem
If A is an n × n real symmetric matrix, then it is diagonalizable and all of its eigenvalues are real.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 55 / 68

Schur's Theorem

A Hermitian matrix is said to be positive definite if all of its eigenvalues are positive real numbers, and non-negative definite if all of its eigenvalues are non-negative.

Now suppose A is an arbitrary n × n complex matrix, and B = A∗ ∗ A. Then

B∗ = (A∗ ∗ A)∗ = A∗ ∗ (A∗)∗ = A∗ ∗ A = B

implying B is Hermitian.

Theorem
For any n × n matrix A, the matrix B = A∗ ∗ A is non-negative definite. Moreover, B is positive definite iff A is non-singular.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 56 / 68
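As a quick numerical illustration of the last two results (a NumPy sketch, not part of the slides; the matrix A is hypothetical random data), we can check that B = A∗ ∗ A is Hermitian, that its eigenvalues are real and non-negative, and that its eigenvectors form an orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary complex square matrix A (hypothetical example data).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# B = A* A is Hermitian by construction.
B = A.conj().T @ A
assert np.allclose(B, B.conj().T)

# eigh is NumPy's eigensolver for Hermitian matrices; it returns
# real eigenvalues and an orthonormal basis of eigenvectors.
evals, evecs = np.linalg.eigh(B)

# The eigenvalues are real (eigh guarantees this) and non-negative,
# i.e. B is non-negative definite.
assert np.all(evals >= -1e-10)

# The eigenvector matrix is unitary: U* U = I.
assert np.allclose(evecs.conj().T @ evecs, np.eye(4))
```

Since this A is generated from continuous random data it is non-singular with probability 1, so the eigenvalues here are in fact strictly positive.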
Normal Matrices

The identity U∗ ∗ A ∗ U = T can be expressed as saying that A is unitarily similar to the triangular matrix T (in other words, the similarity matrix is not just invertible, but unitary).

It is reasonable to ask whether or not a complex matrix A is unitarily similar to a diagonal matrix, or alternatively, whether or not C^n admits an orthonormal basis consisting of eigenvectors of A.

Definition
A is normal if A∗ ∗ A = A ∗ A∗.

Theorem
A is normal iff A is unitarily diagonalizable.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 57 / 68
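One direction of this theorem is easy to illustrate numerically. The sketch below (not from the slides; the unitary matrix and diagonal entries are hypothetical choices) builds A = U ∗ D ∗ U∗ with U unitary and D complex diagonal, then checks that A is normal and that conjugating back by U recovers D:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a unitary Q via the QR factorization of a random complex matrix,
# then form a normal matrix A = Q D Q* with D complex diagonal.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
D = np.diag([1 + 2j, -0.5j, 3.0])
A = Q @ D @ Q.conj().T

# Normality: A* A = A A*.
assert np.allclose(A.conj().T @ A, A @ A.conj().T)

# Unitary diagonalization: the columns of Q are an orthonormal basis of
# eigenvectors, and Q* A Q recovers the diagonal matrix D.
assert np.allclose(Q.conj().T @ A @ Q, D)
```

Note that this A is neither Hermitian nor unitary, yet it is still unitarily diagonalizable; normality is the exact condition.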
Generalized Eigenvectors

In this section we assume all vector spaces and matrices are complex. As we have already seen, the matrix

A = [ 1 1 ]
    [ 0 1 ]

is not diagonalizable. In fact, p_A(t) = (1 − t)^2, indicating that the single eigenvalue λ = 1 has algebraic multiplicity m_a(1) = 2, while

m_g(1) = Dim(E_1(A)) = Null(A − 1 · Id) = Null [ 0 1 ] = 1
                                               [ 0 0 ]

implying that A is defective. However this is not the end of the story.

Letting B = (A − 1 · Id), we see that B^2 = B ∗ B = 0_{2×2} is the zero matrix. Moreover, e1 = B ∗ e2, where E_1(A) = Span{e1}. We then see that e2 is not an eigenvector of A, but B ∗ e2 is.
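These concrete claims about the 2 × 2 example can be checked directly (a NumPy sketch, not part of the slides):

```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])
B = A - np.eye(2)          # B = A - 1*Id

e1 = np.array([1., 0.])
e2 = np.array([0., 1.])

# B^2 is the 2x2 zero matrix.
assert np.allclose(B @ B, np.zeros((2, 2)))

# e1 = B * e2, and e1 is an eigenvector of A for lambda = 1.
assert np.allclose(B @ e2, e1)
assert np.allclose(A @ e1, e1)

# e2 itself is not an eigenvector of A.
assert not np.allclose(A @ e2, e2)
```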
There is an inclusion

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 58 / 68

Generalized Eigenvectors

C ≅ E_1(A) = N(B) ⊂ N(B^2) = C^2

In this example, the vector e2 is referred to as a generalized eigenvector of the matrix A; it satisfies the property that the vector itself is not necessarily an eigenvector of A, but B^k ∗ e2 is for some k ≥ 1.

One other observation worth noting: in this example, the smallest exponent m of B satisfying the property N(B^m) = N(B^{m+1}) is m = m_a(1) = 2, the algebraic multiplicity of the eigenvalue λ = 1.
This is not an accident.

Theorem
If A is an n × n numerical matrix and λ is an eigenvalue of A, then

Null((A − λI)^{m_a(λ)}) = m_a(λ)

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 59 / 68
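For the running 2 × 2 example this theorem can be verified numerically via the rank-nullity theorem (a NumPy sketch, not part of the slides):

```python
import numpy as np

# For A = [[1,1],[0,1]], lambda = 1 has algebraic multiplicity m_a(1) = 2.
A = np.array([[1., 1.],
              [0., 1.]])
B = A - np.eye(2)

def nullity(M):
    # nullity(M) = number of columns minus rank(M), by rank-nullity.
    return M.shape[1] - np.linalg.matrix_rank(M)

# The eigenspace E_1(A) = N(B) is only 1-dimensional (A is defective) ...
assert nullity(B) == 1
# ... but N(B^{m_a(1)}) = N(B^2) already has dimension m_a(1) = 2.
assert nullity(B @ B) == 2
```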
Generalized Eigenvectors

Definition
The generalized eigenspace of λ (for the matrix A) is the space E^g_λ(A) := N((A − λI)^{m_a(λ)}). A non-zero element of E^g_λ(A) is referred to as a generalized eigenvector of A.

Letting E^k_λ(A) := N((A − λI)^k), we have a sequence of inclusions

E_λ(A) = E^1_λ(A) ⊂ E^2_λ(A) ⊂ · · · ⊂ E^{m_a(λ)}_λ(A) = E^g_λ(A)

Theorem
If λ1, λ2, . . . , λk are the distinct eigenvalues of an n × n matrix A, then

E^g_λ1(A) + E^g_λ2(A) + · · · + E^g_λk(A) = E^g_λ1(A) ⊕ E^g_λ2(A) ⊕ · · · ⊕ E^g_λk(A) = C^n

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 60 / 68

Generalized Eigenvectors

The generalized eigenvalue problem is to find a basis S^g_λ for each generalized eigenspace compatible with this filtration. This means that, for each k, the set of vectors of S^g_λ lying in E^k_λ(A) is a basis for that subspace. This turns out to be more involved than the earlier problem of finding a basis for E_λ(A) = E^1_λ(A), and an algorithm for finding such a basis will be deferred until Module IV. One thing that can often be done, however, is to find a Jordan chain. We will first need to define some terminology.

Definition - Rank of a generalized eigenvector
If v ∈ E^g_λ(A) is a generalized eigenvector of A, the rank of v is the unique integer m ≥ 1 for which

(A − λI)^m ∗ v = 0,  (A − λI)^{m−1} ∗ v ≠ 0.

By the above Theorem, such an m always exists. A generalized eigenvector of A, then, is an eigenvector of A iff its rank equals 1. For an eigenvalue λ of A, we will abbreviate (A − λI) as A_λ.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 61 / 68
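The rank of a generalized eigenvector can be computed by repeatedly applying A_λ until the vector is killed. The sketch below (not from the slides; the 3 × 3 matrix is a hypothetical single Jordan block with eigenvalue 2) does exactly that:

```python
import numpy as np

# A single 3x3 Jordan block with eigenvalue 2 (hypothetical example).
A = np.array([[2., 1., 0.],
              [0., 2., 1.],
              [0., 0., 2.]])
Alam = A - 2 * np.eye(3)   # A_lambda = A - lambda*I, here nilpotent

def rank_of_generalized_eigenvector(Alam, v, tol=1e-12):
    # Smallest m >= 1 with Alam^m v = 0 while Alam^{m-1} v != 0.
    # Assumes v lies in the generalized eigenspace, so the loop terminates.
    m, w = 0, v.copy()
    while np.linalg.norm(w) > tol:
        w = Alam @ w
        m += 1
    return m

e1, e2, e3 = np.eye(3)
assert rank_of_generalized_eigenvector(Alam, e1) == 1  # ordinary eigenvector
assert rank_of_generalized_eigenvector(Alam, e2) == 2
assert rank_of_generalized_eigenvector(Alam, e3) == 3
```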
Generalized Eigenvectors

Definition - Jordan Chains
Given a generalized eigenvector v_m of A of rank m, the Jordan chain associated to v_m is the sequence of vectors

J(v_m) := {v_m, v_{m−1}, v_{m−2}, . . . , v_1}  where  v_{m−i} := A_λ^i ∗ v_m

By definition of rank, it is easy to see that every vector in a Jordan chain must be non-zero.
In fact, more is true:

Theorem
If v_m is a generalized eigenvector of A of rank m (corresponding to the eigenvalue λ), then the Jordan chain J(v_m) corresponding to v_m consists of m linearly independent generalized eigenvectors.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 62 / 68

Generalized Eigenvectors

In this way, a rank m generalized eigenvector v_m of A (corresponding to the eigenvalue λ) will generate an m-dimensional subspace Span(J(v_m)) of the generalized eigenspace E^g_λ(A), with basis given by the Jordan chain J(v_m) associated with v_m.

Definition
A Jordan canonical basis for E^g_λ(A) is one that consists entirely of Jordan chains.
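A Jordan chain is straightforward to build in code. The NumPy sketch below (illustrative, not from the slides; it reuses a hypothetical 3 × 3 Jordan block with eigenvalue 2) constructs the chain generated by a rank-3 generalized eigenvector and checks the linear independence asserted by the theorem:

```python
import numpy as np

# A 3x3 Jordan block with eigenvalue 2; e3 has rank 3 for lambda = 2.
A = np.array([[2., 1., 0.],
              [0., 2., 1.],
              [0., 0., 2.]])
Alam = A - 2 * np.eye(3)

v3 = np.array([0., 0., 1.])
# Jordan chain J(v3) = {v3, Alam v3, Alam^2 v3}
chain = [v3, Alam @ v3, Alam @ (Alam @ v3)]

# Every vector in the chain is non-zero, and the chain is linearly
# independent: stacked as columns it has full rank 3, so here it is a
# Jordan canonical basis for the generalized eigenspace (all of C^3).
C = np.column_stack(chain)
assert all(np.linalg.norm(v) > 0 for v in chain)
assert np.linalg.matrix_rank(C) == 3

# The last vector in the chain is an ordinary eigenvector of A.
assert np.allclose(A @ chain[-1], 2 * chain[-1])
```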
Theorem
The generalized eigenspace E^g_λ(A) will always admit a Jordan canonical basis.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 63 / 68

Generalized Eigenvectors

Examples
Suppose λ is a non-defective eigenvalue of A; that is, one with mg(λ) = ma(λ). Then E^g_λ(A) = E_λ(A): every generalized eigenvector of A has rank 1. Therefore every Jordan chain has length 1, and the number of Jordan chains needed to form a basis for E^g_λ(A) is mg(λ) = ma(λ).

Examples
Let
A = [ 1 1 ]
    [ 0 1 ],
as above. Then λ = 1 is defective, as 1 = mg(1) < ma(1) = 2. We also saw that A1 ∗ e2 = e1, where e1 is an eigenvector of A corresponding to the eigenvalue λ = 1. Thus e2 is a generalized eigenvector of A of rank 2, and the Jordan chain {e2, e1} is a basis for E^g_1(A) = C² (in fact, it is the standard basis).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 64 / 68
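The multiplicities quoted for this example can be checked by rank computations. This is a sketch assuming NumPy, using mg(λ) = n − rank(A − λI) and the fact, noted earlier in the module, that the generalized eigenspace N((A − λI)^n) has dimension ma(λ).

```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])
lam, n = 1.0, 2
A1 = A - lam * np.eye(n)

# Geometric multiplicity: nullity of A - lam*I.
mg = n - np.linalg.matrix_rank(A1)
# Dimension of the generalized eigenspace N((A - lam*I)^n),
# which equals the algebraic multiplicity ma(lam).
ma = n - np.linalg.matrix_rank(np.linalg.matrix_power(A1, n))
# Here mg == 1 < ma == 2, confirming that lam = 1 is defective.
```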
Generalized Eigenvectors

Examples
Let
A = [ 2 1 0 ]
    [ 0 2 1 ]
    [ 0 0 2 ].
In this case, it is easy to see that the characteristic polynomial is pA(t) = (2 − t)³, so that the only eigenvalue is λ = 2. Also ma(2) = 3, and one can verify that mg(2) = 1. Thus there is a gap of two between the dimension of the generalized eigenspace E^g_2(A) = C³ and that of the regular eigenspace E_2(A). Note also that e1 is an eigenvector for A.

Can we find a Jordan chain which provides a basis for the generalized eigenspace E^g_2(A), which we have seen is all of C³? If a single Jordan chain is going to do the job, it must have length 3, and therefore be the Jordan chain associated to a generalized eigenvector of rank 3.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 65 / 68
Generalized Eigenvectors

Examples
Now
A2 = A − 2 Id = [ 0 1 0 ]
                [ 0 0 1 ]
                [ 0 0 0 ],

A2² = [ 0 0 1 ]
      [ 0 0 0 ]
      [ 0 0 0 ],

with A2³ = 0 (the 3 × 3 zero matrix). We are then looking for a vector v ∈ C³ with A2³ ∗ v = 0 (which is automatically the case), but with A2² ∗ v ≠ 0. We see that this last condition is satisfied iff the third coordinate of v is non-zero.

There is clearly a choice involved. The simplest choice here is to take v = v3 = e3. Then v2 = A2 ∗ v3 = e2, and v1 = A2 ∗ v2 = e1. So in this case we see

J(e3) = {e3, e2, e1}.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 66 / 68
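The computation on this slide can be replayed numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 2., 1.],
              [0., 0., 2.]])
A2 = A - 2 * np.eye(3)

# A2^3 = 0, so A2^3 * v = 0 holds for every v automatically.
assert np.allclose(np.linalg.matrix_power(A2, 3), 0)

# Any v with non-zero third coordinate has A2^2 * v != 0;
# the simplest choice is v3 = e3, giving the chain {e3, e2, e1}.
v3 = np.array([0., 0., 1.])
assert not np.allclose(A2 @ A2 @ v3, 0)
v2 = A2 @ v3      # e2
v1 = A2 @ v2      # e1, an ordinary eigenvector
```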
Generalized Eigenvectors

The previous examples were designed so that a Jordan chain is easy to find. The following is a bit more involved.

Examples
Let
A = [  1 −1  0 ]
    [ −1  4 −1 ]
    [ −4 13 −3 ].
Then pA(t) = (−t)(1 − t)². The eigenvalue λ = 0 has algebraic multiplicity 1, and therefore cannot be defective. The eigenvalue λ = 1 has ma(1) = 2, while mg(1) = 1.

Thus for λ = 0, E^g_0(A) = E_0(A) = N(A), with basis given by any non-zero vector of the nullspace of A. For λ = 1, we cannot have two linearly independent Jordan chains of length 1, because that would give mg(1) = 2. So we must have a single Jordan chain of length 2.

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 67 / 68
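The multiplicity claims for this matrix can be verified by the same rank computations as before; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[ 1., -1.,  0.],
              [-1.,  4., -1.],
              [-4., 13., -3.]])
A1 = A - np.eye(3)

# mg(1) = 3 - rank(A - I) = 1: only one Jordan chain for lam = 1.
mg = 3 - np.linalg.matrix_rank(A1)
# The nullity of A1^2 is already 2 = ma(1), so the generalized
# eigenspace for lam = 1 is N(A1^2) and the single chain has length 2.
dim_gen = 3 - np.linalg.matrix_rank(A1 @ A1)
```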
Generalized Eigenvectors

Examples
Now
A1² = [ 1 −3 1 ]
      [ 1 −3 1 ]
      [ 3 −9 3 ].
Inspection shows the vector
v2 = [ 1 ]
     [ 1 ]
     [ 2 ]
is in N(A1²) but not in N(A1). Letting
v1 = A1 ∗ v2 = [ −1 ]
               [  0 ]
               [  1 ]
yields a Jordan chain of length 2: J(v2) = {v2, v1}, which is then a basis for E^g_1(A).

Linear Algebra Module III: Eigenvalues and Eigenvectors Summer Term 2021 68 / 68
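The chain just found can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[ 1., -1.,  0.],
              [-1.,  4., -1.],
              [-4., 13., -3.]])
A1 = A - np.eye(3)

v2 = np.array([1., 1., 2.])
assert np.allclose(A1 @ A1 @ v2, 0)    # v2 lies in N(A1^2) ...
assert not np.allclose(A1 @ v2, 0)     # ... but not in N(A1): rank 2

v1 = A1 @ v2                            # the eigenvector (-1, 0, 1)
assert np.allclose(A @ v1, v1)          # A * v1 = v1, so v1 is in E_1(A)
```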