ECS130 Scientific Computation Handout D February 10, 2017

1. Let A ∈ Cn×n.

(a) A scalar λ is an eigenvalue of A and a nonzero vector x ∈ Cn is a corresponding
(right) eigenvector if

Ax = λx.

(b) A nonzero vector y is called a left eigenvector associated with the eigenvalue λ if

yHA = λyH .

(c) The set of all eigenvalues of A, denoted as λ(A), is called the spectrum of A.

(d) The characteristic polynomial of A is the polynomial of degree n defined by

p(λ) = det(λI −A).
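For a concrete feel for these definitions, here is a minimal Python/NumPy sketch (NumPy is an assumption; the handout does not prescribe any software). It computes the spectrum of a small matrix, checks Ax = λx for a right eigenpair, checks yHA = λyH for a left eigenpair, and prints the coefficients of the characteristic polynomial.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [4.0, 3.0]])

    # Right eigenpairs: A x = lambda x for each column x of X
    lam, X = np.linalg.eig(A)
    print("lambda(A) =", lam)

    x = X[:, 0]
    print(np.allclose(A @ x, lam[0] * x))           # A x = lambda x

    # A left eigenvector y (y^H A = lambda y^H) is a right eigenvector of A^H:
    # if A^H y = mu y, then y^H A = conj(mu) y^H.
    mu, Y = np.linalg.eig(A.conj().T)
    y = Y[:, 0]
    print(np.allclose(y.conj() @ A, np.conj(mu[0]) * y.conj()))

    # Coefficients of p(lambda) = det(lambda I - A):  lambda^2 - 4 lambda - 5
    print(np.poly(A))                               # [ 1. -4. -5.]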

2. The following properties follow straightforwardly from the above definitions:

(a) λ is A’s eigenvalue ⇔ λI −A is singular ⇔ det(λI −A) = 0 ⇔ p(λ) = 0.
(b) There is at least one eigenvector x associated with A’s eigenvalue λ.

(c) Suppose A is real. λ is A’s eigenvalue ⇔ conjugate λ̄ is also A’s eigenvalue.
(d) A is singular ⇔ 0 is A’s eigenvalue.
(e) If A is upper (or lower) triangular, then its eigenvalues consist of its diagonal entries.

(Question: what if A is a block upper (or lower) triangular matrix?)
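As a quick numerical check of properties (d) and (e), the following sketch (again assuming NumPy) verifies that a singular matrix has 0 in its spectrum and that the eigenvalues of a triangular matrix are its diagonal entries.

    import numpy as np

    # (d) a singular matrix has 0 as an eigenvalue
    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])             # rank 1, hence singular
    print(np.linalg.eigvals(S))            # contains 0 (up to roundoff)

    # (e) eigenvalues of a triangular matrix are its diagonal entries
    T = np.array([[3.0, 1.0,  5.0],
                  [0.0, 7.0,  2.0],
                  [0.0, 0.0, -1.0]])
    print(np.sort(np.linalg.eigvals(T)))   # -1, 3, 7
    print(np.sort(np.diag(T)))             # same values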

3. Schur decomposition.

Let A be of order n. Then there is an n×n unitary matrix U (i.e., UHU = I) such that

A = UTUH ,

where T is upper triangular and the diagonal elements of T are the eigenvalues of A.

By appropriate choice of U , the eigenvalues of A, which are the diagonal elements of T , may be
made to appear in any order.
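In practice a Schur decomposition can be computed with scipy.linalg.schur; the sketch below (an illustration under the assumption that SciPy is available, not part of the handout) verifies A = UTUH and reads the eigenvalues off the diagonal of T. The complex form is requested so that T is genuinely upper triangular even when a real matrix has complex eigenvalues; the sort keyword of schur can be used to influence the order in which eigenvalues appear on the diagonal.

    import numpy as np
    from scipy.linalg import schur

    A = np.array([[1.0, 2.0],
                  [4.0, 3.0]])

    # Complex Schur form: T upper triangular, U unitary
    T, U = schur(A, output='complex')

    print(np.allclose(U.conj().T @ U, np.eye(2)))   # U^H U = I
    print(np.allclose(U @ T @ U.conj().T, A))       # A = U T U^H
    print(np.diag(T))                               # eigenvalues of A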

4. A ∈ Cn×n is simple if it has n linearly independent eigenvectors; otherwise it is defective.
Examples.

(a) I and any diagonal matrix are simple: e1, e2, . . . , en are n linearly independent eigenvectors.

(b) A =
    ( 1  2 )
    ( 4  3 )
is simple. It has two distinct eigenvalues, −1 and 5, and two linearly independent eigenvectors:
    (1/2)(−1, 1)T and (1/5)(1, 2)T.

(c) If A ∈ Cn×n has n distinct eigenvalues, then A is simple.

(d) A =
    ( 2  1 )
    ( 0  2 )
is defective. It has a repeated eigenvalue 2 (of multiplicity two), but only one linearly
independent eigenvector e1 = (1, 0)T.
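One rough way to probe simplicity numerically is to test whether the eigenvector matrix returned by eig has full rank. This is only a heuristic sketch (in floating point the test is fragile for nearly defective matrices), but it behaves as expected on examples (b) and (d):

    import numpy as np

    def looks_simple(A, tol=1e-8):
        """Heuristic: does A appear to have n linearly independent eigenvectors?"""
        _, X = np.linalg.eig(A)
        return np.linalg.matrix_rank(X, tol=tol) == A.shape[0]

    A_simple = np.array([[1.0, 2.0],
                         [4.0, 3.0]])   # eigenvalues -1 and 5
    A_defect = np.array([[2.0, 1.0],
                         [0.0, 2.0]])   # repeated eigenvalue 2, one eigenvector

    print(looks_simple(A_simple))       # True
    print(looks_simple(A_defect))       # False: eig returns two nearly parallel columns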


5. Eigenvalue decomposition

A ∈ Cn×n is simple if and only if there exists a nonsingular matrix X ∈ Cn×n such
that

A = XΛX−1,

where Λ = diag(λ1, λ2, . . . , λn). In this case, the λi are the eigenvalues of A, the columns
of X are the corresponding eigenvectors, and A is called diagonalizable.
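For a simple matrix the decomposition can be assembled directly from the output of numpy.linalg.eig, as in this minimal sketch (assuming NumPy):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [4.0, 3.0]])

    lam, X = np.linalg.eig(A)              # eigenvalues and eigenvector matrix
    Lambda = np.diag(lam)

    # A is simple, so X is nonsingular and A = X Lambda X^{-1}
    print(np.allclose(X @ Lambda @ np.linalg.inv(X), A))    # True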

6. An invariant subspace of A is a subspace V of Cn, with the property that

v ∈ V implies that Av ∈ V.

We also write this as AV ⊆ V.
Examples.

(a) The simplest, one-dimensional invariant subspace is the set span(x) of all scalar multiples of
an eigenvector x.

(b) Let x1, x2, . . . , xm be any set of linearly independent eigenvectors with eigenvalues λ1, λ2, . . . , λm. Then
X = span({x1, x2, . . . , xm}) is an invariant subspace.

Proposition. Let A be n-by-n, let V = [v1, v2, . . . , vm] be any n-by-m matrix with
linearly independent columns, and let V = span(V ), the m-dimensional space spanned
by the columns of V . Then V is an invariant subspace if and only if there is an m-by-m
matrix B such that

AV = V B.

In this case, the m eigenvalues of B are also eigenvalues of A.
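The proposition can be checked numerically. The sketch below (assuming NumPy) builds V from two eigenvectors of a random matrix, recovers B by solving the least-squares problem V B ≈ AV (which is exact here, since the span really is invariant), and confirms that the eigenvalues of B appear among those of A.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))

    lam, X = np.linalg.eig(A)
    V = X[:, :2]                           # span of two eigenvectors: invariant

    # Recover B from A V = V B (least squares; exact for an invariant subspace)
    B, *_ = np.linalg.lstsq(V, A @ V, rcond=None)

    print(np.allclose(A @ V, V @ B))               # A V = V B
    print(np.sort_complex(np.linalg.eigvals(B)))   # two of the values below
    print(np.sort_complex(lam))                    # lambda(A)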

7. Two n × n matrices A and B are similar if there is an n × n non-singular matrix P such that
B = P−1AP . We also say A is similar to B, and likewise B is similar to A; P is a similarity
transformation. A is unitarily similar to B if P is unitary.

Proposition. Suppose that A and B are similar: B = P−1AP .

(a) A and B have the same eigenvalues. In fact pA(λ) ≡ pB(λ).
(b) Ax = λx ⇒ B(P−1x) = λ(P−1x).
(c) Bw = λw ⇒ A(Pw) = λ(Pw).
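A quick check of this proposition (assuming NumPy): for a random similarity transformation P, the matrices A and B = P−1AP have the same eigenvalues, and P maps eigenvectors of B to eigenvectors of A.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    P = rng.standard_normal((3, 3))        # generically nonsingular

    B = np.linalg.inv(P) @ A @ P           # B = P^{-1} A P

    # (a) same spectrum (sorting pairs the eigenvalues; fine for a generic example)
    print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                      np.sort_complex(np.linalg.eigvals(B))))

    # (c) B w = lambda w  =>  A (P w) = lambda (P w)
    mu, W = np.linalg.eig(B)
    w = W[:, 0]
    print(np.allclose(A @ (P @ w), mu[0] * (P @ w)))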
