Sections 8.1-8.2
Week 5
Ali Mousavidehshikh
Department of Mathematics, University of Toronto
Outline
1 Sections 8.1-8.2
Two vectors u and v in R^n are said to be orthogonal if and only if u · v = 0.

A set of non-zero vectors {u_1, . . . , u_n} in R^n is called an orthogonal set if and only if u_i · u_j = 0 for all i ≠ j.

Every orthogonal set is linearly independent (MAT223).

Lemma: Let {f_1, . . . , f_m} be an orthogonal set in R^n. Given x ∈ R^n, write (where ∥f_i∥² = f_i · f_i)

f_{m+1} = x − Σ_{i=1}^{m} ((x · f_i)/∥f_i∥²) f_i.

Then,
(1) f_{m+1} · f_k = 0 for k = 1, . . . , m,
(2) If x ∉ span{f_1, . . . , f_m}, then f_{m+1} ≠ 0 and {f_1, . . . , f_m, f_{m+1}} is an orthogonal set.
Theorem: Let U be a subspace of R^n.
(1) Every orthogonal subset {f_1, . . . , f_m} of U is a subset of an orthogonal basis of U.
(2) U has an orthogonal basis.

Theorem (Gram-Schmidt Orthogonalization Algorithm): If {x_1, . . . , x_m} is any basis of a subspace U of R^n, construct f_1, . . . , f_m in U successively as follows:

f_1 = x_1,
f_2 = x_2 − ((x_2 · f_1)/∥f_1∥²) f_1,
f_3 = x_3 − ((x_3 · f_1)/∥f_1∥²) f_1 − ((x_3 · f_2)/∥f_2∥²) f_2, . . . ,
f_k = x_k − Σ_{i=1}^{k−1} ((x_k · f_i)/∥f_i∥²) f_i for each k = 2, 3, . . . , m.

Then,
(1) {f_1, . . . , f_m} is an orthogonal basis of U.
(2) span{f_1, . . . , f_k} = span{x_1, . . . , x_k} for each k = 1, . . . , m.
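As a sketch, the algorithm can be implemented directly from the formula for f_k (NumPy-based; the function name gram_schmidt is ours):

```python
import numpy as np

def gram_schmidt(xs):
    """Given a basis x_1, ..., x_m (the rows of xs), return the orthogonal
    basis f_1, ..., f_m with f_k = x_k - sum_i ((x_k . f_i)/||f_i||^2) f_i."""
    fs = []
    for x in np.asarray(xs, dtype=float):
        f = x.copy()
        for g in fs:
            f -= (x @ g) / (g @ g) * g   # subtract the component along each earlier f_i
        fs.append(f)
    return np.array(fs)

# Example basis {(1,1,0,1), (0,1,1,2)}:
F = gram_schmidt([[1, 1, 0, 1], [0, 1, 1, 2]])
print(F[1])   # → [-1.  0.  1.  1.]
```

By part (2) of the theorem, each prefix of the output spans the same subspace as the corresponding prefix of the input.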
Find an orthogonal basis for the row space of

[ 1  1  −1  −1 ]
[ 3  2   0   1 ]
[ 1  0   1   0 ].

The orthogonal complement of a subspace U of R^n is

U^⊥ = {x ∈ R^n : x · y = 0 for all y ∈ U}.

Lemma:
(1) U^⊥ is a subspace of R^n,
(2) {0}^⊥ = R^n and (R^n)^⊥ = {0},
(3) If U = span{x_1, x_2, . . . , x_k}, then U^⊥ = {x ∈ R^n : x · x_i = 0 for all i = 1, 2, . . . , k}.

Let U = span{(1, −1, 2, 0), (1, 0, −2, 3)} in R^4. Find U^⊥.
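Part (3) of the lemma says U^⊥ is the null space of the matrix whose rows span U, so it can be computed numerically; a sketch using NumPy's SVD (the helper name null_space_basis is ours):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Rows of the returned array form an orthonormal basis of {x : A x = 0}."""
    A = np.asarray(A, dtype=float)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:]   # right singular vectors belonging to zero singular values

# U = span{(1, -1, 2, 0), (1, 0, -2, 3)} in R^4, as in the example above:
B = null_space_basis([[1, -1, 2, 0], [1, 0, -2, 3]])
print(B.shape[0])      # dim U^⊥ = 4 - dim U = 2
```

Each row of B is orthogonal to both spanning vectors, so B spans U^⊥.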
Let U be a subspace of R^n with orthogonal basis {f_1, f_2, . . . , f_m}. If x ∈ R^n, the vector

proj_U(x) = Σ_{i=1}^{m} ((x · f_i)/∥f_i∥²) f_i

is called the orthogonal projection of x on U (draw a picture). For the zero subspace U = {0}, we define proj_U(x) = 0.

Projection Theorem: If U is a subspace of R^n and x ∈ R^n, write p = proj_U(x). Then
(1) p ∈ U and x − p ∈ U^⊥.
(2) p is the vector in U closest to x, in the sense that ∥x − p∥ < ∥x − y∥ for all y ∈ U with y ≠ p.
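The projection formula translates directly to code; a sketch assuming the rows of F form an orthogonal basis of U (the name proj_U is ours):

```python
import numpy as np

def proj_U(x, F):
    """proj_U(x) = sum_i ((x . f_i)/||f_i||^2) f_i for an orthogonal basis F."""
    x = np.asarray(x, dtype=float)
    return sum((x @ f) / (f @ f) * f for f in np.asarray(F, dtype=float))

# Project (1, 2, 3) onto the xy-plane, U = span{(1,0,0), (0,1,0)}:
p = proj_U([1, 2, 3], [[1, 0, 0], [0, 1, 0]])
print(p)   # → [1. 2. 0.]
```

Note the formula requires an orthogonal basis; for a general basis, run Gram-Schmidt first.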
Let U = span{x_1, x_2}, where x_1 = (1, 1, 0, 1) and x_2 = (0, 1, 1, 2). If x = (3, −1, 0, 2), find the vector in U closest to x, and express x as the sum of a vector in U and a vector orthogonal to U.

Solution. Notice that {x_1, x_2} is an independent set but not orthogonal. We use the Gram-Schmidt process to find an orthogonal basis {f_1, f_2}, where f_1 = x_1 = (1, 1, 0, 1) and f_2 = (−1, 0, 1, 1). Hence,

p = proj_U(x) = (4/3)f_1 + (−1/3)f_2 = (1/3)(5, 4, −1, 3).

So p is the vector in U closest to x, x − p = (1/3)(4, −7, 1, 3) (which is in U^⊥), and x = p + (x − p).

Theorem: Let U be a subspace of R^n. If we define T : R^n → R^n via T(x) = proj_U(x) for all x ∈ R^n, then
(1) T is a linear transformation,
(2) im(T) = U and ker(T) = U^⊥,
(3) dim U + dim U^⊥ = n.
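The arithmetic in the worked example can be double-checked numerically with NumPy:

```python
import numpy as np

# Orthogonal basis of U from Gram-Schmidt, and the point x from the example.
f1 = np.array([1., 1., 0., 1.])
f2 = np.array([-1., 0., 1., 1.])
x  = np.array([3., -1., 0., 2.])

# p = (x.f1/||f1||^2) f1 + (x.f2/||f2||^2) f2
p = (x @ f1) / (f1 @ f1) * f1 + (x @ f2) / (f2 @ f2) * f2
print(np.allclose(3 * p, [5, 4, -1, 3]))                      # p = (1/3)(5, 4, -1, 3) → True
print(abs((x - p) @ f1) < 1e-12, abs((x - p) @ f2) < 1e-12)   # x - p ∈ U^⊥ → True True
```

This confirms both parts of the Projection Theorem for this x: p ∈ U and x − p ⊥ U.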
Theorem: The following conditions are equivalent for an n × n matrix P.
(1) P is invertible and P^−1 = P^T,
(2) The rows of P are orthonormal,
(3) The columns of P are orthonormal.

Definition: An n × n matrix P is called an orthogonal matrix if it satisfies one (and hence all) of the conditions of the previous theorem.

The matrix

[ cos θ  −sin θ ]
[ sin θ   cos θ ]

is an orthogonal matrix.

If P and Q are orthogonal matrices, so are PQ and P^−1.
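These facts are easy to verify numerically; a sketch with NumPy at two arbitrarily chosen angles:

```python
import numpy as np

def rotation(theta):
    """The 2x2 rotation matrix [[cos θ, -sin θ], [sin θ, cos θ]]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

P, Q = rotation(0.7), rotation(-1.9)
I = np.eye(2)
print(np.allclose(P.T @ P, I))               # P^T P = I, so P is orthogonal → True
print(np.allclose((P @ Q).T @ (P @ Q), I))   # product of orthogonal matrices → True
print(np.allclose(np.linalg.inv(P), P.T))    # P^-1 = P^T → True
```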
Definition: An n × n matrix A is said to be orthogonally diagonalizable when an orthogonal matrix P can be found such that D = P^−1 A P = P^T A P is a diagonal matrix.

Principal Axis Theorem: The following conditions are equivalent for an n × n matrix A.
(1) A has an orthonormal set of n eigenvectors.
(2) A is orthogonally diagonalizable.
(3) A is symmetric.
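In practice, for a symmetric A, NumPy's numpy.linalg.eigh returns exactly the orthonormal eigenvectors the theorem promises; a small sketch on an assumed 2 × 2 example:

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])   # symmetric, so orthogonally diagonalizable
evals, P = np.linalg.eigh(A)         # columns of P are orthonormal eigenvectors

print(np.allclose(P.T @ P, np.eye(2)))           # P is orthogonal → True
print(np.allclose(P.T @ A @ P, np.diag(evals)))  # P^T A P = D is diagonal → True
```

For a non-symmetric matrix no such orthogonal P exists, by condition (3).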
Find an orthogonal matrix P such that P^−1 A P is diagonal, where

A = [  1  0  −1 ]
    [  0  1   2 ]
    [ −1  2   5 ].

Solution. The characteristic polynomial of A is C_A(x) = det(xI − A) = x(x − 1)(x − 6). Thus the eigenvalues are λ = 0, 1, and 6, with corresponding eigenvectors

x_1 = (1, −2, 1), x_2 = (2, 1, 0), x_3 = (−1, 2, 5),

respectively. Notice that these vectors are orthogonal. This is not the case for an arbitrary matrix, but it always holds for a symmetric matrix. (Note: for any matrix, eigenvectors corresponding to distinct eigenvalues are always independent.) Moreover,

∥x_1∥² = 6, ∥x_2∥² = 5, and ∥x_3∥² = 30,

so P = [x_1/√6  x_2/√5  x_3/√30] is an orthogonal matrix.
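A numerical check of this example (taking the (1,3) and (3,1) entries of A to be −1, so that A is symmetric, which is consistent with the listed eigenvectors):

```python
import numpy as np

A = np.array([[ 1., 0., -1.],
              [ 0., 1.,  2.],
              [-1., 2.,  5.]])
x1 = np.array([1., -2., 1.])
x2 = np.array([2.,  1., 0.])
x3 = np.array([-1., 2., 5.])

# Normalize the eigenvectors and place them as columns of P.
P = np.column_stack([x1 / np.sqrt(6), x2 / np.sqrt(5), x3 / np.sqrt(30)])

print(np.allclose(P.T @ P, np.eye(3)))                  # P is orthogonal → True
print(np.allclose(P.T @ A @ P, np.diag([0., 1., 6.])))  # D = diag(0, 1, 6) → True
```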
Thus P^−1 = P^T and P^−1 A P = D, where D is the diagonal matrix with entries D_11 = 0, D_22 = 1, and D_33 = 6 (the eigenvalues).

Theorem: If A is an n × n symmetric matrix, then (Ax) · y = x · (Ay).
Proof. (Ax) · y = (Ax)^T y = x^T A^T y = x^T A y = x · (Ay).

Theorem: If A is a symmetric matrix, then eigenvectors corresponding to distinct eigenvalues are orthogonal.
Proof. Let Ax = λx and Ay = μy, where λ ≠ μ. Then

λ(x · y) = (λx) · y = (Ax) · y = x · (Ay) = x · (μy) = μ(x · y).

Hence, (λ − μ)(x · y) = 0, and so x · y = 0.
Theorem: If A is an n × n matrix with n real eigenvalues, there exists an orthogonal matrix P such that P^T A P is upper triangular.

Corollary: If A is an n × n matrix with real eigenvalues λ_1, λ_2, . . . , λ_n (possibly not distinct), then det A = λ_1 λ_2 · · · λ_n and tr A = λ_1 + λ_2 + · · · + λ_n.
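A quick numerical check of the corollary on an assumed 2 × 2 symmetric matrix (symmetric matrices always have real eigenvalues):

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])   # eigenvalues 1 and 3, both real
evals = np.linalg.eigvals(A)

print(np.isclose(np.prod(evals), np.linalg.det(A)))  # det A = λ1 λ2 → True
print(np.isclose(np.sum(evals), np.trace(A)))        # tr A = λ1 + λ2 → True
```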