Linear Algebra in Twenty Five Lectures
Tom Denton and Andrew Waldron March 27, 2012
Edited by Katrina Glaeser, Rohit Thomas & Travis Scrimshaw
Contents

1 What is Linear Algebra?
2 Gaussian Elimination
  2.1 Notation for Linear Systems
  2.2 Reduced Row Echelon Form
3 Elementary Row Operations
4 Solution Sets for Systems of Linear Equations
  4.1 Non-Leading Variables
5 Vectors in Space, n-Vectors
  5.1 Directions and Magnitudes
6 Vector Spaces
7 Linear Transformations
8 Matrices
9 Properties of Matrices
  9.1 Block Matrices
  9.2 The Algebra of Square Matrices
10 Inverse Matrix
  10.1 Three Properties of the Inverse
  10.2 Finding Inverses
  10.3 Linear Systems and Inverses
  10.4 Homogeneous Systems
  10.5 Bit Matrices
11 LU Decomposition
  11.1 Using LU Decomposition to Solve Linear Systems
  11.2 Finding an LU Decomposition
  11.3 Block LDU Decomposition
12 Elementary Matrices and Determinants
  12.1 Permutations
  12.2 Elementary Matrices
13 Elementary Matrices and Determinants II
14 Properties of the Determinant
  14.1 Determinant of the Inverse
  14.2 Adjoint of a Matrix
  14.3 Application: Volume of a Parallelepiped
15 Subspaces and Spanning Sets
  15.1 Subspaces
  15.2 Building Subspaces
16 Linear Independence
17 Basis and Dimension
  17.1 Bases in Rn
18 Eigenvalues and Eigenvectors
  18.1 Matrix of a Linear Transformation
  18.2 Invariant Directions
19 Eigenvalues and Eigenvectors II
  19.1 Eigenspaces
20 Diagonalization
  20.1 Diagonalization
  20.2 Change of Basis
21 Orthonormal Bases
  21.1 Relating Orthonormal Bases
22 Gram-Schmidt and Orthogonal Complements
  22.1 Orthogonal Complements
23 Diagonalizing Symmetric Matrices
24 Kernel, Range, Nullity, Rank
  24.1 Summary
25 Least Squares

A Sample Midterm I Problems and Solutions
B Sample Midterm II Problems and Solutions
C Sample Final Problems and Solutions
D Points Vs. Vectors
E Abstract Concepts
  E.1 Dual Spaces
  E.2 Groups
  E.3 Fields
  E.4 Rings
  E.5 Algebras
F Sine and Cosine as an Orthonormal Basis
G Movie Scripts
  G.1 Introductory Video
  G.2 What is Linear Algebra: Overview
  G.3 What is Linear Algebra: 3×3 Matrix Example
  G.4 What is Linear Algebra: Hint
  G.5 Gaussian Elimination: Augmented Matrix Notation
  G.6 Gaussian Elimination: Equivalence of Augmented Matrices
  G.7 Gaussian Elimination: Hints for Review Questions 4 and 5
  G.8 Gaussian Elimination: 3×3 Example
  G.9 Elementary Row Operations: Example
  G.10 Elementary Row Operations: Worked Examples
  G.11 Elementary Row Operations: Explanation of Proof for Theorem 3.1
  G.12 Elementary Row Operations: Hint for Review Question 3
  G.13 Solution Sets for Systems of Linear Equations: Planes
  G.14 Solution Sets for Systems of Linear Equations: Pictures and Explanation
  G.15 Solution Sets for Systems of Linear Equations: Example
  G.16 Solution Sets for Systems of Linear Equations: Hint
  G.17 Vectors in Space, n-Vectors: Overview
  G.18 Vectors in Space, n-Vectors: Review of Parametric Notation
  G.19 Vectors in Space, n-Vectors: The Story of Your Life
  G.20 Vector Spaces: Examples of Each Rule
  G.21 Vector Spaces: Example of a Vector Space
  G.22 Vector Spaces: Hint
  G.23 Linear Transformations: A Linear and A Non-Linear Example
  G.24 Linear Transformations: Derivative and Integral of (Real) Polynomials of Degree at Most 3
  G.25 Linear Transformations: Linear Transformations Hint
  G.26 Matrices: Adjacency Matrix Example
  G.27 Matrices: Do Matrices Commute?
  G.28 Matrices: Hint for Review Question 4
  G.29 Matrices: Hint for Review Question 5
  G.30 Properties of Matrices: Matrix Exponential Example
  G.31 Properties of Matrices: Explanation of the Proof
  G.32 Properties of Matrices: A Closer Look at the Trace Function
  G.33 Properties of Matrices: Matrix Exponent Hint
  G.34 Inverse Matrix: A 2×2 Example
  G.35 Inverse Matrix: Hints for Problem 3
  G.36 Inverse Matrix: Left and Right Inverses
  G.37 LU Decomposition: Example: How to Use LU Decomposition
  G.38 LU Decomposition: Worked Example
  G.39 LU Decomposition: Block LDU Explanation
  G.40 Elementary Matrices and Determinants: Permutations
  G.41 Elementary Matrices and Determinants: Some Ideas Explained
  G.42 Elementary Matrices and Determinants: Hints for Problem 4
  G.43 Elementary Matrices and Determinants II: Elementary Determinants
  G.44 Elementary Matrices and Determinants II: Determinants and Inverses
  G.45 Elementary Matrices and Determinants II: Product of Determinants
  G.46 Properties of the Determinant: Practice taking Determinants
  G.47 Properties of the Determinant: The Adjoint Matrix
  G.48 Properties of the Determinant: Hint for Problem 3
  G.49 Subspaces and Spanning Sets: Worked Example
  G.50 Subspaces and Spanning Sets: Hint for Problem 2
  G.51 Subspaces and Spanning Sets: Hint
  G.52 Linear Independence: Worked Example
  G.53 Linear Independence: Proof of Theorem 16.1
  G.54 Linear Independence: Hint for Problem 1
  G.55 Basis and Dimension: Proof of Theorem
  G.56 Basis and Dimension: Worked Example
  G.57 Basis and Dimension: Hint for Problem 2
  G.58 Eigenvalues and Eigenvectors: Worked Example
  G.59 Eigenvalues and Eigenvectors: 2×2 Example
  G.60 Eigenvalues and Eigenvectors: Jordan Cells
  G.61 Eigenvalues and Eigenvectors II: Eigenvalues
  G.62 Eigenvalues and Eigenvectors II: Eigenspaces
  G.63 Eigenvalues and Eigenvectors II: Hint
  G.64 Diagonalization: Derivative Is Not Diagonalizable
  G.65 Diagonalization: Change of Basis Example
  G.66 Diagonalization: Diagonalizing Example
  G.67 Orthonormal Bases: Sine and Cosine Form All Orthonormal Bases for R2
  G.68 Orthonormal Bases: Hint for Question 2, Lecture 21
  G.69 Orthonormal Bases: Hint
  G.70 Gram-Schmidt and Orthogonal Complements: 4×4 Gram-Schmidt Example
  G.71 Gram-Schmidt and Orthogonal Complements: Overview
  G.72 Gram-Schmidt and Orthogonal Complements: QR Decomposition Example
  G.73 Gram-Schmidt and Orthogonal Complements: Hint for Problem 1
  G.74 Diagonalizing Symmetric Matrices: 3×3 Example
  G.75 Diagonalizing Symmetric Matrices: Hints for Problem 1
  G.76 Kernel, Range, Nullity, Rank: Invertibility Conditions
  G.77 Kernel, Range, Nullity, Rank: Hint for 1
  G.78 Least Squares: Hint for Problem 1
  G.79 Least Squares: Hint for Problem 2
H Student Contributions
I Other Resources
J List of Symbols
Index
Preface
These linear algebra lecture notes are designed to be presented as twenty-five fifty-minute lectures suitable for sophomores likely to use the material for applications but still requiring a solid foundation in this fundamental branch of mathematics. The main idea of the course is to emphasize the concepts of vector spaces and linear transformations as mathematical structures that can be used to model the world around us. Once “persuaded” of this truth, students learn explicit skills such as Gaussian elimination and diagonalization so that vectors and linear transformations become calculational tools, rather than abstract mathematics.
In practical terms, the course aims to produce students who can perform computations with large linear systems while at the same time understanding the concepts behind these techniques. Oftentimes, when a problem can be reduced to one of linear algebra, it is “solved”. These notes do not devote much space to applications (there are already a plethora of textbooks with titles involving some permutation of the words “linear”, “algebra” and “applications”). Instead, they attempt to explain the fundamental concepts carefully enough that students will recognize for themselves when the particular application they encounter in future studies is ripe for a solution via linear algebra.
There are relatively few worked examples or illustrations in these notes; this material is instead covered by a series of “linear algebra how-to videos”.
They can be viewed by clicking on the take one icon. The “scripts” for these movies are found at the end of the notes if students prefer to read this material in a traditional format, and can be easily reached via the script icon. Watch an introductory video below:
Introductory Video
The notes are designed to be used in conjunction with a set of online homework exercises which help the students read the lecture notes and learn basic linear algebra skills. Interspersed among the lecture notes are links to simple online problems that test whether students are actively reading the notes. In addition there are two sets of sample midterm problems with solutions as well as a sample final exam. There are also a set of ten on- line assignments which are usually collected weekly. The first assignment
is designed to ensure familiarity with some basic mathematical notions (sets, functions, logical quantifiers and basic methods of proof). The remaining nine assignments are devoted to the usual matrix and vector gymnastics expected from any sophomore linear algebra class. These exercises are all available at
http://webwork.math.ucdavis.edu/webwork2/MAT22A-Waldron- Winter-2012/
Webwork is an open source, online homework system which originated at the University of Rochester. It can efficiently check whether a student has answered an explicit, typically computation-based, problem correctly. The problem sets chosen to accompany these notes could contribute roughly 20% of a student’s grade, and ensure that basic computational skills are mastered. Most students rapidly realize that it is best to print out the Webwork assignments and solve them on paper before entering the answers online. Those who do not tend to fare poorly on midterm examinations. We have found that there tend to be relatively few questions from students in office hours about the Webwork assignments. Instead, by assigning 20% of the grade to written assignments drawn from problems chosen randomly from the review exercises at the end of each lecture, the students’ focus was primarily on understanding ideas. These problems range from simple tests of understanding of the material in the lectures to more difficult ones, all of which require thinking rather than blind application of mathematical “recipes”. Office hour questions reflected this and offered an excellent chance to give students tips on how to present written answers in a way that would convince the person grading their work that they deserved full credit!
Each lecture concludes with references to the comprehensive online textbooks of Jim Hefferon and Rob Beezer:
http://joshua.smcvt.edu/linearalgebra/ http://linear.ups.edu/index.html
and the notes are also hyperlinked to Wikipedia where students can rapidly access further details and background material for many of the concepts. Videos of linear algebra lectures are available online from at least two sources:
The Khan Academy, http://www.khanacademy.org/?video#Linear Algebra
MIT OpenCourseWare, Professor Gilbert Strang, http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring -2010/video-lectures/
There is also an array of useful commercially available texts. A non-exhaustive list includes
“Introductory Linear Algebra, An Applied First Course”, B. Kolman and D. Hill, Pearson 2001.
“Linear Algebra and Its Applications”, David C. Lay, Addison–Wesley 2011.
“Introduction to Linear Algebra”, Gilbert Strang, Wellesley Cambridge Press 2009.
“Linear Algebra Done Right”, S. Axler, Springer 1997.
“Algebra and Geometry”, D. Holten and J. Lloyd, CBRC, 1978.
“Schaum’s Outline of Linear Algebra”, S. Lipschutz and M. Lipson, McGraw-Hill 2008.
A good strategy is to find your favorite among these in the University Library. There are many, many useful online math resources. A partial list is given
in Appendix I.
Students have also started contributing to these notes. Click here to see
some of their work.
There are many “cartoon” type images for the important theorems and
formulæ. In a classroom with a projector, a useful technique for instructors is to project these using a computer. They provide a colorful relief for students from (often illegible) scribbles on a blackboard. These can be downloaded at:
Lecture Materials
There are still many errors in the notes, as well as awkwardly explained concepts. An army of 400 students, Fu Liu, Stephen Pon and Gerry Puckett have already found many of them. Rohit Thomas has spent a great deal of time editing these notes and the accompanying webworks and has improved them immeasurably. Katrina Glaeser and Travis Scrimshaw have spent many
hours shooting and scripting the how-to videos and taken these notes to a whole new level! Anne Schilling shot a great guest video. We also thank Captain Conundrum for providing us his solutions to the sample midterm and final questions. The review exercises would provide a better survey of what linear algebra really is if there were more “applied” questions. We welcome your contributions!
Andrew and Tom.
©2009 by the authors. These lecture notes may be reproduced in their entirety for non-commercial purposes.
1 What is Linear Algebra?
Video Overview
Three bears go into a cave, two come out. Would you go in? Brian Butterworth
Numbers are highly useful tools for surviving in the modern world, so much so that we often introduce abstract pronumerals to represent them:
n bears go into a cave, n−1 come out. Would you go in?
A single number alone is not sufficient to model more complicated real world situations. For example, suppose I asked everybody in this room to rate the likeability of everybody else on a scale from 1 to 10. In a room full of n people (or bears sic) there would be n2 ratings to keep track of (how much Jill likes Jill, how much does Jill like Andrew, how much does Andrew like Jill, how much does Andrew like Andrew, etcetera). We could arrange these in a square array
\begin{pmatrix} 9 & 4 & \cdots \\ 10 & 6 & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}
Would it make sense to replace such an array by an abstract symbol M? In the case of numbers, the pronumeral n was more than a placeholder for a particular piece of information; there exists a myriad of mathematical oper- ations (addition, subtraction, multiplication,…) that can be performed with the symbol n that could provide useful information about the real world sys- tem at hand. The array M is often called a matrix and is an example of a
more general abstract structure called a linear transformation on which many mathematical operations can also be defined. (To understand why having an abstract theory of linear transformations might be incredibly useful and even lucrative, try replacing “likeability ratings” with the number of times inter- net websites link to one another!) In this course, we’ll learn about three main topics: Linear Systems, Vector Spaces, and Linear Transformations. Along the way we’ll learn about matrices and how to manipulate them.
For now, we’ll illustrate some of the basic ideas of the course in the case of two by two matrices. Everything will be carefully defined later; we just want to start with some simple examples to get an idea of the things we’ll be working with.
Example Suppose I have a bunch of apples and oranges. Let x be the number of apples I have, and y be the number of oranges I have. As everyone knows, apples and oranges don’t mix, so if I want to keep track of the number of apples and oranges I have, I should put them in a list. We’ll call this list a vector, and write it like this: (x,y). The order here matters! I should remember to always write the number of apples first and then the number of oranges – otherwise if I see the vector (1,2), I won’t know whether I have two apples or two oranges.
This vector in the example is just a list of two numbers, so if we want to, we can represent it with a point in the plane with the corresponding coordinates, like so:
(Figure: the vector (x, y) drawn as a point in the plane, with Apples along the horizontal axis and Oranges along the vertical axis.)
In the plane, we can imagine each point as some combination of apples and oranges (or parts thereof, for the points that don’t have integer coordinates). Then each point corresponds to some vector. The collection of all such vectors—all the points in our apple-orange plane—is an example of a vector space.
Example There are 27 pieces of fruit in a barrel, and twice as many oranges as apples. How many apples and oranges are in the barrel?
How to solve this conundrum? We can re-write the question mathematically as follows:
x + y = 27
y = 2x
This is an example of a Linear System. It’s a collection of equations in which variables are multiplied by constants and summed, and no variables are multiplied together: There are no powers of x or y greater than one, no fractional or negative powers of x or y, and no places where x and y are multiplied together.
Reading homework: problem 1.1
Notice that we can solve the system by manipulating the equations involved. First, notice that the second equation is the same as −2x + y = 0. Then if you subtract the second equation from the first, you get on the left side x + y − (−2x + y) = 3x, and on the right side you get 27 − 0 = 27. Then
3x = 27, so we learn that x = 9. Using the second equation, we then see that y = 18. Then there are 9 apples and 18 oranges.
Let’s do it again, by working with the list of equations as an object in itself. First we rewrite the equations tidily:
x + y = 27
2x − y = 0
We can express this set of equations with a matrix as follows:
\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 27 \\ 0 \end{pmatrix}
The square list of numbers is an example of a matrix. We can multiply the matrix by the vector to get back the linear system using the following rule for multiplying matrices by vectors:
\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} \qquad (1)
Reading homework: problem 1.2
A 3×3 matrix example
The matrix is an example of a Linear Transformation, because it takes one vector and turns it into another in a “linear” way.
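If you would like to experiment with rule (1) on a computer, here is a small Python sketch; it is only an illustration (the function name matrix_times_vector is our own choice), but it applies the rule exactly and confirms that 9 apples and 18 oranges satisfy the fruit system.

```python
def matrix_times_vector(M, v):
    """Apply rule (1): multiply the 2x2 matrix M = [[a, b], [c, d]] by v = [x, y]."""
    a, b = M[0]
    c, d = M[1]
    x, y = v
    return [a * x + b * y, c * x + d * y]

M = [[1, 1], [2, -1]]                    # matrix of the fruit system
print(matrix_times_vector(M, [9, 18]))   # [27, 0] -- the right-hand side of the system
```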
Our next task is to solve linear systems. We’ll learn a general method called Gaussian Elimination.
References
Hefferon, Chapter One, Section 1
Beezer, Chapter SLE, Sections WILA and SSLE
Wikipedia, Systems of Linear Equations
Review Problems
1. Let M be a matrix and u and v vectors:
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad v = \begin{pmatrix} x \\ y \end{pmatrix}, \quad u = \begin{pmatrix} w \\ z \end{pmatrix}.
(a) Propose a definition for u + v.
(b) Check that your definition obeys Mv + Mu = M(u + v). 2. Matrix Multiplication: Let M and N be matrices
and v a vector:
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad N = \begin{pmatrix} e & f \\ g & h \end{pmatrix}, \quad v = \begin{pmatrix} x \\ y \end{pmatrix}.
Compute the vector Nv using the rule given above. Now multiply this
vector by the matrix M, i.e., compute the vector M(Nv).
Next recall that multiplication of ordinary numbers is associative, namely the order of brackets does not matter: (xy)z = x(yz). Let us try to demand the same property for matrices and vectors, that is
M(Nv) = (MN)v .
We need to be careful reading this equation because Nv is a vector and so is M(Nv). Therefore the right hand side, (MN)v should also be a vector. This means that MN must be a matrix; in fact it is the matrix obtained by multiplying the matrices M and N. Use your result for M(Nv) to find the matrix MN.
3. Pablo is a nutritionist who knows that oranges always have twice as much sugar as apples. When considering the sugar intake of schoolchil- dren eating a barrel of fruit, he represents the barrel like so:
(Figure: the barrel represented as the point (s, f) in a plane, with sugar along the horizontal axis and fruit along the vertical axis.)
Find a linear transformation relating Pablo’s representation to the one in the lecture. Write your answer as a matrix.
Hint: Let λ represent the amount of sugar in each apple.
Hint
4. There are methods for solving linear systems other than Gauss’ method. One often taught in high school is to solve one of the equations for a variable, then substitute the resulting expression into other equations. That step is repeated until there is an equation with only one vari- able. From that, the first number in the solution is derived, and then back-substitution can be done. This method takes longer than Gauss’ method, since it involves more arithmetic operations, and is also more likely to lead to errors. To illustrate how it can lead to wrong conclu- sions, we will use the system
x + 3y = 1
2x + y = −3
2x + 2y = 0
(a) Solve the first equation for x and substitute that expression into the second equation. Find the resulting y.
(b) Again solve the first equation for x, but this time substitute that expression into the third equation. Find this y.
What extra step must a user of this method take to avoid erroneously concluding a system has a solution?
2 Gaussian Elimination
2.1 Notation for Linear Systems
In Lecture 1 we studied the linear system
x + y = 27
2x − y = 0
and found that
x = 9, y = 18.
We learned to write the linear system using a matrix and two vectors like so:
\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 27 \\ 0 \end{pmatrix}
Likewise, we can write the solution as:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 9 \\ 18 \end{pmatrix}
The matrix I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} is called the Identity Matrix. You can check that if v is any vector, then Iv = v.
A useful shorthand for a linear system is an Augmented Matrix, which looks like this for the linear system we’ve been dealing with:
\left(\begin{array}{cc|c} 1 & 1 & 27 \\ 2 & -1 & 0 \end{array}\right)
We don’t bother writing the vector \begin{pmatrix} x \\ y \end{pmatrix}, since it will show up in any linear system we deal with. The solution to the linear system looks like this:
\left(\begin{array}{cc|c} 1 & 0 & 9 \\ 0 & 1 & 18 \end{array}\right)
Augmented Matrix Notation
Here’s another example of an augmented matrix, for a linear system with three equations and four unknowns:
\left(\begin{array}{cccc|c} 1 & 3 & 2 & 0 & 9 \\ 6 & 2 & 0 & -2 & 0 \\ -1 & 0 & 1 & 1 & 3 \end{array}\right)
And finally, here’s the general case. The number of equations in the linear system is the number of rows r in the augmented matrix, and the number of columns k in the matrix left of the vertical line is the number of unknowns.
\left(\begin{array}{cccc|c} a^1_1 & a^1_2 & \cdots & a^1_k & b^1 \\ a^2_1 & a^2_2 & \cdots & a^2_k & b^2 \\ \vdots & \vdots & & \vdots & \vdots \\ a^r_1 & a^r_2 & \cdots & a^r_k & b^r \end{array}\right)
Reading homework: problem 2.1
Here’s the idea: Gaussian Elimination is a set of rules for taking a gen- eral augmented matrix and turning it into a very simple augmented matrix consisting of the identity matrix on the left and a bunch of numbers (the solution) on the right.
Equivalence Relations for Linear Systems
Equivalence Example
It often happens that two mathematical objects will appear to be different but in fact are exactly the same. The best-known example of this is fractions. For example, the fractions 1/2 and 6/12 describe the same number.
We could certainly call the two fractions equivalent.
In our running example, we’ve noticed that the two augmented matrices
\left(\begin{array}{cc|c} 1 & 1 & 27 \\ 2 & -1 & 0 \end{array}\right), \qquad \left(\begin{array}{cc|c} 1 & 0 & 9 \\ 0 & 1 & 18 \end{array}\right)
both contain the same information: x = 9, y = 18.
Two augmented matrices corresponding to linear systems that actually
have solutions are said to be (row) equivalent if they have the same solutions. To denote this, we write:
\left(\begin{array}{cc|c} 1 & 1 & 27 \\ 2 & -1 & 0 \end{array}\right) \sim \left(\begin{array}{cc|c} 1 & 0 & 9 \\ 0 & 1 & 18 \end{array}\right)
The symbol ∼ is read “is equivalent to”.
A small excursion into the philosophy of mathematical notation: Suppose
I have a large pile of equivalent fractions, such as 2/4, 27/54, 100/200, and so on. Most people will agree that their favorite way to write the number represented by all these different fractions is 1/2, in which the numerator and denominator are relatively prime. We usually call this a reduced fraction. This is an example of a canonical form, which is an extremely impressive way of saying “favorite way of writing it down”. There’s a theorem telling us that every rational number can be specified by a unique fraction whose numerator and denominator are relatively prime. To say that again, but slower, every rational number has a reduced fraction, and furthermore, that reduced fraction is unique.
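The same idea can be seen on a computer: Python’s standard fractions module always stores a rational number in this canonical, reduced form. (This snippet is our own aside, not part of the notes.)

```python
from fractions import Fraction

# Several equivalent fractions...
equivalent = [Fraction(2, 4), Fraction(27, 54), Fraction(100, 200)]

# ...are all stored in the unique reduced form 1/2.
print(equivalent)   # [Fraction(1, 2), Fraction(1, 2), Fraction(1, 2)]
```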
A 3×3 example

2.2 Reduced Row Echelon Form
Since there are many different augmented matrices that have the same set of solutions, we should find a canonical form for writing our augmented matrices. This canonical form is called Reduced Row Echelon Form, or RREF for short. RREF looks like this in general:
\left(\begin{array}{ccccccc|c}
1 & * & 0 & * & 0 & \cdots & 0 & b^1 \\
0 & 0 & 1 & * & 0 & \cdots & 0 & b^2 \\
0 & 0 & 0 & 0 & 1 & \cdots & 0 & b^3 \\
\vdots & & & & & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & 1 & b^k \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & & & & & & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0
\end{array}\right)
The first non-zero entry in each row is called the pivot. The asterisks denote arbitrary content which could be several columns long. The following properties describe the RREF.
1. In RREF, the pivot of any row is always 1.
2. The pivot of any given row is always to the right of the pivot of the row above it.
3. The pivot is the only non-zero entry in its column.
Example
\begin{pmatrix} 1 & 0 & 7 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}
Here is a NON-Example, which breaks all three of the rules:
\begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}
The RREF is a very useful way to write linear systems: it makes it very easy to write down the solutions to the system.
Example
\left(\begin{array}{cccc|c} 1 & 0 & 7 & 0 & 4 \\ 0 & 1 & 3 & 0 & 1 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)
When we write this augmented matrix as a system of linear equations, we get the following:
x + 7z = 4
y + 3z = 1
w = 2
Solving from the bottom variables up, we see that w = 2 immediately. z is not a pivot, so it is still undetermined. Set z = λ. Then y = 1−3λ and x = 4−7λ. More concisely:
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \\ 0 \\ 2 \end{pmatrix} + \lambda \begin{pmatrix} -7 \\ -3 \\ 1 \\ 0 \end{pmatrix}
So we can read off the solution set directly from the RREF. (Notice that we use the word “set” because there is not just one solution, but one for every choice of λ.)
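To convince yourself numerically, the following Python sketch (our own addition, not part of the notes) substitutes several values of the free parameter λ into the solution above and checks the equations x + 7z = 4, y + 3z = 1 and w = 2.

```python
def solution(lam):
    """Return (x, y, z, w) for a given value of the free parameter lambda."""
    return (4 - 7 * lam, 1 - 3 * lam, lam, 2)

for lam in [-2, 0, 1, 5]:
    x, y, z, w = solution(lam)
    assert x + 7 * z == 4 and y + 3 * z == 1 and w == 2   # every lambda works

print("all choices of lambda solve the system")
```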
Reading homework: problem 2.2
You need to become very adept at reading off solutions of linear systems from the RREF of their augmented matrix. The general method is to work from the bottom up and set any non-pivot variables to unknowns. Here is another example.
Example
\left(\begin{array}{ccccc|c} 1 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 & 0 & 2 \\ 0 & 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right).
Here we were not told the names of the variables, so let’s just call them x_1, x_2, x_3, x_4, x_5. (There are always as many of these as there are columns in the matrix before the vertical line; the number of rows, on the other hand, is the number of linear equations.)
To begin with we immediately notice that there are no pivots in the second and fourth columns so x2 and x4 are undetermined and we set them to
x2 =λ1, x4 =λ2.
(Note that you get to be creative here, we could have used λ and μ or any other names we like for a pair of unknowns.)
Working from the bottom up we see that the last row just says 0 = 0, a well known fact! Note that a row of zeros save for a non-zero entry after the vertical line would be mathematically inconsistent and indicates that the system has NO solutions at all.
Next we see from the second last row that x5 = 3. The second row says x3 =
2−2×4 = 2−2λ2. The top row then gives x1 = 1−x2 −x4 = 1−λ1 −λ2. Again
we can write this solution as a vector
\begin{pmatrix} 1 \\ 0 \\ 2 \\ 0 \\ 3 \end{pmatrix} + \lambda_1 \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + \lambda_2 \begin{pmatrix} -1 \\ 0 \\ -2 \\ 1 \\ 0 \end{pmatrix}.
Observe that, since no variables were given at the beginning, we do not really need to state them in our solution. As a challenge, look carefully at this solution and make sure you can see how every part of it comes from the original augmented matrix without ever having to reintroduce variables and equations.
Perhaps unsurprisingly in light of the previous discussions of RREF, we have a theorem:
Theorem 2.1. Every augmented matrix is row-equivalent to a unique aug- mented matrix in reduced row echelon form.
Next lecture, we will prove it.
References
Hefferon, Chapter One, Section 1
Beezer, Chapter SLE, Section RREF
Wikipedia, Row Echelon Form
Review Problems
1. State whether the following augmented matrices are in RREF and com- pute their solution sets.
\left(\begin{array}{ccccc|c} 1 & 0 & 0 & 0 & 3 & 1 \\ 0 & 1 & 0 & 0 & 1 & 2 \\ 0 & 0 & 1 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 & 2 & 0 \end{array}\right),
\left(\begin{array}{cccccc|c} 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right),
\left(\begin{array}{ccccccc|c} 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 & 0 & 2 & 0 & -1 \\ 0 & 0 & 0 & 0 & 1 & 3 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{array}\right).
2. Show that this pair of augmented matrices are row equivalent, assuming
ad − bc ≠ 0:
\left(\begin{array}{cc|c} a & b & e \\ c & d & f \end{array}\right) \sim \left(\begin{array}{cc|c} 1 & 0 & \frac{de - bf}{ad - bc} \\ 0 & 1 & \frac{af - ce}{ad - bc} \end{array}\right)
3. Consider the augmented matrix:
\left(\begin{array}{cc|c} 2 & -1 & 3 \\ -6 & 3 & 1 \end{array}\right)
Give a geometric reason why the associated system of equations has no solution. (Hint, plot the three vectors given by the columns of this augmented matrix in the plane.) Given a general augmented matrix
\left(\begin{array}{cc|c} a & b & e \\ c & d & f \end{array}\right),
can you find a condition on the numbers a,b,c and d that create the geometric condition you found?
4. List as many operations on augmented matrices that preserve row equivalence as you can. Explain your answers. Give examples of oper- ations that break row equivalence.
5. Row equivalence of matrices is an example of an equivalence relation. Recall that a relation ∼ on a set of objects U is an equivalence relation if the following three properties are satisfied:
Reflexive: For any x ∈ U, we have x ∼ x.
Symmetric: For any x,y ∈ U, if x ∼ y then y ∼ x.
Transitive: For any x, y and z ∈ U, if x ∼ y and y ∼ z then x ∼ z.
(For a fuller discussion of equivalence relations, see Homework 0, Problem 4.)
Show that row equivalence of augmented matrices is an equivalence relation.
Hints for Questions 4 and 5
3 Elementary Row Operations
Our goal is to begin with an arbitrary matrix and apply operations that respect row equivalence until we have a matrix in Reduced Row Echelon Form (RREF). The three elementary row operations are:
(Row Swap) Exchange any two rows.
(Scalar Multiplication) Multiply any row by a non-zero constant. (Row Sum) Add a multiple of one row to another row.
Example
Why do these preserve the linear system in question? Swapping rows is just changing the order of the equations being considered, which certainly should not alter the solutions. Scalar multiplication is just multiplying the equation by the same number on both sides, which does not change the solution(s) of the equation. Likewise, if two equations share a common solution, adding one to the other preserves the solution. Therefore we can define augmented matrices to be row equivalent if they are related by a sequence of elementary row operations. This definition can also be applied to augmented matrices corresponding to linear systems with no solutions at all!
There is a very simple process for row-reducing a matrix, working col- umn by column. This process is called Gauss–Jordan elimination or simply Gaussian elimination.
1. If all entries in a given column are zero, then the associated variable is undetermined; make a note of the undetermined variable(s) and then ignore all such columns.
2. Swap rows so that the first entry in the first column is non-zero.
3. Multiply the first row by λ so that this pivot entry is 1.
4. Add multiples of the first row to each other row so that the first entry of every other row is zero.
5. Before moving on to step 6, add multiples of the first row to any rows above that you have ignored to ensure there are zeros in the column above the current pivot entry.
6. Now ignore the first row and first column and repeat steps 2-5 until the matrix is in RREF.
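These six steps can be turned into a short program. The Python sketch below is our own illustration (it is not part of the original notes): it works on a list of rows of numbers, uses a small tolerance instead of exact zero tests, and simply skips columns that contain no usable pivot.

```python
def rref(matrix):
    """Bring a matrix (a list of rows of numbers) into reduced row echelon form."""
    A = [list(map(float, row)) for row in matrix]   # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # Steps 1-2: find a row at or below pivot_row with a non-zero entry here.
        pivot = next((r for r in range(pivot_row, rows) if abs(A[r][col]) > 1e-12), None)
        if pivot is None:
            continue                                   # no pivot in this column
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]            # row swap
        scale = A[pivot_row][col]
        A[pivot_row] = [entry / scale for entry in A[pivot_row]]   # step 3: pivot -> 1
        for r in range(rows):                           # steps 4-5: clear the column
            if r != pivot_row and abs(A[r][col]) > 1e-12:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1                                  # step 6: move to the next row
        if pivot_row == rows:
            break
    return A

# The augmented matrix of the example worked out below:
for row in rref([[0, 0, 3, 9], [1, 5, -2, 2], [1/3, 2, 0, 3]]):
    print(row)   # approximately [1, 0, 0, 3], [0, 1, 0, 1], [0, 0, 1, 3]
```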
Example
Reading homework: problem 3.1
3×3 = 9 x1 +5×2 −2×3 = 2 31 x1 +2×2 = 3
First we write the system as an augmented matrix:
\left(\begin{array}{ccc|c} 0 & 0 & 3 & 9 \\ 1 & 5 & -2 & 2 \\ \frac{1}{3} & 2 & 0 & 3 \end{array}\right)
\overset{R_1 \leftrightarrow R_3}{\sim}
\left(\begin{array}{ccc|c} \frac{1}{3} & 2 & 0 & 3 \\ 1 & 5 & -2 & 2 \\ 0 & 0 & 3 & 9 \end{array}\right)
\overset{3R_1}{\sim}
\left(\begin{array}{ccc|c} 1 & 6 & 0 & 9 \\ 1 & 5 & -2 & 2 \\ 0 & 0 & 3 & 9 \end{array}\right)

\overset{R_2 = R_2 - R_1}{\sim}
\left(\begin{array}{ccc|c} 1 & 6 & 0 & 9 \\ 0 & -1 & -2 & -7 \\ 0 & 0 & 3 & 9 \end{array}\right)
\overset{-R_2}{\sim}
\left(\begin{array}{ccc|c} 1 & 6 & 0 & 9 \\ 0 & 1 & 2 & 7 \\ 0 & 0 & 3 & 9 \end{array}\right)
\overset{R_1 = R_1 - 6R_2}{\sim}
\left(\begin{array}{ccc|c} 1 & 0 & -12 & -33 \\ 0 & 1 & 2 & 7 \\ 0 & 0 & 3 & 9 \end{array}\right)

\overset{\frac{1}{3}R_3}{\sim}
\left(\begin{array}{ccc|c} 1 & 0 & -12 & -33 \\ 0 & 1 & 2 & 7 \\ 0 & 0 & 1 & 3 \end{array}\right)
\overset{R_1 = R_1 + 12R_3}{\sim}
\left(\begin{array}{ccc|c} 1 & 0 & 0 & 3 \\ 0 & 1 & 2 & 7 \\ 0 & 0 & 1 & 3 \end{array}\right)
\overset{R_2 = R_2 - 2R_3}{\sim}
\left(\begin{array}{ccc|c} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 3 \end{array}\right)
Now we’re in RREF and can see that the solution to the system is given by x1 = 3, x2 = 1, and x3 = 3; it happens to be a unique solution. Notice that we kept track of the steps we were taking; this is important for checking your work!
Example
\left(\begin{array}{cccc|c} 1 & 0 & -1 & 2 & -1 \\ 1 & 1 & 1 & -1 & 2 \\ 0 & -1 & -2 & 3 & -3 \\ 5 & 2 & -1 & 4 & 1 \end{array}\right)
\overset{R_2 - R_1;\ R_4 - 5R_1}{\sim}
\left(\begin{array}{cccc|c} 1 & 0 & -1 & 2 & -1 \\ 0 & 1 & 2 & -3 & 3 \\ 0 & -1 & -2 & 3 & -3 \\ 0 & 2 & 4 & -6 & 6 \end{array}\right)
\overset{R_3 + R_2;\ R_4 - 2R_2}{\sim}
\left(\begin{array}{cccc|c} 1 & 0 & -1 & 2 & -1 \\ 0 & 1 & 2 & -3 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)
Here the variables x3 and x4 are undetermined; the solution is not unique. Set x3 = λ and x4 = μ where λ and μ are arbitrary real numbers. Then we can write x1 and x2 in terms of λ and μ as follows:
x1 = λ−2μ−1 x2 = −2λ+3μ+3
We can write the solution set with vectors like so:
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} -1 \\ 3 \\ 0 \\ 0 \end{pmatrix} + \lambda \begin{pmatrix} 1 \\ -2 \\ 1 \\ 0 \end{pmatrix} + \mu \begin{pmatrix} -2 \\ 3 \\ 0 \\ 1 \end{pmatrix}
This is (almost) our preferred form for writing the set of solutions for a linear system
with many solutions.
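Here is a quick numerical check of that parametric solution (a numpy sketch of our own; it treats the last column of the augmented matrix above as the right-hand side b):

```python
import numpy as np

A = np.array([[1, 0, -1, 2],
              [1, 1, 1, -1],
              [0, -1, -2, 3],
              [5, 2, -1, 4]], dtype=float)
b = np.array([-1, 2, -3, 1], dtype=float)

x0 = np.array([-1, 3, 0, 0], dtype=float)   # constant part of the solution
y1 = np.array([1, -2, 1, 0], dtype=float)   # multiplies lambda
y2 = np.array([-2, 3, 0, 1], dtype=float)   # multiplies mu

for lam, mu in [(0, 0), (1, -1), (2.5, 4)]:
    assert np.allclose(A @ (x0 + lam * y1 + mu * y2), b)   # every choice solves Ax = b

print("parametric solution verified")
```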
Worked examples of Gaussian elimination
Uniqueness of Gauss-Jordan Elimination
Theorem 3.1. Gauss-Jordan Elimination produces a unique augmented ma- trix in RREF.
Proof. Suppose Alice and Bob compute the RREF for a linear system but get different results, A and B. Working from the left, discard all columns except for the pivots and the first column in which A and B differ. By Review Problem 1b, removing columns does not affect row equivalence. Call the new, smaller, matrices Aˆ and Bˆ. The new matrices should look like this:
\hat{A} = \begin{pmatrix} I_N & a \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad \hat{B} = \begin{pmatrix} I_N & b \\ 0 & 0 \end{pmatrix},
where IN is an N × N identity matrix and a and b are vectors.
Now if Aˆ and Bˆ have the same solutions, then we must have a = b. But then Aˆ and Bˆ do not differ in that column after all, which is a contradiction. Therefore A = B.
Explanation of the proof
References
Hefferon, Chapter One, Section 1.1 and 1.2
Beezer, Chapter SLE, Section RREF
Wikipedia, Row Echelon Form
Wikipedia, Elementary Matrix Operations
Review Problems
1. (Row Equivalence)
(a) Solve the following linear system using Gauss-Jordan elimination:
2×1+5×2 −8×3+2×4+2×5=0 6×1 + 2×2 −10×3 + 6×4 + 8×5 = 6 3×1+6×2 +2×3+3×4+5×5=6 3×1+1×2 −5×3+3×4+4×5=3 6×1+7×2 −3×3+6×4+9×5=9
Be sure to set your work out carefully with equivalence signs ∼ between each step, labeled by the row operations you performed.
(b) Check that the following two matrices are row-equivalent:
\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 9 & 6 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & -1 & 8 & 20 \\ 4 & 18 & 12 & 0 \end{pmatrix}
Now remove the third column from each matrix, and show that the resulting two matrices (shown below) are row-equivalent:
\begin{pmatrix} 1 & 4 & 10 \\ 2 & 9 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & -1 & 20 \\ 4 & 18 & 0 \end{pmatrix}
Now remove the fourth column from each of the original two ma- trices, and show that the resulting two matrices, viewed as aug- mented matrices (shown below) are row-equivalent:
\left(\begin{array}{cc|c} 1 & 4 & 7 \\ 2 & 9 & 6 \end{array}\right) \quad\text{and}\quad \left(\begin{array}{cc|c} 0 & -1 & 8 \\ 4 & 18 & 12 \end{array}\right)
Explain why row-equivalence is never affected by removing columns.
(c) Check that the matrix
\left(\begin{array}{cc|c} 1 & 4 & 10 \\ 3 & 13 & 9 \\ 4 & 17 & 20 \end{array}\right)
has no solutions. If you
remove one of the rows of this matrix, does the new matrix have
any solutions? In general, can row equivalence be affected by removing rows? Explain why or why not.
2. (Gaussian Elimination) Another method for solving linear systems is to use row operations to bring the augmented matrix to row echelon form. In row echelon form, the pivots are not necessarily set to one, and we only require that all entries left of the pivots are zero, not necessarily entries above a pivot. Provide a counterexample to show that row echelon form is not unique.
Once a system is in row echelon form, it can be solved by “back substi- tution.” Write the following row echelon matrix as a system of equa- tions, then solve the system using back-substitution.
\left(\begin{array}{ccc|c} 2 & 3 & 1 & 6 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 3 & 3 \end{array}\right)
3. Explain why the linear system has no solutions:
\left(\begin{array}{ccc|c} 1 & 0 & 3 & 1 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 0 & 6 \end{array}\right)
For which values of k does the system below have a solution?
x − 3y = 6
x + 3z = −3
2x + ky + (3 − k)z = 1
Hint for question 3
4 Solution Sets for Systems of Linear Equations
For a system of equations with r equations and k unknowns, one can have a number of different outcomes. For example, consider the case of r equations in three variables. Each of these equations is the equation of a plane in three- dimensional space. To find solutions to the system of equations, we look for the common intersection of the planes (if an intersection exists). Here we have five different possibilities:
1. No solutions. Some of the equations are contradictory, so no solutions exist.
2. Unique Solution. The planes have a unique point of intersection.
3. Line. The planes intersect in a common line; any point on that line
then gives a solution to the system of equations.
4. Plane. Perhaps you only had one equation to begin with, or else all of the equations coincide geometrically. In this case, you have a plane of solutions, with two free parameters.
Planes
5. All of R3. If you start with no information, then any point in R3 is a solution. There are three free parameters.
In general, for systems of equations with k unknowns, there are k + 2 possible outcomes, corresponding to the number of free parameters in the solutions set, plus the possibility of no solutions. These types of “solution sets” are hard to visualize, but luckily “hyperplanes” behave like planes in R3 in many ways.
Pictures and Explanation
Reading homework: problem 4.1
4.1 Non-Leading Variables
Variables that are not a pivot in the reduced row echelon form of a linear system are free. We set them equal to arbitrary parameters μ1, μ2, . . .
Example
\left(\begin{array}{cccc|c} 1 & 0 & 1 & -1 & 1 \\ 0 & 1 & -1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)
Here, x1 and x2 are the pivot variables and x3 and x4 are non-leading variables, and thus free. The solutions are then of the form x3 = μ1, x4 = μ2, x2 = 1+μ1 −μ2, x1 = 1 − μ1 + μ2.
The preferred way to write a solution set is with set notation. Let S be the set of solutions to the system. Then:
S = \left\{ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} + \mu_1 \begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix} + \mu_2 \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right\}

Example
We have already seen how to write a linear system of two equations in two unknowns as a matrix multiplying a vector. We can apply exactly the same idea for the above system of three equations in four unknowns by calling
M = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \quad\text{and}\quad V = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}.
Then if we take for the product of the matrix M with the vector X of unknowns
MX = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} x_1 + x_3 - x_4 \\ x_2 - x_3 + x_4 \\ 0 \end{pmatrix}
our system becomes simply
MX = V.
Stare carefully at our answer for the product MX above. First you should notice that each of the three rows corresponds to the left hand side of one of the equations in the system. Also observe that each entry was obtained by matching the entries in the corresponding row of M with the column entries of X. For example, using the second row of M we obtained the second entry of MX
\begin{pmatrix} 0 & 1 & -1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \longrightarrow x_2 - x_3 + x_4.
In Lecture 8 we will study matrix multiplication in detail, but you can al- ready try to discover the main rules for yourself by working through Review Question 3 on multiplying matrices by vectors.
Given two vectors we can add them term-by-term:
\begin{pmatrix} a^1 \\ a^2 \\ a^3 \\ \vdots \\ a^r \end{pmatrix} + \begin{pmatrix} b^1 \\ b^2 \\ b^3 \\ \vdots \\ b^r \end{pmatrix} = \begin{pmatrix} a^1 + b^1 \\ a^2 + b^2 \\ a^3 + b^3 \\ \vdots \\ a^r + b^r \end{pmatrix}
We can also multiply a vector by a scalar, like so:
\lambda \begin{pmatrix} a^1 \\ a^2 \\ a^3 \\ \vdots \\ a^r \end{pmatrix} = \begin{pmatrix} \lambda a^1 \\ \lambda a^2 \\ \lambda a^3 \\ \vdots \\ \lambda a^r \end{pmatrix}
Then yet another way to write the solution set for the example is:
X = X_0 + \mu_1 Y_1 + \mu_2 Y_2
where
X_0 = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad Y_1 = \begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad Y_2 = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix}.
Definition Let X and Y be vectors and α and β be scalars. A function f is linear if
f(αX + βY ) = αf(X) + βf(Y )
This is called the linearity property for matrix multiplication.
The notion of linearity is a core concept in this course. Make sure you understand what it means and how to use it in computations!
Example Consider our example system above with
M = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \quad\text{and}\quad Y = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix},
and take for the function of vectors
f(X) = MX .
Now let us check the linearity property for f. The property needs to hold for any scalars α and β, so for simplicity let us concentrate first on the case α = β = 1. This means that we need to compare the following two calculations:
1. First add X +Y, then compute f(X +Y).
2. First compute f(X) and f(Y ), then compute the sum f(X) + f(Y ). The second computation is slightly easier:
f(X) = MX = \begin{pmatrix} x_1 + x_3 - x_4 \\ x_2 - x_3 + x_4 \\ 0 \end{pmatrix} \quad\text{and}\quad f(Y) = MY = \begin{pmatrix} y_1 + y_3 - y_4 \\ y_2 - y_3 + y_4 \\ 0 \end{pmatrix},
(using our result above). Adding these gives
f(X) + f(Y) = \begin{pmatrix} x_1 + x_3 - x_4 + y_1 + y_3 - y_4 \\ x_2 - x_3 + x_4 + y_2 - y_3 + y_4 \\ 0 \end{pmatrix}.
Next we perform the first computation beginning with:
X + Y = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \\ x_4 + y_4 \end{pmatrix},
from which we calculate
f(X + Y) = \begin{pmatrix} x_1 + y_1 + x_3 + y_3 - (x_4 + y_4) \\ x_2 + y_2 - (x_3 + y_3) + x_4 + y_4 \\ 0 \end{pmatrix}.
Distributing the minus signs and remembering that the order of adding numbers like x1, x2, . . . does not matter, we see that the two computations give exactly the same answer.
Of course, you should complain that we took a special choice of α and β. Actually, to take care of this we only need to check that f(αX) = αf(X). It is your job to explain this in Review Question 1.
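For readers who like to check such identities numerically, here is a short numpy sketch (our own addition) verifying f(αX + βY) = αf(X) + βf(Y) for the matrix M above with randomly chosen vectors and scalars.

```python
import numpy as np

M = np.array([[1, 0, 1, -1],
              [0, 1, -1, 1],
              [0, 0, 0, 0]], dtype=float)

def f(X):
    return M @ X                      # f(X) = MX

rng = np.random.default_rng(0)
for _ in range(5):
    X, Y = rng.random(4), rng.random(4)
    alpha, beta = rng.random(2)
    assert np.allclose(f(alpha * X + beta * Y), alpha * f(X) + beta * f(Y))

print("linearity verified on random inputs")
```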
Later we will show that matrix multiplication is always linear. Then we will know that:
M(αX + βY ) = αMX + βMY
Then the two equations MX = V and X = X0 + μ1Y1 + μ2Y2 together say
that:
MX0 + μ1MY1 + μ2MY2 = V for any μ1,μ2 ∈ R. Choosing μ1 = μ2 = 0, we obtain
MX0 = V .
Here, X0 is an example of what is called a particular solution to the system. Given the particular solution to the system, we can then deduce that μ1MY1 + μ2MY2 = 0. Setting μ1 = 1, μ2 = 0, and recalling the particular
solution M X0 = V , we obtain
MY_1 = 0. Likewise, setting μ_1 = 0, μ_2 = 1, we obtain
MY2 =0.
Here Y1 and Y2 are examples of what are called homogeneous solutions to the system. They do not solve the original equation M X = V , but instead its associated homogeneous system of equations MY = 0.
Example Consider the linear system with the augmented matrix we’ve been working with.
x + z − w = 1
y − z + w = 1
Recall that the system has the following solution set:
S = \left\{ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} + \mu_1 \begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix} + \mu_2 \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right\}

Then MX_0 = V says that \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} solves the original system of equations, which is certainly true, but this is not the only solution.

MY_1 = 0 says that \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix} solves the homogeneous system.

MY_2 = 0 says that \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix} solves the homogeneous system.
Notice how adding any multiple of a homogeneous solution to the particular solution
yields another particular solution.
Definition Let M be a matrix and V a vector. Given the linear system MX = V, we call X_0 a particular solution if MX_0 = V. We call Y a homogeneous solution if MY = 0. The linear system
MX = 0
is called the (associated) homogeneous system.
If X_0 is a particular solution, then the general solution to the system is¹:
S = {X_0 + Y : MY = 0}
¹The notation S = {X_0 + Y : MY = 0} is read, “S is the set of all X_0 + Y such that MY = 0,” and means exactly that. Sometimes a pipe | is used instead of a colon.
In other words, the general solution = particular + homogeneous.
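The structure “general = particular + homogeneous” is easy to check by machine. The following numpy sketch (our own addition) verifies that X0 solves MX = V, that Y1 and Y2 solve the homogeneous system, and that adding any combination of them to X0 again solves MX = V.

```python
import numpy as np

M = np.array([[1, 0, 1, -1],
              [0, 1, -1, 1],
              [0, 0, 0, 0]], dtype=float)
V = np.array([1, 1, 0], dtype=float)

X0 = np.array([1, 1, 0, 0], dtype=float)    # particular solution
Y1 = np.array([-1, 1, 1, 0], dtype=float)   # homogeneous solution
Y2 = np.array([1, -1, 0, 1], dtype=float)   # homogeneous solution

assert np.allclose(M @ X0, V)                              # M X0 = V
assert np.allclose(M @ Y1, 0) and np.allclose(M @ Y2, 0)   # M Y1 = M Y2 = 0
assert np.allclose(M @ (X0 + 2 * Y1 - 3 * Y2), V)          # still solves M X = V

print("particular + homogeneous structure verified")
```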
Reading homework: problem 4.2
References
Hefferon, Chapter One, Section I.2
Beezer, Chapter SLE, Section TSS
Wikipedia, Systems of Linear Equations
Review Problems
1. Let f(X) = MX where
M = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.
Suppose that α is any number. Compute the following four quantities: αX , f(X) , αf(X) and f(αX) .
Check your work by verifying that
αf(X) = f(αX) .
Now explain why the result checked in the Lecture, namely f (X + Y ) = f (X ) + f (Y ) ,
and your result f(αX) = αf(X) together imply f(αX + βY ) = αf(X) + βf(Y ) .
2. Write down examples of augmented matrices corresponding to each of the five types of solution sets for systems of equations with three unknowns.
3. Let
M = \begin{pmatrix} a^1_1 & a^1_2 & \cdots & a^1_k \\ a^2_1 & a^2_2 & \cdots & a^2_k \\ \vdots & \vdots & & \vdots \\ a^r_1 & a^r_2 & \cdots & a^r_k \end{pmatrix}, \quad X = \begin{pmatrix} x^1 \\ x^2 \\ \vdots \\ x^k \end{pmatrix}.
Propose a rule for MX so that MX = 0 is equivalent to the linear
system:
a^1_1 x^1 + a^1_2 x^2 + \cdots + a^1_k x^k = 0
a^2_1 x^1 + a^2_2 x^2 + \cdots + a^2_k x^k = 0
\vdots
a^r_1 x^1 + a^r_2 x^2 + \cdots + a^r_k x^k = 0
Show that your rule for multiplying a matrix by a vector obeys the linearity property.
Note that in this problem, x^2 does not denote the square of x. Instead x^1, x^2, x^3, etc. denote different variables. Although confusing at first, this notation was invented by Albert Einstein who noticed that quantities like a^2_1 x^1 + a^2_2 x^2 + \cdots + a^2_k x^k could be written in summation notation as \sum_{j=1}^{k} a^2_j x^j. Here j is called a summation index. Einstein observed that you could even drop the summation sign and simply write a^2_j x^j.
Problem 3 hint
4. Use the rule you developed in problem 3 to compute the following products
\begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 14 \\ 14 \\ 21 \\ 35 \\ 62 \end{pmatrix}
\begin{pmatrix} 1 & 42 & 97 & 2 & -23 & 46 \\ 0 & 1 & 3 & 1 & 0 & 33 \\ 11 & \pi & 1 & 0 & 46 & 29 \\ -98 & 12 & 0 & 33 & 99 & 98 \\ \log 2 & 0 & \sqrt{2} & 0 & e & 23 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 7 & 8 & 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 & 17 & 18 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
Now that you are good at multiplying a matrix with a column vector, try your hand at a product of two matrices
\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 7 & 8 & 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 & 17 & 18 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
Hint, to do this problem view the matrix on the right as three column vectors next to one another.
5. The standard basis vector ei is a column vector with a one in the ith row, and zeroes everywhere else. Using the rule for multiplying a matrix times a vector in problem 3, find a simple rule for multiplying Mei, where M is the general matrix defined there.
5 Vectors in Space, n-Vectors
In vector calculus classes, you encountered three-dimensional vectors. Now
we will develop the notion of n-vectors and learn some of their properties. Overview
We begin by looking at the space Rn, which we can think of as the space of points with n coordinates. We then specify an origin O, a favorite point in Rn. Now given any other point P, we can draw a vector v from O to P. Just as in R3, a vector has a magnitude and a direction.
If O has coordinates (o^1, . . . , o^n) and P has coordinates (p^1, . . . , p^n), then the components of the vector v are
\begin{pmatrix} p^1 - o^1 \\ p^2 - o^2 \\ \vdots \\ p^n - o^n \end{pmatrix}.
This construction allows us
to put the origin anywhere that seems most convenient in Rn, not just at the point with zero coordinates:
Remark A quick note on points versus vectors. We might sometimes interpret a point and a vector as the same object, but they are slightly different concepts and should be treated as such. For more details, see Appendix D
Do not be confused by our use of a superscript to label components of a vector. Here v2 denotes the second component of a vector v, rather than a number v squared!
Most importantly, we can add vectors and multiply vectors by a scalar:

Definition Given two vectors a and b whose components are given by
a = \begin{pmatrix} a^1 \\ \vdots \\ a^n \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} b^1 \\ \vdots \\ b^n \end{pmatrix}
their sum is
a + b = \begin{pmatrix} a^1 + b^1 \\ \vdots \\ a^n + b^n \end{pmatrix}.
Given a scalar λ, the scalar multiple is
\lambda a = \begin{pmatrix} \lambda a^1 \\ \vdots \\ \lambda a^n \end{pmatrix}.

Example Let
a = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 4 \\ 3 \\ 2 \\ 1 \end{pmatrix}.
Then, for example,
a + b = \begin{pmatrix} 5 \\ 5 \\ 5 \\ 5 \end{pmatrix} \quad\text{and}\quad 3a - 2b = \begin{pmatrix} -5 \\ 0 \\ 5 \\ 10 \end{pmatrix}.
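Componentwise addition and scalar multiplication are exactly what numpy arrays do, so the example can be checked in a few lines (our own aside, not part of the notes):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([4, 3, 2, 1])

print(a + b)           # [5 5 5 5]
print(3 * a - 2 * b)   # [-5  0  5 10]
```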
Notice that these are the same rules we saw in Lecture 4! In Lectures 1-4, we thought of a vector as being a list of numbers which captured information about a linear system. Now we are thinking of a vector as a magnitude and a direction in Rn, and luckily the same rules apply.
A special vector is the zero vector connecting the origin to itself. All of its components are zero. Notice that with respect to the usual notions of Euclidean geometry, it is the only vector with zero magnitude, and the only one which points in no particular direction. Thus, any single vector
determines a line, except the zero-vector. Any scalar multiple of a non-zero vector lies in the line determined by that vector.
The line determined by a non-zero vector v through a point P can be written as {P + tv | t ∈ R}. For example,
\left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \;\middle|\; t \in \mathbb{R} \right\}
describes a line in 4-dimensional space parallel to the x-axis.
Given two non-zero vectors u, v, they will usually determine a plane,
unless both vectors are in the same line. In this case, one of the vectors can be realized as a scalar multiple of the other. The sum of u and v corresponds to laying the two vectors head-to-tail and drawing the connecting vector. If u and v determine a plane, then their sum lies in the plane determined by u and v.
The plane determined by two vectors u and v can be written as {P + su + tv|s, t ∈ R} .
Example
\left\{ \begin{pmatrix} 3 \\ 1 \\ 4 \\ 1 \\ 5 \\ 9 \end{pmatrix} + s \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \;\middle|\; s, t \in \mathbb{R} \right\}
describes a plane in 6-dimensional space parallel to the xy-plane.

Parametric Notation
We can generalize the notion of a plane:
Definition A set of k vectors v1,…,vk in Rn with k ≤ n determines a k-dimensional hyperplane, unless any of the vectors vi lives in the same hy- perplane determined by the other vectors. If the vectors do determine a k-dimensional hyperplane, then any point in the hyperplane can be written as:
\left\{ P + \sum_{i=1}^{k} \lambda_i v_i \;\middle|\; \lambda_i \in \mathbb{R} \right\}
When the dimension k is not specified, one usually assumes that k = n − 1
for a hyperplane inside Rn.
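In a program, a parametric description like this is simply a function of the parameters. The sketch below (our own, reusing the 6-dimensional plane from the example above) produces points of the form P + s v1 + t v2.

```python
import numpy as np

P = np.array([3, 1, 4, 1, 5, 9], dtype=float)    # a point in R^6
v1 = np.array([1, 0, 0, 0, 0, 0], dtype=float)   # first direction vector
v2 = np.array([0, 1, 0, 0, 0, 0], dtype=float)   # second direction vector

def point_on_plane(s, t):
    """Return the point P + s*v1 + t*v2 of the plane."""
    return P + s * v1 + t * v2

print(point_on_plane(0, 0))    # P itself
print(point_on_plane(2, -1))   # [5. 0. 4. 1. 5. 9.]
```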
5.1 Directions and Magnitudes
Consider the Euclidean length of a vector:
\|v\| = \sqrt{(v^1)^2 + (v^2)^2 + \cdots + (v^n)^2} = \sqrt{\sum_{i=1}^{n} (v^i)^2}\,.
Using the Law of Cosines, we can then figure out the angle between two vectors. Given two vectors v and u that span a plane in Rn, we can then connect the ends of v and u with the vector v − u.
Then the Law of Cosines states that:
∥v−u∥2 =∥u∥2 +∥v∥2 −2∥u∥∥v∥cosθ
Then isolate cos θ:
-2\|u\|\|v\|\cos\theta = \|v - u\|^2 - \|u\|^2 - \|v\|^2
= (v^1 - u^1)^2 + \cdots + (v^n - u^n)^2 - \big((u^1)^2 + \cdots + (u^n)^2\big) - \big((v^1)^2 + \cdots + (v^n)^2\big)
= -2u^1 v^1 - \cdots - 2u^n v^n.
Thus,
\|u\|\|v\|\cos\theta = u^1 v^1 + \cdots + u^n v^n.
Note that in the above discussion, we have assumed (correctly) that Eu- clidean lengths in Rn give the usual notion of lengths of vectors in the plane. This now motivates the definition of the dot product.
Definition The dot product of two vectors u = \begin{pmatrix} u^1 \\ \vdots \\ u^n \end{pmatrix} and v = \begin{pmatrix} v^1 \\ \vdots \\ v^n \end{pmatrix} is
u \cdot v = u^1 v^1 + \cdots + u^n v^n.
The length or norm or magnitude of a vector is
\|v\| = \sqrt{v \cdot v}.
The angle θ between two vectors is determined by the formula
u \cdot v = \|u\|\|v\|\cos\theta.

The dot product has some important properties:
1. The dot product is symmetric, so
u \cdot v = v \cdot u,
2. Distributive, so
u \cdot (v + w) = u \cdot v + u \cdot w,
3. Bilinear, which is to say, linear in both u and v. Thus
u \cdot (cv + dw) = c\, u \cdot v + d\, u \cdot w,
and
(cu + dw) \cdot v = c\, u \cdot v + d\, w \cdot v.
4. Positive Definite:
u \cdot u \geq 0,
and u \cdot u = 0 only when u itself is the 0-vector.
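Numerically, the dot product, lengths and angle can be computed directly from the definition; the numpy sketch below (our own addition) does exactly that for two sample vectors.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([4.0, 3.0, 2.0, 1.0])

dot = np.dot(u, v)                           # u . v = 20
norm_u = np.sqrt(np.dot(u, u))               # ||u|| = sqrt(30)
norm_v = np.sqrt(np.dot(v, v))               # ||v|| = sqrt(30)
theta = np.arccos(dot / (norm_u * norm_v))   # from  u . v = ||u|| ||v|| cos(theta)

print(dot, norm_u, norm_v, np.degrees(theta))
```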
There are, in fact, many different useful ways to define lengths of vectors. Notice in the definition above that we first defined the dot product, and then defined everything else in terms of the dot product. So if we change our idea of the dot product, we change our notion of length and angle as well. The dot product determines the Euclidean length and angle between two vectors.
Other definitions of length and angle arise from inner products, which have all of the properties listed above (except that in some contexts the positive definite requirement is relaxed). Instead of writing \cdot for other inner products, we usually write ⟨u, v⟩ to avoid confusion.
Reading homework: problem 5.1
Example Consider a four-dimensional space, with a special direction which we will call “time”. The Lorentzian inner product on R^4 is given by ⟨u, v⟩ = u^1 v^1 + u^2 v^2 + u^3 v^3 − u^4 v^4. This is of central importance in Einstein’s theory of special relativity, but note that it is not positive definite.
As a result, the “squared-length” of a vector with coordinates x, y, z and t is \|v\|^2 = x^2 + y^2 + z^2 − t^2. Notice that it is possible for \|v\|^2 \leq 0 for non-vanishing v!
Theorem 5.1 (Cauchy-Schwarz Inequality). For non-zero vectors u and v with an inner-product ⟨ , ⟩,
\frac{|\langle u, v \rangle|}{\|u\|\,\|v\|} \leq 1
Proof. The easiest proof would use the definition of the angle between two vectors and the fact that cos θ ≤ 1. However, strictly speaking we did not check our assumption that we could apply the Law of Cosines to the
Euclidean length in Rn. There is, however a simple algebraic proof. Let α be any real number and consider the following positive, quadratic polynomial in α
0 ≤ ⟨u + αv, u + αv⟩ = ⟨u, u⟩ + 2α⟨u, v⟩ + α2⟨v, v⟩ .
You should carefully check for yourself exactly which properties of an inner product were used to write down the above inequality!
Next, a tiny calculus computation shows that any quadratic aα^2 + 2bα + c takes its minimal value c − \frac{b^2}{a} when α = −\frac{b}{a}. Applying this to the above quadratic gives
0 \leq \langle u, u \rangle - \frac{\langle u, v \rangle^2}{\langle v, v \rangle}\,.
Now it is easy to rearrange this inequality to reach the Cauchy–Schwarz one above.
Theorem 5.2 (Triangle Inequality). Given vectors u and v, we have: ∥u + v∥ ≤ ∥u∥ + ∥v∥
Proof.
\|u + v\|^2 = (u + v) \cdot (u + v)
= u \cdot u + 2\, u \cdot v + v \cdot v
= \|u\|^2 + \|v\|^2 + 2\|u\|\|v\|\cos\theta
= (\|u\| + \|v\|)^2 + 2\|u\|\|v\|(\cos\theta - 1)
\leq (\|u\| + \|v\|)^2
Then the square of the left-hand side of the triangle inequality is ≤ the right-hand side, and both sides are positive, so the result is true.
The triangle inequality is also “self-evident” from examining a sketch of u, v and u + v.
Example Let
a = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 4 \\ 3 \\ 2 \\ 1 \end{pmatrix},
so that
a \cdot a = b \cdot b = 1 + 2^2 + 3^2 + 4^2 = 30
\;\Rightarrow\; \|a\| = \sqrt{30} = \|b\| \quad\text{and}\quad (\|a\| + \|b\|)^2 = (2\sqrt{30})^2 = 120.
Since
a + b = \begin{pmatrix} 5 \\ 5 \\ 5 \\ 5 \end{pmatrix},
we have
\|a + b\|^2 = 5^2 + 5^2 + 5^2 + 5^2 = 100 < 120 = (\|a\| + \|b\|)^2
as predicted by the triangle inequality.
Notice also that a \cdot b = 1 \cdot 4 + 2 \cdot 3 + 3 \cdot 2 + 4 \cdot 1 = 20 < 30 = \sqrt{30}\,\sqrt{30} = \|a\|\|b\|, in accordance with the Cauchy–Schwarz inequality.

Reading homework: problem 5.2

References

Hefferon: Chapter One.II
Beezer: Chapter V, Section VO, Subsection VEASM
Beezer: Chapter V, Section O, Subsections IP-N
Relevant Wikipedia Articles:
Dot Product
Inner Product Space
Minkowski Metric
Review Problems
1.
When he was young, Captain Conundrum mowed lawns on weekends to help pay his college tuition bills. He charged his customers according to the size of their lawns at a rate of 5¢ per square foot and meticulously kept a record of the areas of their lawns in an ordered list:
A = (200, 300, 50, 50, 100, 100, 200, 500, 1000, 100) .
He also listed the number of times he mowed each lawn in a given year,
for the year 1988 that ordered list was
f = (20,1,2,4,1,5,2,1,10,6).
(a) Pretend that A and f are vectors and compute A f. (b) What quantity does the dot product A f measure?
(c) How much did Captain Conundrum earn from mowing lawns in 1988? Write an expression for this amount in terms of the vectors A and f.
(d) Suppose Captain Conundrum charged different customers different rates. How could you modify the expression in part 1c to compute the Captain’s earnings?
2. (2) Find the angle between the diagonal of the unit square in R^2 and one of the coordinate axes.
(3) Find the angle between the diagonal of the unit cube in R^3 and one of the coordinate axes.
(n) Find the angle between the diagonal of the unit (hyper)-cube in R^n and one of the coordinate axes.
(∞) What is the limit as n → ∞ of the angle between the diagonal of the unit (hyper)-cube in R^n and one of the coordinate axes?
3. Consider the matrix M = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} and the vector X = \begin{pmatrix} x \\ y \end{pmatrix}.
(a) Sketch X and MX in R^2 for several values of X and θ.
(b) Compute \frac{\|MX\|}{\|X\|} for arbitrary values of X and θ.
(c) Explain your result for (b) and describe the action of M geomet- rically.
4. Suppose in R2 I measure the x direction in inches and the y direction in miles. Approximately what is the real-world angle between the vectors
\begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 \\ 1 \end{pmatrix}?
What is the angle between these two vectors according
to the dot-product? Give a definition for an inner product so that the angles produced by the inner product are the actual angles between vectors.
5. (Lorentzian Strangeness). For this problem, consider Rn with the Lorentzian inner product and metric defined above.
(a) Find a non-zero vector in two-dimensional Lorentzian space-time with zero length.
(b) Find and sketch the collection of all vectors in two-dimensional Lorentzian space-time with zero length.
(c) Find and sketch the collection of all vectors in three-dimensional Lorentzian space-time with zero length.
The Story of Your Life
6 Vector Spaces
Thus far we have thought of vectors as lists of numbers in Rn. As it turns out, the notion of a vector applies to a much more general class of structures than this. The main idea is to define vectors based on their most important properties. Once complete, our new definition of vectors will include vectors in Rn, but will also cover many other extremely useful notions of vectors. We do this in the hope of creating a mathematical structure applicable to a wide range of real-world problems.
The two key properties of vectors are that they can be added together and multiplied by scalars. So we make the following definition.
Definition A vector space (over R) is a set V with two operations + and · satisfying the following properties for all u, v ∈ V and c, d ∈ R:
(+i) (Additive Closure) u + v ∈ V. (Adding two vectors gives a vector.)
(+ii) (Additive Commutativity) u + v = v + u. (Order of addition doesn’t matter.)
(+iii) (Additive Associativity) (u + v) + w = u + (v + w). (Order of adding many vectors doesn’t matter.)
(+iv) (Zero) There is a special vector 0_V ∈ V such that u + 0_V = u for all u in V.
(+v) (Additive Inverse) For every u ∈ V there exists w ∈ V such that u + w = 0_V.
(·i) (Multiplicative Closure) c · v ∈ V. (Scalar times a vector is a vector.)
(·ii) (Distributivity) (c + d) · v = c · v + d · v. (Scalar multiplication distributes over addition of scalars.)
(·iii) (Distributivity) c · (u + v) = c · u + c · v. (Scalar multiplication distributes over addition of vectors.)
(·iv) (Associativity) (cd) · v = c · (d · v).
(·v) (Unity) 1 · v = v for all v ∈ V.
Examples of each rule
Remark Don’t confuse the scalar product · with the dot product. The scalar product is a function that takes a vector and a number and returns a vector. (In notation, this can be written · : R × V → V .) On the other hand, the dot product takes two vectors and returns a number. (In notation, the dot product is a map V × V → R.)
Once the properties of a vector space have been verified, we’ll just write scalar multiplication with juxtaposition, cv = c · v, to avoid cluttering the notation.
Remark It isn’t hard to devise strange rules for addition or scalar multiplication that break some or all of the rules listed above.
Example of a vector space
One can also find many interesting vector spaces, such as the following.
Example
V = {f | f : N → R}
Here the vector space is the set of functions that take in a natural number n and return a real number. The addition is just addition of functions: (f1+f2)(n) = f1(n)+f2(n). Scalar multiplication is just as simple: c · f (n) = cf (n).
We can think of these functions as infinite sequences: f(0) is the first term, f(1) is the second term, and so on. Then for example the function f(n) = n3 would look like this:
f = {0, 1, 8, 27, . . . , n³, . . .}. Thinking this way, V is the space of all infinite sequences.
Let’s check some axioms.
(+i) (Additive Closure) f1(n) + f2(n) is indeed a function N → R, since the sum of two real numbers is a real number.
(+iv) (Zero) We need to propose a zero vector. The constant zero function g(n) = 0 works because then f(n) + g(n) = f(n) + 0 = f(n).
The other axioms that should be checked come down to properties of the real numbers.
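If it helps to experiment, the sequences in this example can be modelled directly on a computer. The following Python sketch is ours, not part of the text; the helper names add, scale and zero are made up. It implements the point-wise operations above and spot-checks the zero-vector axiom at a few values of n.

def add(f1, f2):
    # (f1 + f2)(n) = f1(n) + f2(n): point-wise addition of sequences
    return lambda n: f1(n) + f2(n)

def scale(c, f):
    # (c . f)(n) = c * f(n): scalar multiplication of a sequence
    return lambda n: c * f(n)

def zero(n):
    # the constant zero sequence, our candidate for the zero vector
    return 0

f = lambda n: n ** 3              # the sequence 0, 1, 8, 27, ...
g = add(f, zero)                  # should be the same sequence as f
for n in range(5):
    assert g(n) == f(n)           # spot-check of f + 0 = f
print([add(f, scale(2, f))(n) for n in range(5)])   # 3n^3: [0, 3, 24, 81, 192]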
Reading homework: problem 6.1
Example Another very important example of a vector space is the space of all differentiable functions:
{ f | f : R → R, df/dx exists } .
The addition is point-wise
(f + g)(x) = f(x) + g(x) ,
as is scalar multiplication
c · f(x) = cf(x) .
From calculus, we know that the sum of any two differentiable functions is differentiable, since the derivative distributes over addition. A scalar multiple of a function is also differentiable, since the derivative commutes with scalar multiplication (d/dx (cf) = c df/dx). The zero function is just the function such that 0(x) = 0 for every x. The rest of the vector space properties are inherited from addition and scalar multiplication in R.
In fact, the set of functions with at least k derivatives is always a vector space, as is the space of functions with infinitely many derivatives.
Vector Spaces Over Other Fields Above, we defined vector spaces over the real numbers. One can actually define vector spaces over any field. A field is a collection of “numbers” satisfying a number of properties.
One other example of a field is the complex numbers,
C = { x + iy | i² = −1, x, y ∈ R } .
In quantum physics, vector spaces over C describe all possible states a system of particles can have.
For example,
V = { [λ; μ] | λ, μ ∈ C }
describes states of an electron, where [1; 0] describes spin “up” and [0; 1] describes spin “down”. Other states, like [i; −i], are permissible, since the base field is the complex numbers.
Complex numbers are extremely useful because of a special property that they enjoy: every polynomial over the complex numbers factors into a product of linear polynomials. For example, the polynomial x² + 1 doesn’t factor over the real numbers, but over the complex numbers it factors into (x + i)(x − i). This property ends up having very far-reaching consequences: often in mathematics problems that are very difficult when working over the real numbers become relatively simple when working over the complex numbers. One example of this phenomenon occurs when diagonalizing matrices, which we will learn about later in the course.
Another useful field is the rational numbers Q. This field is important in computer algebra: a real number given by an infinite string of numbers after the decimal point can’t be stored by a computer. So instead rational approximations are used. Since the rationals are a field, the mathematics of vector spaces still apply to this special case.
In this class, we will work mainly over the real numbers and the complex numbers, and occasionally work over Z2 = {0, 1} where 1 + 1 = 0. For more on fields in general, see Appendix E.3; however the full story of fields is typically covered in a class on abstract algebra or Galois theory.
References
Hefferon, Chapter One, Section I.1 Beezer, Chapter VS, Section VS Wikipedia:
Vector Space Field
Spin 1/2
Galois Theory
Review Problems
1. Check that V = { [x; y] | x, y ∈ R } = R2 with the usual addition and scalar multiplication is a vector space.
2. Check that the complex numbers C = {x+iy | x,y ∈ R} form a
vector space over C. Make sure you state carefully what your rules for
vector addition and scalar multiplication are. Also, explain what would happen if you used R as the base field (try comparing to problem 1).
3. (a) Consider the set of convergent sequences, with the same addition and scalar multiplication that we defined for the space of sequences:
V = { f | f : N → R, lim_{n→∞} f ∈ R } .
Is this still a vector space? Explain why or why not.
(b) Now consider the set of divergent sequences, with the same addition and scalar multiplication as before:
V = { f | f : N → R, lim_{n→∞} f does not exist or is ±∞ } .
Is this a vector space? Explain why or why not.
4. Consider the set of 2 × 4 matrices:
V = { [a b c d; e f g h] | a, b, c, d, e, f, g, h ∈ C }
Propose definitions for addition and scalar multiplication in V . Identify the zero vector in V , and check that every matrix has an additive inverse.
5. Let P3R be the set of polynomials with real coefficients of degree three or less.
Propose a definition of addition and scalar multiplication to make P3R a vector space.
Identify the zero vector, and find the additive inverse for the vector −3 − 2x + x².
Show that P3R is not a vector space over C. Propose a small change to the definition of P3R to make it a vector space over C.
Problem 5 hint
7 Linear Transformations
Recall that the key properties of vector spaces are vector addition and scalar multiplication. Now suppose we have two vector spaces V and W and a map L between them:
L: V → W
Now, both V and W have notions of vector addition and scalar multiplication. It would be ideal if the map L preserved these operations. In other words, if adding vectors and then applying L were the same as applying L to two vectors and then adding them. Likewise, it would be nice if, when multiplying by a scalar, it didn’t matter whether we multiplied before or after applying L. In formulas, this means that for any u,v ∈ V and c ∈ R:
L(u+v) = L(u)+L(v) L(cv) = cL(v)
Combining these two requirements into one equation, we get the definition of a linear function or linear transformation.
Definition A function L : V → W is linear if for all u, v ∈ V and r, s ∈ R we have
L(ru + sv) = rL(u) + sL(v)
Notice that on the left the addition and scalar multiplication occur in V , while on the right the operations occur in W . This is often called the linearity property of a linear transformation.
Reading homework: problem 7.1
Example Take L : R3 → R3 defined by:
L [x; y; z] = [x + y; y + z; 0]
Call u = [x; y; z], v = [a; b; c]. Now check linearity.
L(ru + sv) = L( r[x; y; z] + s[a; b; c] )
           = L( [rx; ry; rz] + [sa; sb; sc] )
           = L [rx + sa; ry + sb; rz + sc]
           = [rx + sa + ry + sb; ry + sb + rz + sc; 0] .
On the other hand,
rL(u) + sL(v) = rL [x; y; z] + sL [a; b; c]
              = r [x + y; y + z; 0] + s [a + b; b + c; 0]
              = [rx + ry; ry + rz; 0] + [sa + sb; sb + sc; 0]
              = [rx + sa + ry + sb; ry + sb + rz + sc; 0] .
Then the two sides of the linearity requirement are equal, so L is a linear transformation.
Remark We can write the linear transformation L in the previous example using a matrix like so:
L [x; y; z] = [x + y; y + z; 0] = [1 1 0; 0 1 1; 0 0 0] [x; y; z]
Reading homework: problem 7.2
We previously checked that matrix multiplication on vectors obeyed the rule M (ru + sv) = rMu + sMv, so matrix multiplication is linear. As such, our check on L was guaranteed to work. In fact, matrix multiplication on vectors is a linear transformation.
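For readers who like to check such computations numerically, here is a small Python sketch (ours, not from the text) that tests the linearity equation L(ru + sv) = rL(u) + sL(v) for the map in the example at one choice of u, v, r, s.

def L(v):
    # the map L(x, y, z) = (x + y, y + z, 0) from the example above
    x, y, z = v
    return (x + y, y + z, 0)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(r, v):
    return tuple(r * a for a in v)

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
r, s = 2.0, -3.0
lhs = L(add(scale(r, u), scale(s, v)))      # L(ru + sv)
rhs = add(scale(r, L(u)), scale(s, L(v)))   # rL(u) + sL(v)
assert lhs == rhs
print(lhs)                                  # (-21.0, -23.0, 0)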
A linear and non-linear example
Example Let V be the vector space of polynomials of finite degree with standard addition and scalar multiplication.
V = { a_0 + a_1 x + · · · + a_n x^n | n ∈ N, a_i ∈ R }
Let L : V → V be the derivative d/dx. For p_1 and p_2 polynomials, the rules of differentiation tell us that
d/dx (r p_1 + s p_2) = r dp_1/dx + s dp_2/dx .
Thus, the derivative is a linear function from the set of polynomials to itself. We can represent a polynomial as a “semi-infinite vector”, like so:
a_0 + a_1 x + · · · + a_n x^n ←→ [a_0; a_1; . . . ; a_n; 0; 0; . . .]
Then we have:
d/dx (a_0 + a_1 x + · · · + a_n x^n) = a_1 + 2a_2 x + · · · + n a_n x^{n−1} ←→ [a_1; 2a_2; . . . ; n a_n; 0; 0; . . .]
One could then write the derivative as an “infinite matrix”:
d/dx ←→ [0 1 0 0 · · · ; 0 0 2 0 · · · ; 0 0 0 3 · · · ; . . .]
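A finite truncation of this “infinite matrix” is easy to play with on a computer. The Python sketch below is ours, not from the text; deriv_matrix and apply are made-up helper names, and N = 5 truncates to polynomials of degree less than 5.

N = 5   # work with polynomials of degree < N

def deriv_matrix(N):
    # the N x N truncation: 1, 2, 3, ... on the superdiagonal, zeros elsewhere
    return [[(j if j == i + 1 else 0) for j in range(N)] for i in range(N)]

def apply(M, v):
    # ordinary matrix-vector multiplication on coefficient vectors
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

D = deriv_matrix(N)
p = [1, 0, 2, 4, 0]          # coefficients of 1 + 2x^2 + 4x^3
print(apply(D, p))           # [0, 4, 12, 0, 0], i.e. 4x + 12x^2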
Foreshadowing Dimension. You probably have some intuitive notion of what dimen- sion means, though we haven’t actually defined the idea of dimension mathematically yet. Some of the examples of vector spaces we have worked with have been finite dimensional. (For example, Rn will turn out to have dimension n.) The polynomial example above is an example of an infinite dimensional vector space.
Roughly speaking, dimension is the number of independent directions available. To figure out dimension, I stand at the origin, and pick a direction. If there are any vectors in my vector space that aren’t in that direction, then I choose another direction that isn’t in the line determined by the direction I chose. If there are any vectors in my vector space not in the plane determined by the first two directions, then I choose one of them as my next direction. In other words, I choose a collection of independent vectors in the vector space. The size of a minimal set of independent vectors is the dimension of the vector space.
For finite dimensional vector spaces, linear transformations can always be repre- sented by matrices. For that reason, we will start studying matrices intensively in the next few lectures.
References
Hefferon, Chapter Three, Section II. (Note that Hefferon uses the term ho- momorphism for a linear map. ‘Homomorphism’ is a very general term which in mathematics means ‘Structure-preserving map.’ A linear map preserves the linear structure of a vector space, and is thus a type of homomorphism.)
Beezer, Chapter LT, Section LT, Subsections LT, LTC, and MLT. Wikipedia:
Linear Transformation Dimension
Review Problems
1. Show that the pair of conditions:
(i) L(u+v)=L(u)+L(v) (ii) L(cv) = cL(v)
is equivalent to the single condition:
(iii) L(ru + sv) = rL(u) + sL(v) .
Your answer should have two parts. Show that (i,ii)⇒(iii), and then
show that (iii)⇒(i,ii).
2. Let Pn be the space of polynomials of degree n or less in the variable t. Suppose L is a linear transformation from P2 → P3 such that L(1) = 4, L(t) = t3, and L(t2) = t − 1.
Find L(1 + t + 2t²).
Find L(a + bt + ct²).
Find all values a, b, c such that L(a + bt + ct²) = 1 + 3t + 2t³.
Hint
3. Show that integration is a linear transformation on the vector space of polynomials. What would a matrix for integration look like? Be sure to think about what to do with the constant of integration.
Finite degree example
4. Let z ∈ C. Recall that we can express z = a + bi where a, b ∈ R, and we can form the complex conjugate of z by taking z̄ = a − bi (note that this is unique since the conjugate of z̄ is z again). So we can define a function c : R2 → R2 which sends (a, b) → (a, −b), and it is clear that c agrees with complex conjugation.
(a) Show that c is a linear map over R (i.e. scalars in R).
(b) Show that complex conjugation z → z̄ is not linear over C.
8 Matrices
Definition An r × k matrix M = (mij) for i = 1,...,r;j = 1,...,k is a
rectangular array of real (or complex) numbers:
M = [m^1_1 m^1_2 · · · m^1_k ; m^2_1 m^2_2 · · · m^2_k ; . . . ; m^r_1 m^r_2 · · · m^r_k]
The numbers m^i_j are called entries. The superscript indexes the row of the matrix and the subscript indexes the column of the matrix in which m^i_j appears².
It is often useful to consider matrices whose entries are more general than the real numbers, so we allow that possibility.
An r × 1 matrix v = (v^r_1) = (v^r) is called a column vector, written
v = [v^1; v^2; . . . ; v^r] .
A 1 × k matrix v = (v^1_k) = (v_k) is called a row vector, written
v = (v_1 v_2 · · · v_k) .
Matrices are a very useful and efficient way to store information:
Example In computer graphics, you may have encountered image files with a .gif extension. These files are actually just matrices: at the start of the file the size of the matrix is given, and then each entry of the matrix is a number indicating the color of a particular pixel in the image.
The resulting matrix then has its rows shuffled a bit: by listing, say, every eighth row, then a web browser downloading the file can start displaying an incomplete version of the picture before the download is complete.
Finally, a compression algorithm is applied to the matrix to reduce the size of the file.
² This notation was first introduced by Albert Einstein.
Adjacency Matrix Example
Example Graphs occur in many applications, ranging from telephone networks to airline routes. In the subject of graph theory, a graph is just a collection of vertices and some edges connecting vertices. A matrix can be used to indicate how many edges attach one vertex to another.
For example, the graph pictured above would have the following matrix, where mij indicates the number of edges between the vertices labeled i and j:
M = [1 2 1 1; 2 0 1 0; 1 1 0 1; 1 0 1 3]
This is an example of a symmetric matrix, since mij = mji .
The space of r × k matrices M^r_k is a vector space with the addition and scalar multiplication defined as follows:
M + N = (m^i_j) + (n^i_j) = (m^i_j + n^i_j)
rM = r(m^i_j) = (r m^i_j)
In other words, addition just adds corresponding entries in two matrices, and scalar multiplication multiplies every entry. Notice that M^n_1 = Rn is just the vector space of column vectors.
Recall that we can multiply an r × k matrix by a k × 1 column vector to produce an r × 1 column vector using the rule
MV = ( Σ_{j=1}^{k} m^i_j v^j ) .
This suggests a rule for multiplying an r×k matrix M by a k×s matrix N: our k×s matrix N consists of s column vectors side-by-side, each of dimension k × 1. We can multiply our r × k matrix M by each of these s column vectors using the rule we already know, obtaining s column vectors each of dimension r × 1. If we place these s column vectors side-by-side, we obtain an r × s matrix MN.
That is, let
N = [n^1_1 n^1_2 · · · n^1_s ; n^2_1 n^2_2 · · · n^2_s ; . . . ; n^k_1 n^k_2 · · · n^k_s]
and call the columns N_1 through N_s:
N_1 = [n^1_1; n^2_1; . . . ; n^k_1] ,  N_2 = [n^1_2; n^2_2; . . . ; n^k_2] ,  . . . ,  N_s = [n^1_s; n^2_s; . . . ; n^k_s] .
Then
MN = M [N_1 N_2 · · · N_s] = [MN_1 MN_2 · · · MN_s] .
A more concise way to write this rule is: If M = (m^i_j) for i = 1, . . . , r; j = 1, . . . , k and N = (n^i_j) for i = 1, . . . , k; j = 1, . . . , s, then MN = L where L = (l^i_j) for i = 1, . . . , r; j = 1, . . . , s is given by
l^i_j = Σ_{p=1}^{k} m^i_p n^p_j .
This rule obeys linearity.
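The index formula above translates directly into a triple loop. The following Python sketch is ours, not from the text (matmul is a made-up helper name); it stores a matrix as a list of its rows.

def matmul(M, N):
    # (MN)^i_j = sum over p of m^i_p * n^p_j -- exactly the rule above
    r, k = len(M), len(M[0])
    assert k == len(N), "columns of M must match rows of N"
    s = len(N[0])
    return [[sum(M[i][p] * N[p][j] for p in range(k)) for j in range(s)] for i in range(r)]

# a (3 x 1) matrix times a (1 x 2) matrix, as in the example below:
print(matmul([[1], [3], [2]], [[2, 3]]))   # [[2, 3], [6, 9], [4, 6]]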
Notice that in order for the multiplication to make sense, the columns and rows must match. For an r×k matrix M and an s×m matrix N, then to make the product MN we must have k = s. Likewise, for the product NM, it is required that m = r. A common shorthand for keeping track of the sizes of the matrices involved in a given product is:
(r × k) × (k × m) = (r × m)
Example Multiplying a (3 × 1) matrix and a (1 × 2) matrix yields a (3 × 2) matrix.
[1; 3; 2] [2 3] = [1·2 1·3; 3·2 3·3; 2·2 2·3] = [2 3; 6 9; 4 6]
Reading homework: problem 8.1
Recall that r × k matrices can be used to represent linear transformations
Rk → Rr via
MV = ( Σ_{j=1}^{k} m^i_j v^j ) ,
which is the same rule we use when we multiply an r×k matrix by a k×1 vector to produce an r × 1 vector.
Likewise, we can use a matrix N = (n^i_j) to represent a linear transformation
L : M^s_k → M^r_k
via
L(M)^i_l = Σ_{j=1}^{s} n^i_j m^j_l .
This is the same as the rule we use to multiply matrices. In other words,
L(M) = NM is a linear transformation.
Matrix Terminology The entries m^i_i are called diagonal, and the set {m^1_1, m^2_2, . . .} is called the diagonal of the matrix.
Any r × r matrix is called a square matrix . A square matrix that is zero for all non-diagonal entries is called a diagonal matrix.
The r × r diagonal matrix with all diagonal entries equal to 1 is called the identity matrix, Ir, or just 1. An identity matrix looks like
[1 0 0 · · · 0; 0 1 0 · · · 0; 0 0 1 · · · 0; . . . ; 0 0 0 · · · 1]
The identity matrix is special because
IrM = MIk = M
for all M of size r×k.
In the matrix given by the product of matrices above, the diagonal entries
are 2 and 9. An example of a diagonal matrix is
[2 0 0; 0 3 0; 0 0 0] .
Definition The transpose of an r × k matrix M = (m^i_j) is the k × r matrix with entries
M^T = (m̄^i_j)  with  m̄^i_j = m^j_i .
A matrix M is symmetric if M = M^T.
Example [2 5 6; 1 3 4]^T = [2 1; 5 3; 6 4]
Observations
Reading homework: problem 8.2
Only square matrices can be symmetric.
The transpose of a column vector is a row vector, and vice-versa.
Taking the transpose of a matrix twice does nothing. i.e., (M T )T = M .
Theorem 8.1 (Transpose and Multiplication). Let M,N be matrices such that MN makes sense. Then (MN)T = NT MT .
The proof of this theorem is left to Review Question 2.
Sometimes matrices do not share the properties of regular numbers, watch this video to see why:
Matrices do not Commute
Many properties of matrices follow from the same property for real numbers. Here is an example.
Example Associativity of matrix multiplication. We know for real numbers x, y and z that
x(yz) = (xy)z ,
i.e. the order of bracketing does not matter. The same property holds for matrix
multiplication, let us show why. Suppose M = (m^i_j), N = (n^j_k) and R = (r^k_l) are, respectively, m × n, n × r and r × t matrices. Then from the rule for matrix multiplication we have
MN = ( Σ_{j=1}^{n} m^i_j n^j_k )  and  NR = ( Σ_{k=1}^{r} n^j_k r^k_l ) .
So first we compute
(MN)R = ( Σ_{k=1}^{r} [ Σ_{j=1}^{n} m^i_j n^j_k ] r^k_l ) = ( Σ_{k=1}^{r} Σ_{j=1}^{n} [ m^i_j n^j_k ] r^k_l ) = ( Σ_{k=1}^{r} Σ_{j=1}^{n} m^i_j n^j_k r^k_l ) .
In the first step we just wrote out the definition for matrix multiplication, in the second step we moved the summation symbol outside the bracket (this is just the distributive property x(y + z) = xy + xz for numbers) and in the last step we used the associativity property for real numbers to remove the square brackets. Exactly the same reasoning shows that
M(NR) = ( Σ_{j=1}^{n} m^i_j [ Σ_{k=1}^{r} n^j_k r^k_l ] ) = ( Σ_{j=1}^{n} Σ_{k=1}^{r} m^i_j [ n^j_k r^k_l ] ) = ( Σ_{j=1}^{n} Σ_{k=1}^{r} m^i_j n^j_k r^k_l ) .
This is the same as above so we are done. As a fun remark, note that Einstein would simply have written (MN)R = (m^i_j n^j_k) r^k_l = m^i_j n^j_k r^k_l = m^i_j (n^j_k r^k_l) = M(NR).
References
Hefferon, Chapter Three, Section IV, parts 1-3. Beezer, Chapter M, Section MM.
Wikipedia:
Matrix Multiplication
Review Problems
1. Compute the following matrix products
1 2 1−2 4 −1 2
1 2
52
4 5 2 2 − 3 3 ,
7 8 2 − 1 2 − 1
1 2 3 4 5 3 ,
4 5
3 1 2 3 4 5 ,
52
4 5 2 2 − 3 3 4 5 2 ,
7 8 2 − 1 2 − 1 7 8 2 2 1 2 1 21 2 1 2 1
4 5
x y
x 02121 01212
211
z1 2 1y ,
112 z
0121202121 ,
0212101212 00002 00001
1 33
1 2 1−2 4 −11 2 1 33
−2 4 −14 2 −21 2 1 33 33
5252
2 − 3 3 6 3 − 3 4 5 2 .
−1 2 −1 12 −16 10 7 8 2 33
2. Let’s prove the theorem (MN)T = NT MT .
Note: the following is a common technique for proving matrix identities.
(a) Let M = (mij) and let N = (nij). Write out a few of the entries of each matrix in the form given at the beginning of this chapter.
(b) Multiply out MN and write out a few of its entries in the same form as in part a. In terms of the entries of M and the entries of N, what is the entry in row i and column j of MN?
(c) Take the transpose (MN)T and write out a few of its entries in the same form as in part a. In terms of the entries of M and the entries of N, what is the entry in row i and column j of (MN)T ?
(d) Take the transposes NT and MT and write out a few of their entries in the same form as in part a.
(e) Multiply out N T M T and write out a few of its entries in the same form as in part a. In terms of the entries of M and the entries of N, what is the entry in row i and column j of NTMT?
(f) Show that the answers you got in parts c and e are the same.
3. Let M be any m×n matrix. Show that MTM and MMT are sym- metric. (Hint: use the result of the previous problem.) What are their sizes?
4. Let x = [x_1; . . . ; x_n] and y = [y_1; . . . ; y_n] be column vectors. Show that the dot product x · y = x^T 1 y (here 1 denotes the identity matrix).
Hint
Problem hint
5. Above, we showed that left multiplication by an r × s matrix N was a linear transformation M^s_k → M^r_k. Show that right multiplication by a k × m matrix R is a linear transformation M^s_k → M^s_m. In other words, show that right matrix multiplication obeys linearity.
6. Explain what happens to a matrix when:
(a) You multiply it on the left by a diagonal matrix. (b) You multiply it on the right by a diagonal matrix.
Give a few simple examples before you start explaining.
9 Properties of Matrices
9.1 Block Matrices
It is often convenient to partition a matrix M into smaller matrices called blocks, like so:
M = [1 2 3 1; 4 5 6 0; 7 8 9 1; 0 1 2 0] = [A B; C D]
Here A = [1 2 3; 4 5 6; 7 8 9], B = [1; 0; 1], C = (0 1 2), D = (0).
The blocks of a block matrix must fit together to form a rectangle. So
[B A; D C] makes sense, but [C B; D A] does not.
Reading homework: problem 9.1
There are many ways to cut up an n × n matrix into blocks. Often context or the entries of the matrix will suggest a useful way to divide the matrix into blocks. For example, if there are large blocks of zeros in a matrix, or blocks that look like an identity matrix, it can be useful to partition the matrix accordingly.
Matrix operations on block matrices can be carried out by treating the blocks as matrix entries. In the example above,
M² = [A B; C D][A B; C D]
   = [A² + BC  AB + BD; CA + DC  CB + D²]
Computing the individual blocks, we get:
A² + BC = [30 37 44; 66 81 96; 102 127 152],  AB + BD = [4; 10; 16],
CA + DC = (18 21 24),  CB + D² = (2) .
Assembling these pieces into a block matrix gives:
[30 37 44 4; 66 81 96 10; 102 127 152 16; 18 21 24 2]
This is exactly M².
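As a sanity check on the block computation, the short Python sketch below (ours, not from the text; matmul and madd are made-up helpers) squares M directly and compares the top-left 3 × 3 block with A² + BC.

def matmul(X, Y):
    return [[sum(X[i][p] * Y[p][j] for p in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

M = [[1, 2, 3, 1], [4, 5, 6, 0], [7, 8, 9, 1], [0, 1, 2, 0]]
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1], [0], [1]]
C = [[0, 1, 2]]

M2 = matmul(M, M)
top_left = madd(matmul(A, A), matmul(B, C))        # the block A^2 + BC
assert all(M2[i][j] == top_left[i][j] for i in range(3) for j in range(3))
print(top_left)    # [[30, 37, 44], [66, 81, 96], [102, 127, 152]]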
9.2 The Algebra of Square Matrices
Not every pair of matrices can be multiplied. When multiplying two matrices, the number of columns in the left matrix must equal the number of rows in the right. For an r × k matrix M and an s × l matrix N, to make the product MN we must have k = s.
This is not a problem for square matrices of the same size, though. Two n × n matrices can be multiplied in either order. For a single matrix M ∈ M^n_n, we can form M² = MM, M³ = MMM, and so on, and define M⁰ = I_n, the identity matrix.
As a result, any polynomial can be evaluated on a matrix.
Example Let f(x) = x − 2x² + 3x³.
Let M = [1 t; 0 1]. Then:
M² = [1 2t; 0 1],  M³ = [1 3t; 0 1],  . . .
Hence:
f(M) = [1 t; 0 1] − 2 [1 2t; 0 1] + 3 [1 3t; 0 1]
     = [2 6t; 0 2]
Suppose f(x) is any function defined by a convergent Taylor series:
f(x) = f(0) + f′(0)x + (1/2!) f′′(0)x² + · · ·
Then we can define the matrix function by just plugging in M:
f(M) = f(0) + f′(0)M + (1/2!) f′′(0)M² + · · ·
There are additional techniques to determine the convergence of Taylor series of matrices, based on the fact that the convergence problem is simple for diagonal matrices. It also turns out that
exp(M) = 1 + M + (1/2!) M² + (1/3!) M³ + · · ·
always converges.
Matrix Exponential Example
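One crude way to see this numerically is to truncate the Taylor series after a fixed number of terms. The Python sketch below is ours, not from the text; mat_exp is a made-up helper, and truncating after 20 terms is an arbitrary choice that happens to be exact for the nilpotent example shown.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][p] * Y[p][j] for p in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(M, terms=20):
    # exp(M) ~ I + M + M^2/2! + ... , truncated after `terms` terms
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]   # running sum, starts at I
    power = [[float(i == j) for j in range(n)] for i in range(n)]    # running power M^k
    factorial = 1.0
    for k in range(1, terms):
        power = matmul(power, M)
        factorial *= k
        result = [[result[i][j] + power[i][j] / factorial for j in range(n)] for i in range(n)]
    return result

print(mat_exp([[0.0, 1.0], [0.0, 0.0]]))   # [[1.0, 1.0], [0.0, 1.0]]: here exp(M) = I + M exactly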
Matrix multiplication does not commute. For generic n × n square matrices M and N, MN ≠ NM. For example:
[1 1; 0 1][1 0; 1 1] = [2 1; 1 1]
On the other hand:
[1 0; 1 1][1 1; 0 1] = [1 1; 1 2]
Since n × n matrices are linear transformations Rn → Rn, we can see that the order of successive linear transformations matters. For two linear transformations K and L taking Rn → Rn, and v ∈ Rn, then in general
K(L(v)) ̸= L(K(v)) .
Finding matrices such that MN = NM is an important problem in mathematics.
74
Here is an example of matrices acting on objects in three dimensions that also shows matrices not commuting.
Example You learned in a Review Problem that the matrix
M = [cos θ  sin θ; −sin θ  cos θ]
rotates vectors in the plane by an angle θ. We can generalize this, using block matrices, to three dimensions. In fact the following matrices built from a 2 × 2 rotation matrix, a 1 × 1 identity matrix and zeroes everywhere else,
M = [cos θ  sin θ  0; −sin θ  cos θ  0; 0  0  1]  and  N = [1  0  0; 0  cos θ  sin θ; 0  −sin θ  cos θ] ,
perform rotations by an angle θ in the xy and yz planes, respectively. Because they rotate single vectors, you can also use them to rotate objects built from a collection of vectors like pretty colored blocks! Here is a picture of M and then N acting on such a block, compared with the case of N followed by M. The special case of θ = 90° is shown.
Notice how the end products of MN and NM are different, so MN ≠ NM here.
Trace
Matrices contain a great deal of information, so finding ways to extract essential information is useful. Here we need to assume that n < ∞ otherwise there are subtleties with convergence that we’d have to address.
Definition The trace of a square matrix M = (m^i_j) is the sum of its diagonal entries:
tr M = Σ_{i=1}^{n} m^i_i .
Example
tr [2 7 6; 9 5 1; 4 3 8] = 2 + 5 + 8 = 15
While matrix multiplication does not commute, the trace of a product of matrices does not depend on the order of multiplication:
tr(MN) = tr( Σ_l M^i_l N^l_j )
       = Σ_i Σ_l M^i_l N^l_i
       = Σ_l Σ_i N^l_i M^i_l
       = tr( Σ_i N^l_i M^i_l )
       = tr(NM) .
Explanation of this Proof
Thus we have a Theorem:
Theorem 9.1.
tr(MN) = tr(NM) for any square matrices M and N.
Example Continuing from the previous example,
M = [1 1; 0 1] ,  N = [1 0; 1 1] ,
so
MN = [2 1; 1 1] ≠ NM = [1 1; 1 2] . However, tr(MN) = 2 + 1 = 3 = 1 + 2 = tr(NM).
Another useful property of the trace is that:
tr M = tr M^T
This is true because the trace only uses the diagonal entries, which are fixed by the transpose. For example:
tr [1 1; 2 3] = 4 = tr [1 2; 1 3] = tr [1 1; 2 3]^T
Finally, trace is a linear transformation from matrices to the real numbers. This is easy to check.
More on the trace function
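A quick numerical illustration of the two trace facts above (ours, not from the text; matmul and trace are made-up helpers), using the matrices from the previous example:

def matmul(X, Y):
    return [[sum(X[i][p] * Y[p][j] for p in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def trace(M):
    # the sum of the diagonal entries
    return sum(M[i][i] for i in range(len(M)))

M = [[1, 1], [0, 1]]
N = [[1, 0], [1, 1]]
print(trace(matmul(M, N)), trace(matmul(N, M)))   # 3 3, even though MN != NM
print(trace(M), trace([[1, 0], [1, 1]]))          # 2 2: tr M = tr M^T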
Linear Systems Redux Recall that we can view a linear system as a matrix equation
MX = V,
with M an r×k matrix of coefficients, X a k×1 matrix of unknowns, and V an r × 1 matrix of constants. If M is a square matrix, then the number of equations r is the same as the number of unknowns k, so we have hope of finding a single solution.
Above we discussed functions of matrices. An extremely useful function would be f(M) = 1/M, where M(1/M) = I. If we could compute 1/M, then we would multiply both sides of the equation MX = V by 1/M to obtain the solution immediately: X = (1/M)V.
Clearly, if the linear system has no solution, then there can be no hope of finding 1/M, since if it existed we could find a solution. On the other hand, if the system has more than one solution, it also seems unlikely that 1/M would exist, since X = (1/M)V yields only a single solution.
Therefore 1/M only sometimes exists. It is called the inverse of M, and is usually written M⁻¹.
References
Beezer: Part T, Section T Wikipedia:
Trace (Linear Algebra) Block Matrix
Review Problems
1. Let A = [1 2 0; 3 −1 4]. Find AA^T and A^T A. What can you say about matrices MM^T and M^T M in general? Explain.
2. Compute exp(A) for the following matrices:
A = [λ 0; 0 λ]
A = [1 λ; 0 1]
A = [0 λ; 0 0]
Hint
3. Suppose ad − bc ≠ 0, and let M = [a b; c d].
(a) Find a matrix M⁻¹ such that MM⁻¹ = I.
(b) Explain why your result explains what you found in a previous homework exercise.
(c) Compute M⁻¹M.
4. Let
M = [1 0 0 0 0 0 0 1;
     0 1 0 0 0 0 1 0;
     0 0 1 0 0 1 0 0;
     0 0 0 1 1 0 0 0;
     0 0 0 0 2 1 0 0;
     0 0 0 0 0 2 0 0;
     0 0 0 0 0 0 3 1;
     0 0 0 0 0 0 0 3] .
Divide M into named blocks, and then multiply blocks to compute M².
5. A matrix A is called anti-symmetric (or skew-symmetric) if AT = −A. Show that for every n×n matrix M, we can write M = A+S where A is an anti-symmetric matrix and S is a symmetric matrix.
Hint: What kind of matrix is M + M^T? How about M − M^T?
10 Inverse Matrix
Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M−1 such that
M⁻¹M = I = MM⁻¹ .
Inverse of a 2 × 2 Matrix Let M and N be the matrices:
M = [a b; c d] ,  N = [d −b; −c a]
Multiplying these matrices gives:
MN = [ad − bc  0; 0  ad − bc] = (ad − bc) I
Then M⁻¹ = (1/(ad − bc)) [d −b; −c a], so long as ad − bc ≠ 0.
10.1 Three Properties of the Inverse
1. If A is a square matrix and B is the inverse of A, then A is the inverse of B, since AB = I = BA. Then we have the identity:
(A−1)−1 = A
2. Notice that B−1A−1AB = B−1IB = I = ABB−1A−1. Then: (AB)−1 = B−1A−1
Then much like the transpose, taking the inverse of a product reverses the order of the product.
3. Finally, recall that (AB)T = BT AT . Since IT = I, then (A−1A)T = AT (A−1)T = I. Similarly, (AA−1)T = (A−1)T AT = I. Then:
(A⁻¹)^T = (A^T)⁻¹
As such, we could even write A−T for the inverse of the transpose of A (or equivalently the transpose of the inverse).
Example
10.2 Finding Inverses
Suppose M is a square matrix and MX = V is a linear system with unique solution X0. Since there is a unique solution, M−1V , then the reduced row echelon form of the linear system has an identity matrix on the left:
( M | V ) ∼ ( I | M⁻¹V )
Solving the linear system MX = V then tells us what M−1V is.
To solve many linear systems at once, we can consider augmented matrices with a matrix on the right side instead of a column vector, and then apply Gaussian row reduction to the left side of the matrix. Once the identity matrix is on the left side of the augmented matrix, then the solution of each
of the individual linear systems is on the right.
To compute M−1, we would like M−1, rather than M−1V to appear on
the right side of our augmented matrix. This is achieved by solving the collection of systems MX = ek, where ek is the column vector of zeroes with a 1 in the kth entry. I.e., the n×n identity matrix can be viewed as a bunch of column vectors In = (e1 e2 · · · en ). So, putting the ek ’s together into an identity matrix, we get:
( M | I ) ∼ ( I | M⁻¹I ) = ( I | M⁻¹ )
Example Find [−1 2 −3; 2 1 0; 4 −2 5]⁻¹. Start by writing the augmented matrix, then apply row reduction to the left side.
[−1 2 −3 | 1 0 0 ;  2 1 0 | 0 1 0 ;  4 −2 5 | 0 0 1]
∼ [1 −2 3 | −1 0 0 ;  0 5 −6 | 2 1 0 ;  0 6 −7 | 4 0 1]
∼ [1 0 3/5 | −1/5 2/5 0 ;  0 1 −6/5 | 2/5 1/5 0 ;  0 0 1/5 | 8/5 −6/5 1]
∼ [1 0 0 | −5 4 −3 ;  0 1 0 | 10 −7 6 ;  0 0 1 | 8 −6 5]
At this point, we know M⁻¹ assuming we didn’t goof up. However, row reduction is a lengthy and arithmetically involved process, so we should check our answer, by confirming that MM⁻¹ = I (or if you prefer M⁻¹M = I):
MM⁻¹ = [−1 2 −3; 2 1 0; 4 −2 5][−5 4 −3; 10 −7 6; 8 −6 5] = [1 0 0; 0 1 0; 0 0 1]
The product of the two matrices is indeed the identity matrix, so we’re done.
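The augmented-matrix procedure is easy to automate. The Python sketch below is ours, not from the text; inverse is a made-up helper, and it assumes every pivot it meets is non-zero (so it does no row swaps and no error handling).

def inverse(M):
    n = len(M)
    # build the augmented matrix ( M | I )
    aug = [list(map(float, M[i])) + [1.0 if i == j else 0.0 for j in range(n)]
           for i in range(n)]
    for col in range(n):
        pivot = aug[col][col]          # assumed non-zero; a fuller version would swap rows
        aug[col] = [x / pivot for x in aug[col]]
        for row in range(n):
            if row != col:
                factor = aug[row][col]
                aug[row] = [a - factor * b for a, b in zip(aug[row], aug[col])]
    return [row[n:] for row in aug]    # the right half is now the inverse

print(inverse([[-1, 2, -3], [2, 1, 0], [4, -2, 5]]))
# [[-5.0, 4.0, -3.0], [10.0, -7.0, 6.0], [8.0, -6.0, 5.0]]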
Reading homework: problem 10.1
10.3 Linear Systems and Inverses
If M−1 exists and is known, then we can immediately solve linear systems associated to M.
Example Consider the linear system:
−x + 2y − 3z = 1
2x +  y      = 2
4x − 2y + 5z = 0
The associated matrix equation is MX = [1; 2; 0], where M is the same as in the previous section. Then:
[x; y; z] = [−1 2 −3; 2 1 0; 4 −2 5]⁻¹ [1; 2; 0] = [−5 4 −3; 10 −7 6; 8 −6 5] [1; 2; 0] = [3; −4; −4]
Then [x; y; z] = [3; −4; −4]. In summary, when M⁻¹ exists, then
MX = V ⇒ X = M⁻¹V .
Reading homework: problem 10.2
10.4 Homogeneous Systems
Theorem 10.1. A square matrix M is invertible if and only if the homogeneous system
MX = 0
has no non-zero solutions.
Proof. First, suppose that M−1 exists. Then MX = 0 ⇒ X = M−10 = 0. Thus, if M is invertible, then MX = 0 has no non-zero solutions.
On the other hand, MX = 0 always has the solution X = 0. If no other solutions exist, then M can be put into reduced row echelon form with every variable a pivot. In this case, M−1 can be computed using the process in the previous section.
A great test of your linear algebra knowledge is to make a list of conditions for a matrix to be singular. You will learn more of these as the course goes by, but can also skip straight to the list in Section 24.1.
10.5 Bit Matrices
In computer science, information is recorded using binary strings of data. For example, the following string contains an English word:
011011000110100101101110011001010110000101110010
A bit is the basic unit of information, keeping track of a single one or zero. Computers can add and multiply individual bits very quickly.
Consider the set Z2 = {0, 1} with addition and multiplication given by the following tables:
+ | 0 1        · | 0 1
0 | 0 1        0 | 0 0
1 | 1 0        1 | 0 1
Notice that −1 = 1, since 1 + 1 = 0.
It turns out that Z2 is almost as good as the real or complex numbers
(they are all fields), so we can apply all of the linear algebra we have learned thus far to matrices with Z2 entries. A matrix with entries in Z2 is sometimes called a bit matrix3.
Example [1 0 1; 0 1 1; 1 1 1] is an invertible matrix over Z2:
[1 0 1; 0 1 1; 1 1 1]⁻¹ = [0 1 1; 1 0 1; 1 1 1]
This can be easily verified by multiplying:
[1 0 1; 0 1 1; 1 1 1][0 1 1; 1 0 1; 1 1 1] = [1 0 0; 0 1 0; 0 0 1]
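Arithmetic over Z2 is ordinary integer arithmetic followed by reduction mod 2, so the check above can be done mechanically. The Python sketch below is ours, not from the text (matmul_mod2 is a made-up helper).

def matmul_mod2(X, Y):
    # matrix multiplication with all arithmetic reduced modulo 2
    return [[sum(X[i][p] * Y[p][j] for p in range(len(Y))) % 2 for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]
B = [[0, 1, 1], [1, 0, 1], [1, 1, 1]]
print(matmul_mod2(A, B))   # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]: B is the inverse of A over Z2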
Application: Cryptography A very simple way to hide information is to use a sub- stitution cipher, in which the alphabet is permuted and each letter in a message is systematically exchanged for another. For example, the ROT-13 cypher just exchanges a letter with the letter thirteen places before or after it in the alphabet. For example, HELLO becomes URYYB. Applying the algorithm again decodes the message, turning
³ Note that bits with the usual computer-arithmetic shorthand do not “add” and “multiply” as elements of Z2 do, since those operators correspond to “bitwise or” and “bitwise and” respectively.
URYYB back into HELLO. Substitution ciphers are easy to break, but the basic idea can be extended to create cryptographic systems that are practically uncrackable. For example, a one-time pad is a system that uses a different substitution for each letter in the message. So long as a particular set of substitutions is not used on more than one message, the one-time pad is unbreakable.
English characters are often stored in computers in the ASCII format. In ASCII, a single character is represented by a string of eight bits, which we can consider as a vector in Z2^8 (which is like vectors in R8, where the entries are zeros and ones). One way to create a substitution cipher, then, is to choose an 8 × 8 invertible bit matrix M, and multiply each letter of the message by M. Then to decode the message, each string of eight characters would be multiplied by M⁻¹.
To make the message a bit tougher to decode, one could consider pairs (or longer sequences) of letters as a single vector in Z2^16 (or a higher-dimensional space), and
then use an appropriately-sized invertible matrix.
For more on cryptography, see “The Code Book,” by Simon Singh (1999, Double-
day).
References
Hefferon: Chapter Three, Section IV.2 Beezer: Chapter M, Section MISLE Wikipedia: Invertible Matrix
Review Problems
1. Find formulas for the inverses of the following matrices, when they are not singular:
You are now ready to attempt the first sample midterm.
(a) [1 a b; 0 1 c; 0 0 1]
(b) [a b c; 0 d e; 0 0 f]
When are these matrices singular?
2. Write down all 2×2 bit matrices and decide which of them are singular. For those which are not singular, pair them with their inverse.
3. Let M be a square matrix. Explain why the following statements are equivalent:
(a) M X = V has a unique solution for every column vector V . (b) M is non-singular.
(In general for problems like this, think about the key words:
First, suppose that there is some column vector V such that the equa- tion MX = V has two distinct solutions. Show that M must be sin- gular; that is, show that M can have no inverse.
Next, suppose that there is some column vector V such that the equa- tion MX = V has no solutions. Show that M must be singular.
Finally, suppose that M is non-singular. Show that no matter what the column vector V is, there is a unique solution to MX = V.)
Hints for Problem 3
4. Left and Right Inverses: So far we have only talked about inverses of square matrices. This problem will explore the notion of a left and right inverse for a matrix that is not square. Let
A = [0 1 1; 1 1 0]
(a) Compute:
i. AA^T,
ii. (AA^T)⁻¹,
iii. B := A^T (AA^T)⁻¹
(b) Show that the matrix B above is a right inverse for A, i.e., verify that
AB = I .
(c) Does BA make sense? (Why not?)
(d) Let A be an n × m matrix with n > m. Suggest a formula for a left inverse C such that
CA = I
Hint: you may assume that A^T A has an inverse.
(e) Test your proposal for a left inverse for the simple example
A = [1; 2] .
(f) True or false: Left and right inverses are unique. If false give a counterexample.
Left and Right Inverses
11 LU Decomposition
Certain matrices are easier to work with than others. In this section, we will see how to write any square4 matrix M as the product of two simpler matrices. We will write
M = LU ,
where:
L is lower triangular. This means that all entries above the main
diagonal are zero. In notation, L = (l^i_j) with l^i_j = 0 for all j > i.
L = [l^1_1 0 0 · · · ; l^2_1 l^2_2 0 · · · ; l^3_1 l^3_2 l^3_3 · · · ; . . .]
U is upper triangular. This means that all entries below the main
diagonal are zero. In notation, U = (u^i_j) with u^i_j = 0 for all j < i.
U = [u^1_1 u^1_2 u^1_3 · · · ; 0 u^2_2 u^2_3 · · · ; 0 0 u^3_3 · · · ; . . .]
M = LU is called an LU decomposition of M.
This is a useful trick for many computational reasons. It is much easier
to compute the inverse of an upper or lower triangular matrix. Since inverses are useful for solving linear systems, this makes solving any linear system associated to the matrix much faster as well. The determinant—a very important quantity associated with any square matrix—is very easy to compute for triangular matrices.
Example Linear systems associated to upper triangular matrices are very easy to solve by back substitution.
( a b | 1 ; 0 c | e )  ⇒  y = e/c ,  x = (1/a)(1 − be/c)
( 1 0 0 | d ; a 1 0 | e ; b c 1 | f )  ⇒  x = d ,  y = e − ad ,  z = f − bd − c(e − ad)
For lower triangular matrices, forward substitution gives a quick solution; for upper triangular matrices, back substitution gives the solution.
⁴ The case where M is not square is dealt with at the end of the lecture.
11.1 Using LU Decomposition to Solve Linear Systems Suppose we have M = LU and want to solve the system
MX = LUX = V .
Step 1: Set W = [u; v; w] = UX.
Step 2: Solve the system LW = V . This should be simple by forward substitution since L is lower triangular. Suppose the solution to LW = V is W0.
Step 3: Now solve the system UX = W0. This should be easy by backward substitution, since U is upper triangular. The solution to this system is the solution to the original system.
We can think of this as using the matrix L to perform row operations on the matrix U in order to solve the system; this idea also appears in the study of determinants.
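The three steps amount to one forward substitution followed by one back substitution. The Python sketch below is ours, not from the text; forward_sub and back_sub are made-up helpers with no pivoting or error handling, and the L and U used are the ones stated in the example that follows.

def forward_sub(L, V):
    # solve LW = V for W when L is lower triangular with non-zero diagonal
    n = len(L)
    W = [0.0] * n
    for i in range(n):
        W[i] = (V[i] - sum(L[i][j] * W[j] for j in range(i))) / L[i][i]
    return W

def back_sub(U, W):
    # solve UX = W for X when U is upper triangular with non-zero diagonal
    n = len(U)
    X = [0.0] * n
    for i in reversed(range(n)):
        X[i] = (W[i] - sum(U[i][j] * X[j] for j in range(i + 1, n))) / U[i][i]
    return X

L = [[3.0, 0.0, 0.0], [1.0, 6.0, 0.0], [2.0, 3.0, 1.0]]
U = [[2.0, 6.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
W0 = forward_sub(L, [3.0, 19.0, 0.0])   # [1.0, 3.0, -11.0]
print(back_sub(U, W0))                  # [-3.0, 3.0, -11.0]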
Reading homework: problem 11.1
Example Consider the linear system:
6x + 18y + 3z =  3
2x + 12y +  z = 19
4x + 15y + 3z =  0
An LU decomposition for the associated matrix M is:
[6 18 3; 2 12 1; 4 15 3] = [3 0 0; 1 6 0; 2 3 1] [2 6 1; 0 1 0; 0 0 1] .
Step 1: Set W = [u; v; w] = UX.
Step 2: Solve the system LW = V:
[3 0 0; 1 6 0; 2 3 1] [u; v; w] = [3; 19; 0]
By substitution, we get u = 1, v = 3, and w = −11. Then
W0 = [1; 3; −11]
Step 3: Solve the system UX = W0.
[2 6 1; 0 1 0; 0 0 1] [x; y; z] = [1; 3; −11]
Back substitution gives z = −11, y = 3, and x = −3.
Then X = [−3; 3; −11], and we’re done.
Using a LU decomposition
11.2 Finding an LU Decomposition.
For any given matrix, there are actually many different LU decompositions. However, there is a unique LU decomposition in which the L matrix has ones on the diagonal; then L is called a lower unit triangular matrix.
To find the LU decomposition, we’ll create two sequences of matrices L0,L1,... and U0,U1,... such that at each step, LiUi = M. Each of the Li will be lower triangular, but only the last Ui will be upper triangular.
Start by setting L0 = I and U0 = M, because L0U0 = M. A main concept of this calculation is captured by the following example:
Example Consider
E = [1 0; λ 1] ,  M = [a b c · · · ; d e f · · ·] .
Let’s compute EM:
EM = [a b c · · · ; d + λa  e + λb  f + λc · · ·] .
Something neat happened here: multiplying M by E performed the row operation R2 → R2 + λR1 on M. Another interesting fact:
E⁻¹ := [1 0; −λ 1]
obeys (check this yourself...)
E⁻¹E = 1 .
Hence M = E⁻¹EM or, writing this out,
[a b c · · · ; d e f · · ·] = [1 0; −λ 1] [a b c · · · ; d + λa  e + λb  f + λc · · ·] .
Here the matrix on the left is lower triangular, while the matrix on the right has had a row operation performed on it.
We would like to use the first row of U0 to zero out the first entry of every row below it. For our running example,
U0 = M = [6 18 3; 2 12 1; 4 15 3] ,
so we would like to perform the row operations R2 → R2 − (1/3)R1 and R3 → R3 − (2/3)R1. If we perform these row operations on U0 to produce
U1 = [6 18 3; 0 6 0; 0 3 1] ,
we need to multiply this on the left by a lower triangular matrix L1 so that the product L1U1 = M still. The above example shows how to do this: Set L1 to be the lower triangular matrix whose first column is filled with minus the constants used to zero out the first column of M. Then
L1 = [1 0 0; 1/3 1 0; 2/3 0 1] .
By construction L1U1 = M, but you should compute this yourself as a double check.
Now repeat the process by zeroing the second column of U1 below the diagonal using the second row of U1, using the row operation R3 → R3 − (1/2)R2 to produce
U2 = [6 18 3; 0 6 0; 0 0 1] .
The matrix that undoes this row operation is obtained in the same way we found L1 above and is:
[1 0 0; 0 1 0; 0 1/2 1] .
Thus our answer for L2 is the product of this matrix with L1, namely
L2 = [1 0 0; 1/3 1 0; 2/3 0 1][1 0 0; 0 1 0; 0 1/2 1] = [1 0 0; 1/3 1 0; 2/3 1/2 1] .
Notice that it is lower triangular because
Notice that it is lower triangular because
THE PRODUCT OF LOWER TRIANGULAR MATRICES IS ALWAYS LOWER TRIANGULAR!
Moreover it is obtained by recording minus the constants used for all our row operations in the appropriate columns (this always works this way). Moreover, U2 is upper triangular and M = L2U2, so we are done! Putting this all together we have
M = [6 18 3; 2 12 1; 4 15 3] = [1 0 0; 1/3 1 0; 2/3 1/2 1] [6 18 3; 0 6 0; 0 0 1] .
If the matrix you’re working with has more than three rows, just continue this process by zeroing out the next column below the diagonal, and repeat until there’s nothing left to do.
Another LU decomposition example
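The column-by-column elimination just described is easy to code. The Python sketch below is ours, not from the text; lu_decompose is a made-up helper that records the elimination constants into a lower unit triangular L, and it assumes every pivot it meets is non-zero.

def lu_decompose(M):
    n = len(M)
    U = [list(map(float, row)) for row in M]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n - 1):
        for row in range(col + 1, n):
            constant = U[row][col] / U[col][col]   # assumes the pivot U[col][col] is non-zero
            L[row][col] = constant                 # record the constant used ...
            U[row] = [a - constant * b for a, b in zip(U[row], U[col])]   # ... and do the row op
    return L, U

L, U = lu_decompose([[6, 18, 3], [2, 12, 1], [4, 15, 3]])
print(L)   # [[1.0, 0.0, 0.0], [0.333..., 1.0, 0.0], [0.666..., 0.5, 1.0]] (i.e. 1/3, 2/3, 1/2)
print(U)   # [[6.0, 18.0, 3.0], [0.0, 6.0, 0.0], [0.0, 0.0, 1.0]]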
The fractions in the L matrix are admittedly ugly. For two matrices LU, we can multiply one entire column of L by a constant λ and divide the corresponding row of U by the same constant without changing the product of the two matrices. Then:
LU = [1 0 0; 1/3 1 0; 2/3 1/2 1] I [6 18 3; 0 6 0; 0 0 1]
   = [1 0 0; 1/3 1 0; 2/3 1/2 1] [3 0 0; 0 6 0; 0 0 1] [1/3 0 0; 0 1/6 0; 0 0 1] [6 18 3; 0 6 0; 0 0 1]
   = [3 0 0; 1 6 0; 2 3 1] [2 6 1; 0 1 0; 0 0 1] .
The resulting matrix looks nicer, but isn’t in standard form.
Reading homework: problem 11.2
For matrices that are not square, LU decomposition still makes sense. Given an m × n matrix M, for example we could write M = LU with L a square lower unit triangular matrix, and U a rectangular matrix. Then L will be an m×m matrix, and U will be an m×n matrix (of the same shape as M). From here, the process is exactly the same as for a square matrix. We create a sequence of matrices Li and Ui that is eventually the LU decomposition. Again, we start with L0 = I and U0 = M.
Example Let’s find the LU decomposition of M = U0 = [−2 1 3; −4 4 1]. Since M is a 2 × 3 matrix, our decomposition will consist of a 2 × 2 matrix and a 2 × 3 matrix. Then we start with L0 = I2 = [1 0; 0 1].
The next step is to zero out the first column of M below the diagonal. There is only one row to cancel, then, and it can be removed by subtracting 2 times the first row of M from the second row of M. Then:
L1 = [1 0; 2 1] ,  U1 = [−2 1 3; 0 2 −5]
Since U1 is upper triangular, we’re done. With a larger matrix, we would just continue the process.
11.3 Block LDU Decomposition
Let M be a square block matrix with square blocks X, Y, Z, W such that X−1 exists. Then M can be decomposed as a block LDU decomposition, where D is block diagonal, as follows:
M = [X Y; Z W]
Then:
M = [I 0; ZX⁻¹ I] [X 0; 0 W − ZX⁻¹Y] [I X⁻¹Y; 0 I] .
This can be checked explicitly simply by block-multiplying these three matrices.
Block LDU Explanation
Example For a 2 × 2 matrix, we can regard each entry as a block.
[1 2; 3 4] = [1 0; 3 1] [1 0; 0 −2] [1 2; 0 1]
By multiplying the diagonal matrix by the upper triangular matrix, we get the standard
LU decomposition of the matrix.
References
Wikipedia:
LU Decomposition
Block LU Decomposition
Review Problems
1. Consider the linear system:
x^1 = v^1
l^2_1 x^1 + x^2 = v^2
. . .
l^n_1 x^1 + l^n_2 x^2 + · · · + x^n = v^n
i. Find x^1.
ii. Find x^2.
iii. Find x^3.
k. Try to find a formula for xk. Don’t worry about simplifying your answer.
2. Let M = [X Y; Z W] be a square n × n block matrix with W invertible.
i. If W has r rows, what size are X, Y, and Z?
ii. Find a UDL decomposition for M. In other words, fill in the stars in the following equation:
[X Y; Z W] = [I ∗; 0 I] [∗ 0; 0 ∗] [I 0; ∗ I] .
12 Elementary Matrices and Determinants
Given a square matrix, is there an easy way to know when it is invertible? Answering this fundamental question is our next goal.
For small cases, we already know the answer. If M is a 1 × 1 matrix, then M = (m) ⇒ M−1 = (1/m). Then M is invertible if and only if m ̸= 0.
For M a 2 × 2 matrix, we showed in Section 10 that if M = [m^1_1 m^1_2; m^2_1 m^2_2], then
M⁻¹ = (1 / (m^1_1 m^2_2 − m^1_2 m^2_1)) [m^2_2  −m^1_2; −m^2_1  m^1_1] .
Thus M is invertible if and only if
m^1_1 m^2_2 − m^1_2 m^2_1 ≠ 0 .
For 2 × 2 matrices, this quantity is called the determinant of M.
det M = det [m^1_1 m^1_2; m^2_1 m^2_2] = m^1_1 m^2_2 − m^1_2 m^2_1
Example For a 3 × 3 matrix, M = [m^1_1 m^1_2 m^1_3; m^2_1 m^2_2 m^2_3; m^3_1 m^3_2 m^3_3], then (by the first review question) M is non-singular if and only if:
det M = m^1_1 m^2_2 m^3_3 − m^1_1 m^2_3 m^3_2 + m^1_2 m^2_3 m^3_1 − m^1_2 m^2_1 m^3_3 + m^1_3 m^2_1 m^3_2 − m^1_3 m^2_2 m^3_1 ≠ 0 .
Notice that in the subscripts, each ordering of the numbers 1, 2, and 3 occurs exactly once. Each of these is a permutation of the set {1, 2, 3}.
12.1 Permutations
Consider n objects labeled 1 through n and shuffle them. Each possible shuffle is called a permutation σ. For example, here is an example of a permutation of 5:
ρ = [1 2 3 4 5 ; 4 2 5 1 3]
We can consider a permutation σ as a function from the set of numbers [n] := {1, 2, . . . , n} to [n], and write ρ(3) = 5 from the above example. In general we can write
[1 2 3 4 5 ; σ(1) σ(2) σ(3) σ(4) σ(5)] ,
but since the top line of any permutation is always the same, we can omit the top line and just write:
σ = [σ(1) σ(2) σ(3) σ(4) σ(5)]
so we can just write ρ = [4, 2, 5, 1, 3]. There is also one more notation called cycle notation, but we do not discuss it here.
The mathematics of permutations is extensive and interesting; there are a few properties of permutations that we’ll need.
There are n! permutations of n distinct objects, since there are n choices for the first object, n − 1 choices for the second once the first has been chosen, and so on.
Every permutation can be built up by successively swapping pairs of objects. For example, to build up the permutation 3 1 2 from the trivial permutation 1 2 3, you can first swap 2 and 3, and then swap 1 and 3.
For any given permutation σ, there is some number of swaps it takes to build up the permutation. (It’s simplest to use the minimum number of swaps, but you don’t have to: it turns out that any way of building up the permutation from swaps will have the same parity of swaps, either even or odd.) If this number happens to be even, then σ is called an even permutation; if this number is odd, then σ is an odd permutation. In fact, n! is even for all n ≥ 2, and exactly half of the
permutations are even and the other half are odd. It’s worth noting that the trivial permutation (which sends i → i for every i) is an even permutation, since it uses zero swaps.
Definition The sign function is a function sgn(σ) that sends permutations to the set {−1, 1}, defined by:
sgn(σ) = 1 if σ is even;  sgn(σ) = −1 if σ is odd.
For more on the swaps (also known as inversions) and the sign function, see Problem 4.
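One concrete way to compute sgn(σ), anticipating Problems 4 and 5 below, is to count inversions. The Python sketch below is ours, not from the text; sign is a made-up helper.

def sign(sigma):
    # sgn(sigma) = (-1)^(number of inversions); sigma is a list like [4, 2, 5, 1, 3]
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return 1 if inversions % 2 == 0 else -1

print(sign([1, 2, 3]), sign([3, 1, 2]), sign([2, 1, 3]))   # 1 1 -1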
Permutation Example
Reading homework: problem 12.1
We can use permutations to give a definition of the determinant.
Definition For an n × n matrix M, the determinant of M (sometimes written |M|) is given by:
det M = Σ_σ sgn(σ) m^1_{σ(1)} m^2_{σ(2)} · · · m^n_{σ(n)} .
The sum is over all permutations of n. Each summand is a product of a single entry from each row, but with the column numbers shuffled by the permutation σ.
The last statement about the summands yields a nice property of the determinant:
Theorem 12.1. If M has a row consisting entirely of zeros, then m^i_{σ(i)} = 0 for every σ. Then det M = 0.
Example Because there are many permutations of n, writing the determinant this way for a general matrix gives a very long sum. For n = 4, there are 24 = 4! permutations, and for n = 5, there are already 120 = 5! permutations.
For a 4 × 4 matrix,
M = [m^1_1 m^1_2 m^1_3 m^1_4; m^2_1 m^2_2 m^2_3 m^2_4; m^3_1 m^3_2 m^3_3 m^3_4; m^4_1 m^4_2 m^4_3 m^4_4] ,
then det M is:
det M = m^1_1 m^2_2 m^3_3 m^4_4 − m^1_1 m^2_3 m^3_2 m^4_4 − m^1_1 m^2_2 m^3_4 m^4_3
      − m^1_2 m^2_1 m^3_3 m^4_4 + m^1_1 m^2_3 m^3_4 m^4_2 + m^1_1 m^2_4 m^3_2 m^4_3
      + m^1_2 m^2_3 m^3_1 m^4_4 + m^1_2 m^2_1 m^3_4 m^4_3 ± 16 more terms.
This is very cumbersome.
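For small matrices the permutation sum can simply be evaluated by machine. The Python sketch below is ours, not from the text; it uses Python's itertools.permutations together with the inversion-count formula for the sign discussed above.

from itertools import permutations

def sgn(sigma):
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return 1 if inv % 2 == 0 else -1

def det(M):
    # det M = sum over all permutations sigma of sgn(sigma) m^1_sigma(1) ... m^n_sigma(n)
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

print(det([[2, 0, 0], [0, 3, 0], [0, 0, 5]]))   # 30: for a diagonal matrix, the product of the diagonal
print(det([[1, 2], [3, 4]]))                    # -2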
Luckily, it is very easy to compute the determinants of certain matrices.
For example, if M is diagonal, then m^i_j = 0 whenever i ≠ j. Then all summands of the determinant involving off-diagonal entries vanish, so:
det M = Σ_σ sgn(σ) m^1_{σ(1)} m^2_{σ(2)} · · · m^n_{σ(n)} = m^1_1 m^2_2 · · · m^n_n .
Thus, the determinant of a diagonal matrix is just the product of its diagonal entries.
Since the identity matrix is diagonal with all diagonal entries equal to one, we have:
det I = 1.
We would like to use the determinant to decide whether a matrix is invert- ible or not. Previously, we computed the inverse of a matrix by applying row operations. As such, it makes sense to ask what happens to the determinant when row operations are applied to a matrix.
Swapping rows Swapping rows i and j (with i < j) in a matrix changes the determinant. For a permutation σ, let σ̂ be the permutation obtained by swapping positions i and j. The sign of σ̂ is the opposite of the sign of σ. Let M be a matrix, and M′ be the same matrix, but with rows i and j swapped. Then the determinant of M′ is:
det M′ = Σ_σ sgn(σ) m^1_{σ(1)} · · · m^j_{σ(i)} · · · m^i_{σ(j)} · · · m^n_{σ(n)}
       = Σ_σ sgn(σ) m^1_{σ(1)} · · · m^i_{σ(j)} · · · m^j_{σ(i)} · · · m^n_{σ(n)}
       = Σ_{σ̂} (−sgn(σ̂)) m^1_{σ̂(1)} · · · m^i_{σ̂(i)} · · · m^j_{σ̂(j)} · · · m^n_{σ̂(n)}
       = −Σ_{σ̂} sgn(σ̂) m^1_{σ̂(1)} · · · m^i_{σ̂(i)} · · · m^j_{σ̂(j)} · · · m^n_{σ̂(n)}
       = −det M .
Thus we see that swapping rows changes the sign of the determinant. I.e., det M′ = −det M.
Reading homework: problem 12.2
Applying this result to M = I (the identity matrix) yields
det E^i_j = −1 ,
where the matrix Eji is the identity matrix with rows i and j swapped. It is our first example of an elementary matrix and we will meet it again soon.
This implies another nice property of the determinant. If two rows of the matrix are identical, then swapping the rows changes the sign of the determinant, but leaves the matrix unchanged. Then we see the following:
Theorem 12.2. If M has two identical rows, then det M = 0.
12.2 Elementary Matrices
Our next goal is to find matrices that emulate the Gaussian row operations on a matrix. In other words, for any matrix M, and a matrix M′ equal to M after a row operation, we wish to find a matrix R such that M′ = RM.
See Some Ideas Explained
We will first find a matrix that, when it multiplies a matrix M, rows i and j of M are swapped.
Let R^1 through R^n denote the rows of M, and let M′ be the matrix M with rows i and j swapped. Then M and M′ can be regarded as block matrices:
M = [R^1; . . . ; R^i; . . . ; R^j; . . . ; R^n] ,  and  M′ = [R^1; . . . ; R^j; . . . ; R^i; . . . ; R^n] .
Then notice that:
M′ = E^i_j M ,
where E^i_j is the identity matrix with rows i and j swapped: it has 1’s on the diagonal except in positions (i, i) and (j, j), where it has 0’s, and it has 1’s in positions (i, j) and (j, i). This matrix E^i_j is called an elementary matrix. Then, symbolically,
M′ = E^i_j M .
Because detI = 1 and swapping a pair of rows changes the sign of the
determinant, we have found that
det E^i_j = −1 .
Moreover, since swapping a pair of rows flips the sign of the determinant, det E^i_j = −1, and E^i_j M is the matrix M with rows i and j swapped, we have that
det (E^i_j M) = det E^i_j det M .
This result hints at an even better one for determinants of products of matrices. Stare at it again before reading the next Lecture:
References
Hefferon, Chapter Four, Section I.1 and I.3 Beezer, Chapter D, Section DM, Subsection EM Beezer, Chapter D, Section PDM
Wikipedia:
Determinant
Permutation
Elementary Matrix
Review Problems
1. Let M = [m^1_1 m^1_2 m^1_3; m^2_1 m^2_2 m^2_3; m^3_1 m^3_2 m^3_3]. Use row operations to put M into row echelon form. For simplicity, assume that m^1_1 ≠ 0 ≠ m^1_1 m^2_2 − m^2_1 m^1_2. Prove that M is non-singular if and only if:
m^1_1 m^2_2 m^3_3 − m^1_1 m^2_3 m^3_2 + m^1_2 m^2_3 m^3_1 − m^1_2 m^2_1 m^3_3 + m^1_3 m^2_1 m^3_2 − m^1_3 m^2_2 m^3_1 ≠ 0
2. (a) What does the matrix E^1_2 = [0 1; 1 0] do to M = [a b; d c] under left multiplication? What about right multiplication?
(b) Find elementary matrices R^1(λ) and R^2(λ) that respectively multiply rows 1 and 2 of M by λ but otherwise leave M the same under left multiplication.
(c) Find a matrix S^1_2(λ) that adds a multiple λ of row 2 to row 1 under left multiplication.
3. Let M be a matrix and SjiM the same matrix with rows i and j switched. Explain every line of the series of equations proving that detM = −det(SjiM).
4. This problem is a “hands-on” look at why the property describing the parity of permutations is true.
The inversion number of a permutation σ is the number of pairs i < j such that σ(i) > σ(j); it’s the number of “numbers that appear left of smaller numbers” in the permutation. For example, for the permutation ρ = [4, 2, 3, 1], the inversion number is 5. The number 4 comes before 2, 3, and 1, and 2 and 3 both come before 1.
Given a permutation σ, we can make a new permutation τi,jσ by ex- changing the ith and jth entries of σ.
(a) What is the inversion number of the permutation μ = [1,2,4,3] that exchanges 4 and 3 and leaves everything else alone? Is it an even or an odd permutation?
(b) What is the inversion number of the permutation ρ = [4,2,3,1] that exchanges 1 and 4 and leaves everything else alone? Is it an even or an odd permutation?
(c) What is the inversion number of the permutation τ1,3μ? Compare the parity5 of μ to the parity of τ1,3μ.
(d) What is the inversion number of the permutation τ2,4ρ? Compare the parity of ρ to the parity of τ2,4ρ.
(e) What is the inversion number of the permutation τ3,4ρ? Compare the parity of ρ to the parity of τ3,4ρ.
Problem 4 hints
5. (Extra credit) Here we will examine a (very) small set of the general properties about permutations and their applications. In particular, we will show that one way to compute the sign of a permutation is by finding the inversion number N of σ and we have
sgn(σ) = (−1)N . For this problem, let μ = [1, 2, 4, 3].
(a) Show that every permutation σ can be sorted by only taking sim- ple (adjacent) transpositions si where si interchanges the numbers in position i and i + 1 of a permutation σ (in our other notation si = τi,i+1). For example s2μ = [1, 4, 2, 3], and to sort μ we have s3μ = [1,2,3,4].
(b) We can compose simple transpositions together to represent a per- mutation (note that the sequence of compositions is not unique), and these are associative, we have an identity (the trivial permu- tation where the list is in order or we do nothing on our list), and we have an inverse since it is clear that sisiσ = σ. Thus permuta- tions of [n] under composition are an example of a group. However
5The parity of an integer refers to whether the integer is even or odd. Here the parity of a permutation μ refers to the parity of its inversion number.
note that not all simple transpositions commute with each other since
s1s2[1,2,3] = s1[1,3,2] = [3,1,2] s2s1[1,2,3] = s2[2,1,3] = [2,3,1]
(you will prove here when simple transpositions commute). When we consider our initial permutation to be the trivial permutation e = [1, 2, . . . , n], we do not write it; for example s_i ≡ s_i e and μ = s_3 ≡ s_3 e. This is analogous to not writing 1 when multiplying. Show that s_i s_i = e (in shorthand s_i² = e), s_{i+1} s_i s_{i+1} = s_i s_{i+1} s_i for all i, and s_i and s_j commute for all |i − j| ≥ 2.
(c) Show that every way of expressing σ can be obtained from any other using the relations proved in part 5b. In other words, show that any expression w of simple transpositions representing the trivial permutation e can be reduced to the empty expression using the proved relations.
Hint: Use induction on n. For the induction step, follow the path of the (n + 1)-th strand by looking at snsn−1 ···sksk±1 ···sn and argue why you can write this as a subexpression for any expression of e. Consider using diagrams of these paths to help.
(d) The simple transpositions act on an n-dimensional vector space V by s_i v = E^i_{i+1} v (where E^i_{i+1} is an elementary matrix) for all vectors v ∈ V. Therefore we can just represent a permutation σ as the matrix M_σ⁶, and we have det(M_{s_i}) = det(E^i_{i+1}) = −1. Thus prove that det(M_σ) = (−1)^N where N is a number of simple transpositions needed to represent σ as a permutation. You can assume that M_{s_i s_j} = M_{s_i} M_{s_j} (it is not hard to prove) and that det(AB) = det(A) det(B) from Chapter 13.
Hint: You need to make sure det(M_σ) is well-defined since there are infinitely many ways to represent σ as simple transpositions.
(e) Show that s_{i+1} s_i s_{i+1} = τ_{i,i+2}, and use this to give one way of writing τ_{i,j} in terms of simple transpositions. Is τ_{i,j} an even or an odd permutation? What is det(M_{τ_{i,j}})? What is the inversion number of τ_{i,j}?
(f) The minimal number of simple transpositions needed to express σ is called the length of σ; for example the length of μ is 1 since μ = s_3. Show that the length of σ is equal to the inversion number of σ.
⁶ Often people will just use σ for the matrix when the context is clear.
Hint: Find a procedure which gives you a new permutation σ′ where σ = s_i σ′ for some i and the inversion number for σ′ is 1 less than the inversion number for σ.
(g) Show that (−1)N = sgn(σ) = det(Mσ), where σ is a permuta- tion with N inversions. Note that this immediately implies that sgn(σρ) = sgn(σ) sgn(ρ) for any permutations σ and ρ.
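For readers who like to experiment, the relation sgn(σ) = (−1)^N can also be checked numerically. The following is a small sketch of ours (not part of the problem set; the function names are our own) that counts inversions of a permutation in one-line notation and compares the result with the sign obtained by sorting with adjacent transpositions.

```python
# Sketch: verify sgn(sigma) = (-1)^N, where N is the inversion number.
# A permutation is given in one-line notation, e.g. [1, 2, 4, 3].

def inversion_number(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def sign_by_sorting(perm):
    """Sort with adjacent transpositions s_i and count the swaps used."""
    p, swaps = list(perm), 0
    for _ in range(len(p)):
        for j in range(len(p) - 1):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]   # apply an adjacent swap
                swaps += 1
    return (-1) ** swaps

mu = [1, 2, 4, 3]
print(inversion_number(mu))          # 1
print((-1) ** inversion_number(mu))  # -1
print(sign_by_sorting(mu))           # -1, agreeing with (-1)^N
```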
13 Elementary Matrices and Determinants II
In Lecture 12, we saw the definition of the determinant and derived an elementary matrix that exchanges two rows of a matrix. Next, we need to find elementary matrices corresponding to the other two row operations: multiplying a row by a scalar, and adding a multiple of one row to another. As a consequence, we will derive some important properties of the determinant.
Consider M = ( R1 ; R2 ; . . . ; Rn ), where the Ri are row vectors. Let Ri(λ) be the identity matrix with the ith diagonal entry replaced by λ, not to be confused with the row vectors. I.e.

    Ri(λ) = diag(1, . . . , 1, λ, 1, . . . , 1),  with λ in the ith diagonal position.

Then:

    M′ = Ri(λ)M = ( R1 ; . . . ; λRi ; . . . ; Rn ).

What effect does multiplication by Ri(λ) have on the determinant?

    det M′ = Σσ sgn(σ) m^1_{σ(1)} · · · λ m^i_{σ(i)} · · · m^n_{σ(n)}
           = λ Σσ sgn(σ) m^1_{σ(1)} · · · m^i_{σ(i)} · · · m^n_{σ(n)}
           = λ det M

Thus, multiplying a row by λ multiplies the determinant by λ. I.e.,

    det(Ri(λ)M) = λ det M .

Since Ri(λ) is just the identity matrix with a single row multiplied by λ, then by the above rule, the determinant of Ri(λ) is λ. Thus:

    det Ri(λ) = λ .

The final row operation is adding λRj to Ri. This is done with the matrix Sji(λ), which is an identity matrix but with an extra λ in the i, j position. Multiplying Sji(λ) by M replaces the row Ri by Ri + λRj:

    Sji(λ)M = ( R1 ; . . . ; Ri + λRj ; . . . ; Rj ; . . . ; Rn ).

What is the effect of multiplying by Sji(λ) on the determinant? Let M′ = Sji(λ)M, and let M′′ be the matrix M but with Ri replaced by Rj. Then

    det M′ = Σσ sgn(σ) m^1_{σ(1)} · · · (m^i_{σ(i)} + λ m^j_{σ(i)}) · · · m^n_{σ(n)}
           = Σσ sgn(σ) m^1_{σ(1)} · · · m^i_{σ(i)} · · · m^n_{σ(n)}
             + Σσ sgn(σ) m^1_{σ(1)} · · · λ m^j_{σ(i)} · · · m^j_{σ(j)} · · · m^n_{σ(n)}
           = det M + λ det M′′

Since M′′ has two identical rows, its determinant is 0. Then

    det(Sji(λ)M) = det M .

Notice that if M is the identity matrix, then we have

    det Sji(λ) = det(Sji(λ)I) = det I = 1 .

We now have elementary matrices associated to each of the row operations:

    Eji    = I with rows i, j swapped;      det Eji    = −1
    Ri(λ)  = I with λ in position i, i;     det Ri(λ)  = λ
    Sji(λ) = I with λ in position i, j;     det Sji(λ) = 1

Elementary Determinants
We have also proved the following theorem along the way:
Theorem 13.1. If E is any of the elementary matrices Eji,Ri(λ),Sji(λ),
then det(EM)=detEdetM.
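The determinants of the three kinds of elementary matrices, and the multiplicativity claimed in Theorem 13.1, are easy to sanity-check numerically. A minimal sketch of ours (using numpy, for the 3 × 3 case; the particular test matrix is arbitrary):

```python
import numpy as np

n = 3
I = np.eye(n)

E = I.copy(); E[[0, 1]] = E[[1, 0]]   # swap rows 1 and 2
R = I.copy(); R[1, 1] = 5.0           # scale row 2 by lambda = 5
S = I.copy(); S[0, 2] = 7.0           # add 7 * (row 3) to row 1

print(np.linalg.det(E))   # -1.0
print(np.linalg.det(R))   #  5.0
print(np.linalg.det(S))   #  1.0

M = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
for X in (E, R, S):
    # det(XM) should equal det(X) * det(M)
    print(np.allclose(np.linalg.det(X @ M),
                      np.linalg.det(X) * np.linalg.det(M)))  # True each time
```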
Reading homework: problem 13.1
We have seen that any matrix M can be put into reduced row echelon form via a sequence of row operations, and we have seen that any row operation can be emulated with left matrix multiplication by an elementary matrix. Suppose that RREF(M) is the reduced row echelon form of M. Then RREF(M) = E1E2 · · · EkM where each Ei is an elementary matrix.
What is the determinant of a square matrix in reduced row echelon form?
If M is not invertible, then some row of RREF(M) contains only zeros. Then we can multiply the zero row by any constant λ without changing RREF(M); by our previous observation, this scales the determinant of RREF(M) by λ. Thus, if M is not invertible, det RREF(M) = λ det RREF(M) for every λ, and so det RREF(M) = 0.
Otherwise, every row of RREF(M) has a pivot on the diagonal; since M is square, this means that RREF(M) is the identity matrix. Then if M is invertible, det RREF(M ) = 1.
Additionally, notice that detRREF(M) = det(E1E2 ···EkM). Then by the theorem above, det RREF(M) = det(E1) · · · det(Ek) det M. Since each Ei has non-zero determinant, then det RREF(M ) = 0 if and only if det M = 0.
Then we have shown:
Theorem 13.2. For any square matrix M, detM ̸= 0 if and only if M is
invertible.
Since we know the determinants of the elementary matrices, we can im- mediately obtain the following:
Determinants and Inverses
Corollary 13.3. Any elementary matrix Eji,Ri(λ),Sji(λ) is invertible, except for Ri(0). In fact, the inverse of an elementary matrix is another elementary matrix.
To obtain one last important result, suppose that M and N are square n × n matrices, with reduced row echelon forms such that, for elementary matrices Ei and Fi,

    M = E1E2 · · · Ek RREF(M),

and

    N = F1F2 · · · Fl RREF(N).

If RREF(M) is the identity matrix (i.e., M is invertible), then:

    det(MN) = det(E1E2 · · · Ek RREF(M) F1F2 · · · Fl RREF(N))
            = det(E1E2 · · · Ek I F1F2 · · · Fl RREF(N))
            = det(E1) · · · det(Ek) det(I) det(F1) · · · det(Fl) det(RREF(N))
            = det(M) det(N)
Otherwise, M is not invertible, and det M = 0 = det RREF(M ). Then there exists a row of zeros in RREF(M), so Rn(λ)RREF(M) = RREF(M). Then:
    det(MN) = det(E1E2 · · · Ek RREF(M)N)
            = det(E1) · · · det(Ek) det(RREF(M)N)
            = det(E1) · · · det(Ek) det(Rn(λ) RREF(M)N)
            = det(E1) · · · det(Ek) λ det(RREF(M)N)
            = λ det(MN) .

Since this holds for every λ (choose, say, λ = 2), it implies that det(MN) = 0 = det M det N.
Thus we have shown that for any matrices M and N,
    det(MN) = det M det N .

This result is extremely important; do not forget it!
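A quick numerical sanity check (a sketch of ours; the random matrices are just an illustration) of the multiplicative property:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))

# det(MN) should equal det(M) * det(N)
print(np.isclose(np.linalg.det(M @ N),
                 np.linalg.det(M) * np.linalg.det(N)))  # True
```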
Alternative proof
Reading homework: problem 13.2
References
Hefferon, Chapter Four, Section I.1 and I.3 Beezer, Chapter D, Section DM, Subsection EM Beezer, Chapter D, Section PDM
Wikipedia:
Determinant
Elementary Matrix
Review Problems
1. Let M = ( a b ; c d ) and N = ( x y ; z w ). Compute the following:
(a) detM.
(b) detN.
(c) det(MN).
(d) detM detN.
(e) det(M−1) assuming ad − bc ̸= 0.
(f) det(MT)
(g) det(M + N ) − (det M + det N ). Is the determinant a linear trans- formation from square matrices to real numbers? Explain.
2. Suppose M = ( a b ; c d ) is invertible. Write M as a product of elementary row matrices times RREF(M).
3. Find the inverses of each of the elementary matrices, Eji, Ri(λ), Sji(λ). Make sure to show that the elementary matrix times its inverse is ac- tually the identity.
4. (Extra Credit) Let eij denote the matrix with a 1 in the i-th row and j-th column and 0's everywhere else, and let A be an arbitrary 2 × 2 matrix. Compute det(A + tI2). What is the first order term (the coefficient of t)? Can you express your result in terms of tr(A)? What about the first order term in det(A + tIn) for an arbitrary n × n matrix A, in terms of tr(A)?
We note that det(A + tI2) is a polynomial in the variable t, and that setting t = −λ gives what is known as the characteristic polynomial from Chapter 18.
5. (Extra Credit: (Directional) Derivative of the Determinant) Notice that det: Mn → R where Mn is the vector space of all n × n matrices, and so we can take directional derivatives of det. Let A be an arbitrary n × n matrix, and for all i and j compute the following:
(a) lim_{t→0} [ det(I2 + t eij) − det(I2) ] / t

(b) lim_{t→0} [ det(I3 + t eij) − det(I3) ] / t

(c) lim_{t→0} [ det(In + t eij) − det(In) ] / t

(d) lim_{t→0} [ det(In + At) − det(In) ] / t
(Recall that what you are calculating is the directional derivative in the eij and A directions.) Can you express your results in terms of the trace function?
Hint: Use the results from Problem 4 and what you know about the derivatives of polynomials evaluated at 0 (i.e. what is p′(0)?).
14 Properties of the Determinant
In Lecture 13 we showed that the determinant of a matrix is non-zero if and only if that matrix is invertible. We also showed that the determinant is a multiplicative function, in the sense that det(M N ) = det M det N . Now we will devise some methods for calculating the determinant.
Recall that:
    det M = Σσ sgn(σ) m^1_{σ(1)} m^2_{σ(2)} · · · m^n_{σ(n)} .
A minor of an n × n matrix M is the determinant of any square matrix obtained from M by deleting rows and columns. In particular, any entry mij of a square matrix M is associated to a minor obtained by deleting the ith row and jth column of M.
It is possible to write the determinant of a matrix in terms of its minors as follows:
    det M = Σσ sgn(σ) m^1_{σ(1)} m^2_{σ(2)} · · · m^n_{σ(n)}
          = m^1_1 Σσˆ sgn(σˆ) m^2_{σˆ(2)} m^3_{σˆ(3)} · · · m^n_{σˆ(n)}
          − m^1_2 Σσˆ sgn(σˆ) m^2_{σˆ(1)} m^3_{σˆ(3)} · · · m^n_{σˆ(n)}
          + m^1_3 Σσˆ sgn(σˆ) m^2_{σˆ(1)} m^3_{σˆ(2)} m^4_{σˆ(4)} · · · m^n_{σˆ(n)} ± · · ·

Here the symbols σˆ refer to permutations of n − 1 objects. What we're doing here is collecting up all of the terms of the original sum that contain the first row entry m^1_j for each column number j. Each term in that collection is associated to a permutation sending 1 → j. The remainder of any such permutation maps the set {2, . . . , n} → {1, . . . , j − 1, j + 1, . . . , n}. We call this partial permutation σˆ = [σ(2) · · · σ(n)].
The last issue is that the permutation σˆ may not have the same sign as σ. From previous homework, we know that a permutation has the same parity as its inversion number. Removing 1 → j from a permutation reduces the inversion number by the number of elements right of j that are less than j. Since j comes first in the permutation j σ(2) ··· σ(n), the inversion
number of σˆ is reduced by j − 1. Then the sign of σ differs from the sign of σˆ if σ sends 1 to an even number.
In other words, to expand by minors we pick an entry m^1_j of the first row, then add (−1)^{j−1} times the determinant of the matrix with row 1 and column j deleted.
Example Let's compute the determinant of

    M = ( 1 2 3 )
        ( 4 5 6 )
        ( 7 8 9 )

using expansion by minors:

    det M = 1 det( 5 6 ; 8 9 ) − 2 det( 4 6 ; 7 9 ) + 3 det( 4 5 ; 7 8 )
          = 1(5·9 − 8·6) − 2(4·9 − 7·6) + 3(4·8 − 7·5) = 0
Here, M −1 does not exist because7 det M = 0.
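Expansion by minors translates directly into a short recursive routine. The sketch below is ours (plain Python, no libraries); it expands along the first row exactly as described above and reproduces det M = 0 for the matrix of the example.

```python
def det(M):
    """Determinant by expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 1 and column j+1
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(det(M))   # 0, so M is not invertible
```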
Example Sometimes the entries of a matrix allow us to simplify the calculation of the determinant. Take

    N = ( 1 2 3 )
        ( 4 0 0 )
        ( 7 8 9 ).

Notice that the second row has many zeros; then we can switch the first and second rows of N to get:

    det( 1 2 3 ; 4 0 0 ; 7 8 9 ) = −det( 4 0 0 ; 1 2 3 ; 7 8 9 )
                                 = −4 det( 2 3 ; 8 9 )
                                 = 24
Example
7A fun exercise is to compute the determinant of a 4 × 4 matrix filled in order, from left to right, with the numbers 1,2,3,…16. What do you observe? Try the same for a 5×5 matrix with 1,2,3…25. Is there a pattern? Can you explain it?
Theorem 14.1. For any square matrix M, we have: detMT =detM
Proof. By definition,
det M = sgn(σ)m1σ(1)m2σ(2) · · · mnσ(n).
σ
For any permutation σ, there is a unique inverse permutation σ−1 that undoes σ. If σ sends i → j, then σ−1 sends j → i. In the two-line notation for a permutation, this corresponds to just flipping the permutation over. For
example, if

    σ = [ 1 2 3 ]
        [ 2 3 1 ] ,

then we can find σ⁻¹ by flipping the permutation and then putting the columns in order:

    σ⁻¹ = [ 2 3 1 ]  =  [ 1 2 3 ]
          [ 1 2 3 ]     [ 3 1 2 ] .
Since any permutation can be built up by transpositions, one can also find the inverse of a permutation σ by undoing each of the transpositions used to build up σ; this shows that one can use the same number of transpositions to build σ and σ−1. In particular, sgn σ = sgn σ−1.
Reading homework: problem 14.1
Then we can write out the above in formulas as follows:
    det M = Σσ sgn(σ) m^1_{σ(1)} m^2_{σ(2)} · · · m^n_{σ(n)}
          = Σσ sgn(σ) m^{σ⁻¹(1)}_1 m^{σ⁻¹(2)}_2 · · · m^{σ⁻¹(n)}_n
          = Σσ sgn(σ⁻¹) m^{σ⁻¹(1)}_1 m^{σ⁻¹(2)}_2 · · · m^{σ⁻¹(n)}_n
          = Σσ sgn(σ) m^{σ(1)}_1 m^{σ(2)}_2 · · · m^{σ(n)}_n
          = det Mᵀ .
The second-to-last equality is due to the existence of a unique inverse permu- tation: summing over permutations is the same as summing over all inverses of permutations. The final equality is by the definition of the transpose.
Example Because of this theorem, we see that expansion by minors also works over columns. Let

    M = ( 1 2 3 )
        ( 0 5 6 )
        ( 0 8 9 ).

Then

    det M = det Mᵀ = 1 det( 5 8 ; 6 9 ) = −3 .

14.1 Determinant of the Inverse

Let M and N be n × n matrices. We previously showed that det(MN) = det M det N, and det I = 1. Then 1 = det I = det(MM⁻¹) = det M det M⁻¹. As such we have:

Theorem 14.2.

    det M⁻¹ = 1 / det M

Just so you don't forget this:
14.2 Adjoint of a Matrix
Recall that for the 2 × 2 matrix (in a more careful notation): if

    M = ( m^1_1  m^1_2 )
        ( m^2_1  m^2_2 ) ,

then

    M⁻¹ = (1 / (m^1_1 m^2_2 − m^1_2 m^2_1)) (  m^2_2  −m^1_2 )
                                            ( −m^2_1   m^1_1 ) ,

so long as det M = m^1_1 m^2_2 − m^1_2 m^2_1 ≠ 0. The matrix

    (  m^2_2  −m^1_2 )
    ( −m^2_1   m^1_1 )

that appears above is a special matrix, called the adjoint of M. Let's define the adjoint for an n × n matrix.

A cofactor of M is obtained by choosing any entry m^i_j of M and then deleting the ith row and jth column of M, taking the determinant of the resulting matrix, and multiplying by (−1)^{i+j}. This is written cofactor(m^i_j).

Definition For M = (m^i_j) a square matrix, the adjoint matrix adj M is given by:

    adj M = (cofactor(m^i_j))ᵀ

Example

    adj ( 3 −1 −1 )   (  det( 2 0 ; 1 1 )     −det( 1 0 ; 0 1 )     det( 1 2 ; 0 1 )  )ᵀ
        ( 1  2  0 ) = ( −det( −1 −1 ; 1 1 )    det( 3 −1 ; 0 1 )   −det( 3 −1 ; 0 1 ) )
        ( 0  1  1 )   (  det( −1 −1 ; 2 0 )   −det( 3 −1 ; 1 0 )    det( 3 −1 ; 1 2 ) )

Here det( a b ; c d ) denotes the determinant of the 2 × 2 matrix with rows (a b) and (c d).
Reading homework: problem 14.2
Let’s multiply M adjM. For any matrix N, the i,j entry of MN is given by taking the dot product of the ith row of M and the jth column of N. Notice that the dot product of the ith row of M and the ith column of adj M is just the expansion by minors of det M in the ith row. Further, notice that the dot product of the ith row of M and the jth column of adjM with j ̸= i is the same as expanding M by minors, but with the jth row replaced by the ith row. Since the determinant of any matrix with a row repeated is zero, then these dot products are zero as well.
We know that the i,j entry of the product of two matrices is the dot product of the ith row of the first by the jth column of the second. Then:
M adjM = (detM)I
Thus, when detM ̸= 0, the adjoint gives an explicit formula for M−1.
Theorem 14.3. For M a square matrix with det M ̸= 0 (equivalently, if M is invertible), then
    M⁻¹ = (1 / det M) adj M
The Adjoint Matrix
Example Continuing with the previous example,

    adj ( 3 −1 −1 )   (  2  0  2 )
        ( 1  2  0 ) = ( −1  3 −1 ) .
        ( 0  1  1 )   (  1 −3  7 )

Now, multiply:

    ( 3 −1 −1 ) (  2  0  2 )   ( 6 0 0 )
    ( 1  2  0 ) ( −1  3 −1 ) = ( 0 6 0 )
    ( 0  1  1 ) (  1 −3  7 )   ( 0 0 6 )

    ⇒ ( 3 −1 −1 )⁻¹        (  2  0  2 )
      ( 1  2  0 )   = 1/6  ( −1  3 −1 )
      ( 0  1  1 )          (  1 −3  7 )
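The cofactor and adjoint constructions are easy to automate. Here is a sketch of ours using sympy (which works with exact rational entries, and calls the adjoint matrix the "adjugate"); it reproduces the adjoint and the inverse found above.

```python
import sympy as sp

M = sp.Matrix([[3, -1, -1],
               [1,  2,  0],
               [0,  1,  1]])

adjM = M.adjugate()        # sympy's name for (cofactor matrix)^T
print(adjM)                # Matrix([[2, 0, 2], [-1, 3, -1], [1, -3, 7]])
print(M * adjM)            # (det M) * I = 6 * I
print(M.inv() == adjM / M.det())   # True: M^{-1} = adj M / det M
```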
Figure 1: A parallelepiped.
This process for finding the inverse matrix is sometimes called Cramer's Rule.

14.3 Application: Volume of a Parallelepiped
Given three vectors u, v, w in R3, the parallelepiped determined by the three vectors is the “squished” box whose edges are parallel to u,v, and w as depicted in Figure 1.
From calculus, we know that the volume of this object is |u · (v × w)|. This is the same as expansion by minors of the matrix whose columns are u, v, w. Then:

    Volume = |det( u v w )|
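A numerical check of this formula (a sketch of ours with numpy; the particular vectors are just an illustration):

```python
import numpy as np

u = np.array([1., 0., 0.])
v = np.array([1., 2., 0.])
w = np.array([1., 1., 3.])

vol_triple = abs(u @ np.cross(v, w))                        # |u . (v x w)|
vol_det    = abs(np.linalg.det(np.column_stack((u, v, w)))) # |det(u v w)|
print(vol_triple, vol_det)    # both 6.0
```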
References
Hefferon, Chapter Four, Section I.1 and I.3 Beezer, Chapter D, Section DM, Subsection DD Beezer, Chapter D, Section DM, Subsection CD Wikipedia:
Determinant
Elementary Matrix Cramer’s Rule
Review Problems
1. Let M = ( a b ; c d ). Show:

    det M = (1/2)(tr M)² − (1/2) tr(M²)
Suppose M is a 3 × 3 matrix. Find and verify a similar formula for
det M in terms of tr(M3), tr(M2), and tr M.
2. Suppose M = LU is an LU decomposition. Explain how you would
efficiently compute det M in this case.
3. In computer science, the complexity of an algorithm is (roughly) com- puted by counting the number of times a given operation is performed. Suppose adding or subtracting any two numbers takes a seconds, and multiplying two numbers takes m seconds. Then, for example, com- puting 2·6−5 would take a+m seconds.
(a) How many additions and multiplications does it take to compute the determinant of a general 2 × 2 matrix?
(b) Write a formula for the number of additions and multiplications it takes to compute the determinant of a general n × n matrix using the definition of the determinant. Assume that finding and multiplying by the sign of a permutation is free.
(c) How many additions and multiplications does it take to compute the determinant of a general 3 × 3 matrix using expansion by minors? Assuming m = 2a, is this faster than computing the determinant from the definition?
Problem 3 hint
15 Subspaces and Spanning Sets
It is time to study vector spaces more carefully and answer some fundamental questions.
1. Subspaces: When is a subset of a vector space itself a vector space? (This is the notion of a subspace.)
2. Linear Independence: Given a collection of vectors, is there a way to tell whether they are independent, or if one is a “linear combination” of the others?
3. Dimension: Is there a consistent definition of how “big” a vector space is?
4. Basis: How do we label vectors? Can we write any vector as a sum of some basic set of vectors? How do we change our point of view from vectors labeled one way to vectors labeled in another way?
Let’s start at the top!
15.1 Subspaces
Definition We say that a subset U of a vector space V is a subspace of V if U is a vector space under the inherited addition and scalar multiplication operations of V .
Example Consider a plane P in R3 through the origin: ax + by + cz = 0.
This equation can be expressed as the homogeneous system

    ( a b c ) ( x )
              ( y ) = 0,
              ( z )

or MX = 0 with M the matrix ( a b c ). If X1 and X2 are both solutions to MX = 0, then, by linearity of matrix multiplication, so is μX1 + νX2:
M(μX1 +νX2)=μMX1 +νMX2 =0.
So P is closed under addition and scalar multiplication. Additionally, P contains the origin (which can be derived from the above by setting μ = ν = 0). All other vector space requirements hold for P because they hold for all vectors in R3.
Theorem 15.1 (Subspace Theorem). Let U be a non-empty subset of a vector space V. Then U is a subspace if and only if μu1 +νu2 ∈ U for arbitrary u1,u2 in U, and arbitrary constants μ,ν.
Proof. One direction of this proof is easy: if U is a subspace, then it is a vector space, and so by the additive closure and multiplicative closure properties of vector spaces, it has to be true that μu1 +νu2 ∈ U for all u1,u2 in U and all constants constants μ, ν.
The other direction is almost as easy: we need to show that if μu1 +νu2 ∈ U for all u1,u2 in U and all constants μ,ν, then U is a vector space. That is, we need to show that the ten properties of vector spaces are satisfied. We know that the additive closure and multiplicative closure properties are satisfied. Each of the other eight properties is true in U because it is true in V . The details of this are left as an exercise.
Note that the requirements of the subspace theorem are often referred to as “closure”.
From now on, we can use this theorem to check if a set is a vector space. That is, if we have some set U of vectors that come from some bigger vector space V , to check if U itself forms a smaller vector space we need check only two things: if we add any two vectors in U, do we end up with a vector in U? And, if we multiply any vector in U by any constant, do we end up with a vector in U? If the answer to both of these questions is yes, then U is a vector space. If not, U is not a vector space.
Reading homework: problem 15.1
15.2 Building Subspaces
Consider the set

    U = { (1, 0, 0)ᵀ, (0, 1, 0)ᵀ } ⊂ R³.

Because U consists of only two vectors, it is clear that U is not a vector space, since any constant multiple of these vectors should also be in U. For example, the 0-vector is not in U, nor is U closed under vector addition.
But we know that any two vectors define a plane:
In this case, the vectors in U define the xy-plane in R3. We can consider the xy-plane as the set of all vectors that arise as a linear combination of the two vectors in U. Call this set of all linear combinations the span of U:
    span(U) = { x (1, 0, 0)ᵀ + y (0, 1, 0)ᵀ | x, y ∈ R } .

Notice that any vector in the xy-plane is of the form

    (x, y, 0)ᵀ = x (1, 0, 0)ᵀ + y (0, 1, 0)ᵀ ∈ span(U).
Definition LetV beavectorspaceandS={s1,s2,…}⊂V asubsetofV. Then the span of S is the set:
span(S)={r1s1 +r2s2 +···+rNsN|ri ∈R,N ∈N}.
That is, the span of S is the set of all finite linear combinations8 of elements of S. Any finite sum of the form (a constant times s1 plus a constant times s2 plus a constant times s3 and so on) is in the span of S.
It is important that we only allow finite linear combinations. In the definition above, N must be a finite number. It can be any finite number, but it must be finite.
Example Let V = R³ and X ⊂ V be the x-axis. Let P = (0, 1, 0)ᵀ, and set

    S = X ∪ P.

The elements of span(S) are linear combinations of vectors in the x-axis and the vector P.

The vector (2, 3, 0)ᵀ is in span(S), because (2, 3, 0)ᵀ = (2, 0, 0)ᵀ + 3 (0, 1, 0)ᵀ. Similarly, the vector (−12, 17.5, 0)ᵀ is in span(S), because (−12, 17.5, 0)ᵀ = (−12, 0, 0)ᵀ + 17.5 (0, 1, 0)ᵀ. Similarly, any vector of the form

    (x, 0, 0)ᵀ + y (0, 1, 0)ᵀ = (x, y, 0)ᵀ
is in span(S). On the other hand, any vector in span(S) must have a zero in the z-coordinate. (Why?)
So span(S) is the xy-plane, which is a vector space. (Try drawing a picture to verify this!)
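One way to test whether a given vector lies in the span of a set is to compare matrix ranks: x ∈ span(S) exactly when appending x to the spanning vectors does not increase the rank. A small sketch of ours using numpy (the function name is our own):

```python
import numpy as np

def in_span(vectors, x):
    """True if x is a linear combination of the given vectors."""
    A  = np.column_stack(vectors)
    Ax = np.column_stack(vectors + [x])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ax)

e1 = np.array([1., 0., 0.])
p  = np.array([0., 1., 0.])
print(in_span([e1, p], np.array([2., 3., 0.])))   # True: lies in the xy-plane
print(in_span([e1, p], np.array([0., 0., 1.])))   # False: nonzero z-coordinate
```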
8Usually our vector spaces are defined over R, but in general we can have vector spaces defined over different base fields such as C or Z2. The coefficients ri should come from whatever our base field is (usually R).
Reading homework: problem 15.2
Lemma 15.2. For any subset S ⊂ V , span(S) is a subspace of V .
Proof. We need to show that span(S) is a vector space.
It suffices to show that span(S) is closed under linear combinations. Let
u, v ∈ span(S) and λ, μ be constants. By the definition of span(S), there are constants ci and di (some of which could be zero) such that:
u = c1s1 +c2s2 +···
v = d1s1+d2s2+···
⇒λu+μv = λ(c1s1 +c2s2 +···)+μ(d1s1 +d2s2 +···)
= (λc1 +μd1)s1 +(λc2 +μd2)s2 +···
This last sum is a linear combination of elements of S, and is thus in span(S). Then span(S) is closed under linear combinations, and is thus a subspace of V .
Note that this proof, like many proofs, consisted of little more than just writing out the definitions.
Example For which values of a does

    span{ (1, 0, a)ᵀ, (1, 2, −3)ᵀ, (a, 1, 0)ᵀ } = R³ ?

Given an arbitrary vector (x, y, z)ᵀ in R³, we need to find constants r1, r2, r3 such that

    r1 (1, 0, a)ᵀ + r2 (1, 2, −3)ᵀ + r3 (a, 1, 0)ᵀ = (x, y, z)ᵀ .

We can write this as a linear system in the unknowns r1, r2, r3 as follows:

    ( 1  1  a ) ( r1 )   ( x )
    ( 0  2  1 ) ( r2 ) = ( y ) .
    ( a −3  0 ) ( r3 )   ( z )

If the matrix

    M = ( 1  1  a )
        ( 0  2  1 )
        ( a −3  0 )

is invertible, then we can find a solution

    ( r1 )        ( x )
    ( r2 ) = M⁻¹  ( y )
    ( r3 )        ( z )

for any vector (x, y, z)ᵀ ∈ R³. Therefore we should choose a so that M is invertible:

    i.e., 0 ≠ det M = −2a² + 3 + a = −(2a − 3)(a + 1).

Then the span is R³ if and only if a ≠ −1, 3/2.
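The condition on a can also be recovered symbolically; a sketch of ours with sympy:

```python
import sympy as sp

a = sp.symbols('a')
M = sp.Matrix([[1, 1, a],
               [0, 2, 1],
               [a, -3, 0]])

d = sp.factor(M.det())
print(d)                # -(a + 1)*(2*a - 3)
print(sp.solve(d, a))   # [-1, 3/2]: the span fails to be R^3 exactly here
```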
Linear systems as spanning sets
References
Hefferon, Chapter Two, Section I.2: Subspaces and Spanning Sets Beezer, Chapter VS, Section S
Beezer, Chapter V, Section LC
Beezer, Chapter V, Section SS
Wikipedia:
Linear Subspace Linear Span
Review Problems
1. (Subspace Theorem) Suppose that V is a vector space and that U ⊂ V is a subset of V . Show that
μu1 + νu2 ∈ U for all u1, u2 ∈ U, μ, ν ∈ R
implies that U is a subspace of V . (In other words, check all the vector
space requirements for U.)
2. Let P3R be the vector space of polynomials of degree 3 or less in the variable x. Check whether
x−x3 ∈span{x2,2x+x2,x+x3} Hint for Problem 2
3. Let U and W be subspaces of V. Are: (a) U ∪ W
(b) U ∩ W
also subspaces? Explain why or why not. Draw examples in R3.
Hint for Problem 3
16 Linear Independence
Consider a plane P that includes the origin in R3 and a collection {u, v, w} of non-zero vectors in P:
If no two of u,v and w are parallel, then P = span{u,v,w}. But any two vectors determines a plane, so we should be able to span the plane using only two of the vectors u,v,w. Then we could choose two of the vectors in {u,v,w} whose span is P, and express the other as a linear combination of those two. Suppose u and v span P. Then there exist constants d1,d2 (not both zero) such that w = d1u + d2v. Since w can be expressed in terms of u and v we say that it is not independent. More generally, the relationship
c1u+c2v+c3w=0 ci ∈R, someci ̸=0 expresses the fact that u, v, w are not all independent.
Definition We say that the vectors v1, v2, . . . , vn are linearly dependent if there exist constants9 c1, c2, . . . , cn, not all zero, such that

    c1v1 + c2v2 + · · · + cnvn = 0.

Otherwise, the vectors v1, v2, . . . , vn are linearly independent.
9Usually our vector spaces are defined over R, but in general we can have vector spaces defined over different base fields such as C or Z2. The coefficients ci should come from whatever our base field is (usually R).
Example Consider the following vectors in R3:
    v1 = (4, −1, 3)ᵀ , v2 = (−3, 7, 4)ᵀ , v3 = (5, 12, 17)ᵀ , v4 = (−1, 1, 0)ᵀ .
Are these vectors linearly independent?
No, since 3v1 + 2v2 − v3 + v4 = 0, the vectors are linearly dependent.
Worked Example
In the above example we were given the linear combination 3v1 + 2v2 − v3 + v4 seemingly by magic. The next example shows how to find such a linear combination, if it exists.
Example Consider the following vectors in R3:
    v1 = (0, 0, 1)ᵀ , v2 = (1, 2, 1)ᵀ , v3 = (1, 2, 3)ᵀ .
Are they linearly independent?
We need to see whether the system
c1v1 + c2v2 + c3v3 = 0
has any solutions for c1,c2,c3. We can rewrite this as a homogeneous system:
    ( v1 v2 v3 ) ( c1 )
                 ( c2 ) = 0.
                 ( c3 )

This system has non-trivial solutions if and only if the matrix M = ( v1 v2 v3 ) is singular, so we should find the determinant of M:

    det M = det ( 0 1 1 )
                ( 0 2 2 )  = det( 1 1 ; 2 2 ) = 0.
                ( 1 1 3 )
Therefore nontrivial solutions exist. At this point we know that the vectors are linearly dependent. If we need to, we can find coefficients that demonstrate linear dependence by solving the system of equations:
    ( 0 1 1 | 0 )   ( 1 1 3 | 0 )   ( 1 0 2 | 0 )
    ( 0 2 2 | 0 ) ∼ ( 0 1 1 | 0 ) ∼ ( 0 1 1 | 0 ) .
    ( 1 1 3 | 0 )   ( 0 0 0 | 0 )   ( 0 0 0 | 0 )
Then c3 = μ, c2 = −μ, and c1 = −2μ. Now any choice of μ will produce coefficients c1, c2, c3 that satisfy the linear equation. So we can set μ = 1 and obtain:
    c1v1 + c2v2 + c3v3 = 0 ⇒ −2v1 − v2 + v3 = 0.

Reading homework: problem 16.1
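The whole procedure, form the matrix with the vi as columns, test the determinant, and if it vanishes solve for the coefficients, can be carried out mechanically. A sketch of ours with sympy, for the vectors of this example:

```python
import sympy as sp

v1 = sp.Matrix([0, 0, 1])
v2 = sp.Matrix([1, 2, 1])
v3 = sp.Matrix([1, 2, 3])

M = sp.Matrix.hstack(v1, v2, v3)
print(M.det())          # 0, so the vectors are linearly dependent

# A basis of solutions of M c = 0 gives the dependence coefficients
print(M.nullspace())    # [Matrix([[-2], [-1], [1]])]: -2 v1 - v2 + v3 = 0
```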
Theorem 16.1 (Linear Dependence). A set of non-zero vectors {v1, . . . , vn} is linearly dependent if and only if one of the vectors vk is expressible as a linear combination of the preceding vectors.
Proof. The theorem is an if and only if statement, so there are two things to show.
i. First, we show that if vk = c1v1 + · · · ck−1vk−1 then the set is linearly dependent.
This is easy. We just rewrite the assumption:
c1v1 +···+ck−1vk−1 −vk +0vk+1 +···+0vn =0.
This is a vanishing linear combination of the vectors {v1, . . . , vn} with not all coefficients equal to zero, so {v1, . . . , vn} is a linearly dependent set.
ii. Now, we show that linear dependence implies that there exists k for which vk is a linear combination of the vectors {v1, . . . , vk−1}.
The assumption says that
c1v1 +c2v2 +···+cnvn =0.
Take k to be the largest number for which ck is not equal to zero. So: c1v1 +c2v2 +···+ck−1vk−1 +ckvk =0.
(Note that k > 1, since otherwise we would have c1v1 = 0 ⇒ v1 = 0, contradicting the assumption that none of the vi are the zero vector.)
As such, we can rearrange the equation:
    c1v1 + c2v2 + · · · + ck−1vk−1 = −ckvk

    ⇒ −(c1/ck) v1 − (c2/ck) v2 − · · · − (ck−1/ck) vk−1 = vk .
Therefore we have expressed vk as a linear combination of the previous vectors, and we are done.
Worked proof
Example Consider the vector space P2(t) of polynomials of degree less than or equal to 2. Set:
v1 = 1+t
v2 = 1+t2
v3 = t+t2
v4 = 2+t+t2 v5 = 1+t+t2.
The set {v1, . . . , v5} is linearly dependent, because v4 = v1 + v2.
We have seen two different ways to show a set of vectors is linearly depen- dent: we can either find a linear combination of the vectors which is equal to zero, or we can express one of the vectors as a linear combination of the other vectors. On the other hand, to check that a set of vectors is linearly independent, we must check that every linear combination of our vectors with non-vanishing coefficients gives something other than the zero vector. Equiv- alently, to show that the set v1,v2,…,vn is linearly independent, we must show that the equation c1v1 + c2v2 + · · · + cnvn = 0 has no solutions other thanc1 =c2 =···=cn =0.
Example Consider the following vectors in R3:
    v1 = (0, 0, 2)ᵀ , v2 = (2, 2, 1)ᵀ , v3 = (1, 4, 3)ᵀ .
Are they linearly independent?
We need to see whether the system
has any solutions for c1, c2, c3. We can rewrite this as a homogeneous system:

    ( v1 v2 v3 ) ( c1 )
                 ( c2 ) = 0.
                 ( c3 )

This system has non-trivial solutions if and only if the matrix M = ( v1 v2 v3 ) is singular, so we should find the determinant of M:

    det M = det ( 0 2 1 )
                ( 0 2 4 )  = 2 det( 2 1 ; 2 4 ) = 12.
                ( 2 1 3 )
Since the matrix M has non-zero determinant, the only solution to the system of equations
    ( v1 v2 v3 ) ( c1 )
                 ( c2 ) = 0
                 ( c3 )
is c1 = c2 = c3 = 0. (Why?) So the vectors v1, v2, v3 are linearly independent.
Reading homework: problem 16.2
Now suppose vectors v1, . . . , vn are linearly dependent,

    c1v1 + c2v2 + · · · + cnvn = 0

with c1 ≠ 0. Then:

    span{v1, . . . , vn} = span{v2, . . . , vn}

because any x ∈ span{v1, . . . , vn} is given by

    x = a^1 v1 + · · · + a^n vn
      = a^1 ( −(c2/c1) v2 − · · · − (cn/c1) vn ) + a^2 v2 + · · · + a^n vn
      = ( a^2 − a^1 c2/c1 ) v2 + · · · + ( a^n − a^1 cn/c1 ) vn .
Then x is in span{v2,…,vn}.
When we write a vector space as the span of a list of vectors, we would
like that list to be as short as possible (we will explore this idea further in lecture 17). This can be achieved by iterating the above procedure.
Example In the above example, we found that v4 = v1 + v2. In this case, any expression for a vector as a linear combination involving v4 can be turned into a combination without v4 by making the substitution v4 = v1 + v2.
Then:
S = span{1+t,1+t2,t+t2,2+t+t2,1+t+t2} = span{1+t,1+t2,t+t2,1+t+t2}.
Now we notice that 1 + t + t² = (1/2)(1 + t) + (1/2)(1 + t²) + (1/2)(t + t²). So the vector 1 + t + t² = v5 is also extraneous, since it can be expressed as a linear combination of the remaining three vectors, v1, v2, v3. Therefore
S = span{1 + t, 1 + t2, t + t2}.
In fact, you can check that there are no (non-zero) solutions to the linear system
c1(1 + t) + c2(1 + t2) + c3(t + t2) = 0.
Therefore the remaining vectors {1 + t, 1 + t2, t + t2} are linearly independent, and span the vector space S. Then these vectors are a minimal spanning set, in the sense that no more vectors can be removed since the vectors are linearly independent. Such a set is called a basis for S.
Example Let B3 be the space of 3 × 1 bit-valued matrices (i.e., column vectors). Is
the following subset linearly independent?
    { (1, 1, 0)ᵀ , (1, 0, 1)ᵀ , (0, 1, 1)ᵀ }
If the set is linearly dependent, then we can find non-zero solutions to the system:
    c1 (1, 1, 0)ᵀ + c2 (1, 0, 1)ᵀ + c3 (0, 1, 1)ᵀ = 0,

which becomes the linear system

    ( 1 1 0 ) ( c1 )
    ( 1 0 1 ) ( c2 ) = 0.
    ( 0 1 1 ) ( c3 )
Non-trivial solutions exist if and only if the determinant of the matrix is zero. And indeed:
    det ( 1 1 0 )
        ( 1 0 1 )  = 1 det( 0 1 ; 1 1 ) − 1 det( 1 1 ; 0 1 ) = −1 − 1 = 1 + 1 = 0
        ( 0 1 1 )
Therefore non-trivial solutions exist, and the set is not linearly independent.
To summarize, the key definition in this lecture was:
Perhaps the most useful Theorem was:
References
Hefferon, Chapter Two, Section II: Linear Independence Hefferon, Chapter Two, Section III.1: Basis
Beezer, Chapter V, Section LI
Beezer, Chapter V, Section LDS
Beezer, Chapter VS, Section LISS, Subsection LI Wikipedia:
Linear Independence Basis
Review Problems
1. Let Bn be the space of n × 1 bit-valued matrices (i.e., column vectors) over the field Z2 := Z/2Z. Remember that this means that the co- efficients in any linear combination can be only 0 or 1, with rules for adding and multiplying coefficients given here.
(a) How many different vectors are there in Bn?
(b) Find a collection S of vectors that span B3 and are linearly inde-
pendent. In other words, find a basis of B3.
(c) Write each other vector in B3 as a linear combination of the vectors
in the set S that you chose.
(d) Would it be possible to span B3 with only two vectors?
Hint for Problem 1
2. Let ei be the vector in Rn with a 1 in the ith position and 0’s in every other position. Let v be an arbitrary vector in Rn.
(a) Show that the collection {e1, . . . , en} is linearly independent.
(b) Demonstrate that v = Σ_{i=1}^{n} (v · ei) ei .
(c) The span{e1, . . . , en} is the same as what vector space?
17 Basis and Dimension
In Lecture 16, we established the notion of a linearly independent set of vectors in a vector space V , and of a set of vectors that span V . We saw that any set of vectors that span V can be reduced to some minimal collection of linearly independent vectors; such a set is called a basis of the subspace V .
Definition Let V be a vector space. Then a set S is a basis for V if S is linearly independent and V = span S.
If S is a basis of V and S has only finitely many elements, then we say that V is finite-dimensional. The number of vectors in S is the dimension of V .
Suppose V is a finite-dimensional vector space, and S and T are two different bases for V . One might worry that S and T have a different number of vectors; then we would have to talk about the dimension of V in terms of the basis S or in terms of the basis T. Luckily this isn’t what happens. Later in this section, we will show that S and T must have the same number of vectors. This means that the dimension of a vector space does not depend on the basis. In fact, dimension is a very important way to characterize of any vector space V .
Example Pn(t) has a basis {1, t, . . . , tn}, since every polynomial of degree less than or equal to n is a sum
a01+a1t+···+antn, ai ∈R
so Pn(t) = span{1, t, . . . , tn}. This set of vectors is linearly independent: If the polynomial p(t) = c01+c1t+···+cntn = 0, then c0 = c1 = ··· = cn = 0, so p(t) is the zero polynomial.
Then Pn(t) is finite dimensional, and dim Pn(t) = n + 1.
Theorem 17.1. Let S = {v1, . . . , vn} be a basis for a vector space V . Then every vector w ∈ V can be written uniquely as a linear combination of vectors in the basis S:
w = c1v1 + · · · + cnvn.
Proof. Since S is a basis for V, then span S = V, and so there exist constants ci such that w = c1v1 + · · · + cnvn.
Suppose there exists a second set of constants di such that w = d1v1 + · · · + dnvn .
Then:
    0V = w − w
       = c1v1 + · · · + cnvn − (d1v1 + · · · + dnvn)
       = (c1 − d1)v1 + · · · + (cn − dn)vn .
If it occurs exactly once that ci ̸= di, then the equation reduces to 0 = (ci − di)vi, which is a contradiction since the vectors vi are assumed to be non-zero.
If we have more than one i for which ci ̸= di, we can use this last equation to write one of the vectors in S as a linear combination of other vectors in S, which contradicts the assumption that S is linearly independent. Then for every i, ci = di.
Proof of Theorem
Next, we would like to establish a method for determining whether a collection of vectors forms a basis for Rn. But first, we need to show that any two bases for a finite-dimensional vector space has the same number of vectors.
Lemma 17.2. If S = {v1,…,vn} is a basis for a vector space V and T = {w1, . . . , wm} is a linearly independent set of vectors in V , then m ≤ n.
The idea of the proof is to start with the set S and replace vectors in S one at a time with vectors from T , such that after each replacement we still have a basis for V .
Reading homework: problem 17.1
Proof. Since S spans V , then the set {w1, v1, . . . , vn} is linearly dependent. Then we can write w1 as a linear combination of the vi; using that equation, we can express one of the vi in terms of w1 and the remaining vj with j ̸=
i. Then we can discard one of the vi from this set to obtain a linearly independent set that still spans V . Now we need to prove that S1 is a basis; we need to show that S1 is linearly independent and that S1 spans V .
The set S1 = {w1,v1,…,vi−1,vi+1,…,vn} is linearly independent: By the previous theorem, there was a unique way to express w1 in terms of the set S. Now, to obtain a contradiction, suppose there is some k and constants ci such that
vk =c0w1 +c1v1 +···+ci−1vi−1 +ci+1vi+1 +···+cnvn.
Then replacing w1 with its expression in terms of the collection S gives a way to express the vector vk as a linear combination of the vectors in S, which contradicts the linear independence of S. On the other hand, we cannot express w1 as a linear combination of the vectors in {vj|j ̸= i}, since the expression of w1 in terms of S was unique, and had a non-zero coefficient on the vector vi. Then no vector in S1 can be expressed as a combination of other vectors in S1, which demonstrates that S1 is linearly independent.
The set S1 spans V: For any u ∈ V, we can express u as a linear combination of vectors in S. But we can express vi as a linear combination of vectors in the collection S1; rewriting vi as such allows us to express u as a linear combination of the vectors in S1.
Then S1 is a basis of V with n vectors.
We can now iterate this process, replacing one of the vi in S1 with w2, and so on. If m ≤ n, this process ends with the set Sm = {w1,…,wm, vi1,…,vin−m}, which is fine.
Otherwise, we have m > n, and the set Sn = {w1,…,wn} is a basis for V . But we still have some vector wn+1 in T that is not in Sn. Since Sn is a basis, we can write wn+1 as a combination of the vectors in Sn, which
contradicts the linear independence of the set T. Then it must be the case that m ≤ n, as desired.
Worked Example
Corollary 17.3. For a finite-dimensional vector space V , any two bases for V have the same number of vectors.
Proof. Let S and T be two bases for V . Then both are linearly independent sets that span V . Suppose S has n vectors and T has m vectors. Then by the previous lemma, we have that m ≤ n. But (exchanging the roles of S and T in application of the lemma) we also see that n ≤ m. Then m = n, as desired.
Reading homework: problem 17.2
17.1 Bases in Rn.
From one of the review questions, we know that
    Rn = span{ (1, 0, . . . , 0)ᵀ , (0, 1, . . . , 0)ᵀ , . . . , (0, 0, . . . , 1)ᵀ }
and that this set of vectors is linearly independent. So this set of vectors is a basis for Rn, and dimRn = n. This basis is often called the standard or canonical basis for Rn. The vector with a one in the ith position and zeros everywhere else is written ei. It points in the direction of the ith coordinate axis, and has unit length. In multivariable calculus classes, this basis is often written {i, j, k} for R3.
Note that it is often convenient to order basis elements, so rather than writing a set of vectors, we would write a list. This is called an ordered basis. For example, the canonical ordered basis for Rn is (e1, e2, . . . , en). The possibility to reorder basis vectors is not the only way in which bases are non-unique:
Bases are not unique. While there exists a unique way to express a vector in terms of any particular basis, bases themselves are far from unique. For example, both of
the sets:

    { (1, 0)ᵀ , (0, 1)ᵀ }   and   { (1, 1)ᵀ , (1, −1)ᵀ }
are bases for R2. Rescaling any vector in one of these sets is already enough to show that R2 has infinitely many bases. But even if we require that all of the basis vectors have unit length, it turns out that there are still infinitely many bases for R2. (See Review Question 3.)
To see whether a collection of vectors S = {v1, . . . , vm} is a basis for Rn, we have to check that they are linearly independent and that they span Rn. From the previous discussion, we also know that m must equal n, so assume S has n vectors.
If S is linearly independent, then there is no non-trivial solution of the equation
0=x1v1 +···+xnvn.
Let M be a matrix whose columns are the vectors vi. Then the above equa-
tion is equivalent to requiring that there is a unique solution to MX = 0.
To see if S spans Rn, we take an arbitrary vector w and solve the linear system
    w = x1v1 + · · · + xnvn

in the unknowns xi. For this, we need to find a unique solution for the linear system MX = w.
Thus, we need to show that M−1 exists, so that X = M−1w
is the unique solution we desire. Then we see that S is a basis for V if and only if det M ̸= 0.
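In practice this means: put the candidate vectors into the columns of a square matrix and test whether its determinant vanishes. A small sketch of ours with numpy (the function name is our own, and the numerical tolerance is a practical convenience):

```python
import numpy as np

def is_basis_of_Rn(vectors):
    """True if the vectors form a basis of R^n (n = length of each vector)."""
    M = np.column_stack(vectors)
    return M.shape[0] == M.shape[1] and not np.isclose(np.linalg.det(M), 0.0)

print(is_basis_of_Rn([np.array([1., 1.]), np.array([1., -1.])]))  # True
print(is_basis_of_Rn([np.array([1., 2.]), np.array([2., 4.])]))   # False: parallel
```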
Theorem 17.4. Let S = {v1, . . . , vm} be a collection of vectors in Rn. Let M be the matrix whose columns are the vectors in S. Then S is a basis for V if and only if m is the dimension of V and
    det M ≠ 0.

Example Let

    S = { (1, 0)ᵀ , (0, 1)ᵀ }   and   T = { (1, 1)ᵀ , (1, −1)ᵀ } .

Then set MS = ( 1 0 ; 0 1 ). Since det MS = 1 ≠ 0, then S is a basis for R².
Likewise, set MT = ( 1 1 ; 1 −1 ). Since det MT = −2 ≠ 0, then T is a basis for R².

References
Hefferon, Chapter Two, Section II: Linear Independence Hefferon, Chapter Two, Section III.1: Basis
Beezer, Chapter VS, Section B, Subsections B-BNM Beezer, Chapter VS, Section D, Subsections D-DVS Wikipedia:
Linear Independence Basis
Review Problems
1. (a) Draw the collection of all unit vectors in R².
(b) Let Sx = { (1, 0)ᵀ , x }, where x is a unit vector in R². For which x is Sx a basis of R²?
2. Let Bn be the vector space of column vectors with bit entries 0, 1. Write down every basis for B1 and B2. How many bases are there for B3? B4? Can you make a conjecture for the number of bases for Bn?
(Hint: You can build up a basis for Bn by choosing one vector at a time, such that the vector you choose is not in the span of the previous vectors you’ve chosen. How many vectors are in the span of any one vector? Any two vectors? How many vectors are in the span of any k vectors, for k ≤ n?)
Hint for Problem 2
3. Suppose that V is an n-dimensional vector space.
(a) Show that any n linearly independent vectors in V form a basis.
(Hint: Let {w1, . . . , wm} be a collection of n linearly independent vectors in V, and let {v1,…,vn} be a basis for V. Apply the method of Lemma 17.2 to these two sets of vectors.)
(b) Show that any set of n vectors in V which span V forms a basis for V .
(Hint: Suppose that you have a set of n vectors which span V but do not form a basis. What must be true about them? How could you get a basis from this set? Use Corollary 17.3 to derive a contradiction.)
4. Let S be a collection of vectors {v1,…,vn} in a vector space V. Show that if every vector w in V can be expressed uniquely as a linear combi- nation of vectors in S, then S is a basis of V . In other words: suppose that for every vector w in V , there is exactly one set of constants c1,…,cn so that c1v1 +···+cnvn = w. Show that this means that the set S is linearly independent and spans V . (This is the converse to the theorem in the lecture.)
5. Vectors are objects that you can add together; show that the set of all linear transformations mapping R3 → R is itself a vector space. Find a basis for this vector space. Do you think your proof could be modified to work for linear transformations Rn → R?
Hint: Represent R3 as column vectors, and argue that a linear trans- formation T : R3 → R is just a row vector. If you are stuck or just curious, see dual space.
18 Eigenvalues and Eigenvectors
Before discussing eigenvalues and eigenvectors, we need to have a better understanding of the relationship between linear transformations and matrices. Consider, as an example, the plane R².
The information of the vector v can be transmitted in many ways. In the basis {e1, e2} it is the ordered pair (x, y) = (2, 2), while in the basis {f1, f2} it corresponds to (s, t) = (2, 1). This can be confusing; the idea to keep firm in your mind is that the vector space and its elements (vectors) are what really "exist". Typically they will correspond to configurations of the real world system you are trying to describe. On the other hand, things like coordinate axes and "components of a vector" (x, y) are just mathematical tools used to label vectors.
18.1 Matrix of a Linear Transformation
Let V and W be vector spaces, with bases S = {e1, . . . , en} and T = {f1, . . . , fm} respectively. Since these are bases, there exist constants v^i and w^j such that any vectors v ∈ V and w ∈ W can be written as:

    v = v^1 e1 + v^2 e2 + · · · + v^n en
    w = w^1 f1 + w^2 f2 + · · · + w^m fm
We call the coefficients v1, . . . , vn the components of v in the basis10 {e1, . . . , en}. It is often convenient to arrange the components vi in a column vector and the basis vector in a row vector by writing
    v = ( e1 e2 · · · en ) ( v^1 )
                           ( v^2 )
                           (  ⋮  )
                           ( v^n )
Worked Example
Example Consider the basis S = {1 − t, 1 + t} for the vector space P1 (t). The vector v = 2t has components v1 = −1, v2 = 1, because
    v = −1·(1 − t) + 1·(1 + t) = ( (1 − t) (1 + t) ) ( −1 )
                                                     (  1 ) .
We may consider these components as vectors in Rn and Rm:

    ( v^1 )            ( w^1 )
    (  ⋮  ) ∈ Rn ,     (  ⋮  ) ∈ Rm .
    ( v^n )            ( w^m )
Now suppose we have a linear transformation L: V → W. Then we can expect to write L as an m × n matrix, turning an n-dimensional vector of coefficients corresponding to v into an m-dimensional vector of coefficients for w.
Using linearity, we write:
    L(v) = L(v^1 e1 + v^2 e2 + · · · + v^n en)
         = v^1 L(e1) + v^2 L(e2) + · · · + v^n L(en)
         = ( L(e1) L(e2) · · · L(en) ) ( v^1 )
                                       ( v^2 )
                                       (  ⋮  )
                                       ( v^n )
10To avoid confusion, it helps to notice that components of a vector are almost always labeled by a superscript, while basis vectors are labeled by subscripts in the conventions of these lecture notes.
This is a vector in W. Let’s compute its components in W.
We know that for each ej, L(ej) is a vector in W, and can thus be written uniquely as a linear combination of vectors in the basis T . Then we can find
coefficients M^i_j such that:

    L(ej) = f1 M^1_j + · · · + fm M^m_j = Σ_i fi M^i_j = ( f1 f2 · · · fm ) ( M^1_j )
                                                                            ( M^2_j )
                                                                            (   ⋮   )
                                                                            ( M^m_j ) .

We've written the M^i_j on the right side of the f's to agree with our previous notation for matrix multiplication. We have an "up-hill rule" where the matching indices for the multiplied objects run up and to the right, like so: fi M^i_j.

Now M^i_j is the ith component of L(ej). Regarding the coefficients M^i_j as a matrix, we can see that the jth column of M is the coefficients of L(ej) in the basis T.

Then we can write:

    L(v) = L(v^1 e1 + v^2 e2 + · · · + v^n en)
         = v^1 L(e1) + v^2 L(e2) + · · · + v^n L(en)
         = Σ_{j=1}^{n} L(ej) v^j
         = Σ_{j=1}^{n} ( M^1_j f1 + · · · + M^m_j fm ) v^j
         = Σ_{i=1}^{m} Σ_{j=1}^{n} fi M^i_j v^j
         = ( f1 f2 · · · fm ) ( M^1_1  M^1_2 · · · M^1_n ) ( v^1 )
                              ( M^2_1  M^2_2 · · · M^2_n ) ( v^2 )
                              (   ⋮                  ⋮  ) (  ⋮  )
                              ( M^m_1    · · ·    M^m_n ) ( v^n )

The second-to-last equality is the definition of matrix multiplication, which is obvious from the last line. Thus:

    ( v^1 )   L   ( M^1_1 · · · M^1_n ) ( v^1 )
    (  ⋮  )  →    (   ⋮            ⋮  ) (  ⋮  ) ,
    ( v^n )       ( M^m_1 · · · M^m_n ) ( v^n )

and M = (M^i_j) is called the matrix of L. Notice that this matrix depends on a choice of bases for both V and W. Also observe that the columns of M are computed by examining L acting on each basis vector in V expanded in the basis vectors of W.
Example Let L: P1(t) → P1(t), such that L(a + bt) = (a + b)t. Since V = P1(t) = W, let’s choose the same basis for V and W. We’ll choose the basis {1−t,1+t} for this example.
Thus:
    L(1 − t) = (1 − 1)t = 0  = (1 − t)·0 + (1 + t)·0    = ( (1 − t) (1 + t) ) (  0 )
                                                                              (  0 )

    L(1 + t) = (1 + 1)t = 2t = (1 − t)·(−1) + (1 + t)·1 = ( (1 − t) (1 + t) ) ( −1 )
                                                                              (  1 )

    ⇒ M = ( 0 −1 )
          ( 0  1 )
To obtain the last line we used that fact that the columns of M are just the coefficients of L on each of the basis vectors; this always makes it easy to write down M in terms of the basis we have chosen.
Reading homework: problem 20.1
Example Consider a linear transformation
L: R2 → R2 .
Suppose we know that L (1, 0)ᵀ = (a, c)ᵀ and L (0, 1)ᵀ = (b, d)ᵀ. Then, because of linearity, we can determine what L does to any vector (x, y)ᵀ:

    L (x, y)ᵀ = L( x (1, 0)ᵀ + y (0, 1)ᵀ ) = x L (1, 0)ᵀ + y L (0, 1)ᵀ = x (a, c)ᵀ + y (b, d)ᵀ = (ax + by, cx + dy)ᵀ .

Now notice that for any vector (x, y)ᵀ, we have

    ( a b ) ( x )   ( ax + by )     ( x )
    ( c d ) ( y ) = ( cx + dy ) = L ( y ) .

Then the matrix ( a b ; c d ) acts by matrix multiplication in the same way that L does. This is the matrix of L in the basis { (1, 0)ᵀ, (0, 1)ᵀ }.
Example Any vector in Rn can be written as a linear combination of the standard basis vectors {ei|i ∈ {1,…,n}}. The vector ei has a one in the ith position, and zeros everywhere else. I.e.
    e1 = (1, 0, . . . , 0)ᵀ ,  e2 = (0, 1, . . . , 0)ᵀ ,  . . . ,  en = (0, 0, . . . , 1)ᵀ .
Then to find the matrix of any linear transformation L: Rn → Rn, it suffices to know what L(ei) is for every i.
For any matrix M, observe that Mei is equal to the ith column of M. Then if the ith column of M equals L(ei) for every i, then Mv = L(v) for every v ∈ Rn. Then the matrix representing L in the standard basis is just the matrix whose ith column is L(ei ).
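This observation gives a recipe for writing down the matrix of any linear map in the standard basis: apply L to each ei and use the results as columns. A sketch of ours (we borrow, purely for illustration, the map that appears in the next subsection):

```python
import numpy as np

def matrix_of(L, n):
    """Matrix of a linear map L: R^n -> R^n in the standard basis."""
    return np.column_stack([L(e) for e in np.eye(n)])

# Illustration: L(x, y) = (-4x + 3y, -10x + 7y)
L = lambda v: np.array([-4 * v[0] + 3 * v[1], -10 * v[0] + 7 * v[1]])
M = matrix_of(L, 2)
print(M)                          # [[ -4.   3.] [-10.   7.]]
v = np.array([3., 5.])
print(np.allclose(M @ v, L(v)))   # True
```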
18.2 Invariant Directions
Have a look at the linear transformation L depicted below:
It was picked at random by choosing a pair of vectors L(e1) and L(e2) as the outputs of L acting on the canonical basis vectors. Notice how the unit square with a corner at the origin gets mapped to a parallelogram. The second line of the picture shows these superimposed on one another. Now look at the second picture on that line. There, two vectors f1 and f2 have been carefully chosen such that if the inputs into L are in the parallelogram spanned by f1 and f2, the outputs also form a parallelogram with edges lying along the same two directions. Clearly this is a very special situation that should correspond to interesting properties of L.
Now let's try an explicit example to see if we can achieve the last picture:
Example Consider the linear transformation L such that

    L (1, 0)ᵀ = (−4, −10)ᵀ  and  L (0, 1)ᵀ = (3, 7)ᵀ ,

so that the matrix of L is

    (  −4  3 )
    ( −10  7 ) .

Recall that a vector is a direction and a magnitude; L applied to (1, 0)ᵀ or (0, 1)ᵀ changes both the direction and the magnitude of the vectors given to it.

Notice that

    L (3, 5)ᵀ = ( −4·3 + 3·5 , −10·3 + 7·5 )ᵀ = (3, 5)ᵀ .

Then L fixes the direction (and actually also the magnitude) of the vector v1 = (3, 5)ᵀ. In fact also the vector v2 = (1, 2)ᵀ has its direction fixed by M.
Reading homework: problem 18.1
Now, notice that any vector with the same direction as v1 can be written as cv1 for some constant c. Then L(cv1) = cL(v1) = cv1, so L fixes every vector pointing in the same direction as v1.
Also notice that
    L (1, 2)ᵀ = ( −4·1 + 3·2 , −10·1 + 7·2 )ᵀ = (2, 4)ᵀ = 2 (1, 2)ᵀ .

Then L fixes the direction of the vector v2 = (1, 2)ᵀ but stretches v2 by a factor of 2.
Now notice that for any constant c, L(cv2) = cL(v2) = 2cv2. Then L stretches every vector pointing in the same direction as v2 by a factor of 2.
In short, given a linear transformation L it is sometimes possible to find a vector v ̸= 0 and constant λ ̸= 0 such that
L(v) = λv.
We call the direction of the vector v an invariant direction. In fact, any vector pointing in the same direction also satisfies the equation: L(cv) = cL(v) = λcv. The vector v is called an eigenvector of L, and λ is an eigen- value. Since the direction is all we really care about here, then any other vector cv (so long as c ̸= 0) is an equally good choice of eigenvector. Notice that the relation “u and v point in the same direction” is an equivalence relation.
In our example of the linear transformation L with matrix

    (  −4  3 )
    ( −10  7 ) ,
we have seen that L enjoys the property of having two invariant directions, represented by eigenvectors v1 and v2 with eigenvalues 1 and 2, respectively. It would be very convenient if we could write any vector w as a linear combination of v1 and v2. Suppose w = rv1 +sv2 for some constants r and s.
Then:
L(w) = L(rv1 + sv2) = rL(v1) + sL(v2) = rv1 + 2sv2.
Now L just multiplies the number r by 1 and the number s by 2. If we could write this as a matrix, it would look like:
    ( 1 0 ) ( s )
    ( 0 2 ) ( t )
which is much slicker than the usual scenario
    L ( x )   ( a b ) ( x )   ( ax + by )
      ( y ) = ( c d ) ( y ) = ( cx + dy ) .
Here, s and t give the coordinates of w in terms of the vectors v1 and v2. In the previous example, we multiplied the vector by the matrix L and came up with a complicated expression. In these coordinates, we can see that L is a very simple diagonal matrix, whose diagonal entries are exactly the eigenvalues of L.
This process is called diagonalization. It makes complicated linear sys- tems much easier to analyze.
Reading homework: problem 18.2
Now that we’ve seen what eigenvalues and eigenvectors are, there are a
number of questions that need to be answered. 154
How do we find eigenvectors and their eigenvalues?
How many eigenvalues and (independent) eigenvectors does a given
linear transformation have?
When can a linear transformation be diagonalized?
We’ll start by trying to find the eigenvectors for a linear transformation. 2×2 Example
Example Let L: R² → R² such that L(x, y) = (2x + 2y, 16x + 6y). First, we can find the matrix of L:

    ( x )   L    (  2 2 ) ( x )
    ( y )  −→    ( 16 6 ) ( y ) .

We want to find an invariant direction v = (x, y)ᵀ such that

    L(v) = λv

or, in matrix notation,

       (  2 2 ) ( x )     ( x )
       ( 16 6 ) ( y ) = λ ( y )

    ⇔  (  2 2 ) ( x )   ( λ 0 ) ( x )
       ( 16 6 ) ( y ) = ( 0 λ ) ( y )

    ⇔  ( 2 − λ    2    ) ( x )   ( 0 )
       (  16    6 − λ  ) ( y ) = ( 0 ) .

This is a homogeneous system, so it only has non-zero solutions when the matrix

    ( 2 − λ    2    )
    (  16    6 − λ  )

is singular. In other words,

    det ( 2 − λ    2    )  = 0
        (  16    6 − λ  )

    ⇔ (2 − λ)(6 − λ) − 32 = 0
    ⇔ λ² − 8λ − 20 = 0
    ⇔ (λ − 10)(λ + 2) = 0
For any square n × n matrix M , the polynomial in λ given by PM (λ) = det(λI − M ) = (−1)n det(M − λI )
is called the characteristic polynomial of M, and its roots are the eigenvalues of M. In this case, we see that L has two eigenvalues, λ1 = 10 and λ2 = −2. To find the eigenvectors, we need to deal with these two cases separately. To do so, we solve the
linear system

    ( 2 − λ    2    ) ( x )   ( 0 )
    (  16    6 − λ  ) ( y ) = ( 0 )

with the particular eigenvalue λ plugged into the matrix.
λ = 10: We solve the linear system

    ( −8  2 ) ( x )   ( 0 )
    ( 16 −4 ) ( y ) = ( 0 ) .

Both equations say that y = 4x, so any vector (x, 4x)ᵀ will do. Since we only need the direction of the eigenvector, we can pick a value for x. Setting x = 1 is convenient, and gives the eigenvector v1 = (1, 4)ᵀ.

λ = −2: We solve the linear system

    (  4 2 ) ( x )   ( 0 )
    ( 16 8 ) ( y ) = ( 0 ) .

Here again both equations agree, because we chose λ to make the system singular. We see that y = −2x works, so we can choose v2 = (1, −2)ᵀ.

In short, our process was the following:
Find the characteristic polynomial of the matrix M for L, given by11 det(λI − M).
Find the roots of the characteristic polynomial; these are the eigenvalues of L. For each eigenvalue λi, solve the linear system (M − λiI)v = 0 to obtain an
eigenvector v associated to λi.
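The three steps can be checked against a numerical library. A sketch of ours using numpy for the example above; note that numpy returns eigenvectors normalized to unit length, so they are scalar multiples of (1, 4)ᵀ and (1, −2)ᵀ.

```python
import numpy as np

M = np.array([[ 2.,  2.],
              [16.,  6.]])

evals, evecs = np.linalg.eig(M)
print(evals)                              # 10 and -2 (possibly in the other order)
for lam, v in zip(evals, evecs.T):
    print(np.allclose(M @ v, lam * v))    # True: M v = lambda v
```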
Jordan block example
11It is often easier (and equivalent if you only need the roots) to compute det(M − λI).
References
Hefferon, Chapter Three, Section III.1: Representing Linear Maps with Ma- trices
Hefferon, Chapter Five, Section II.3: Eigenvalues and Eigenvectors
Beezer, Chapter E, Section EE
Wikipedia:
Eigen*
Characteristic Polynomial
Linear Transformations (and matrices thereof)
Review Problems
1. Let M = ( 2 1 ; 0 2 ). Find all eigenvalues of M. Does M have two independent12 eigenvectors? Can M be diagonalized?
2. ConsiderL:R2 →R2withL(x,y)=(xcosθ+ysinθ,−xsinθ+ycosθ).
(a) Write the matrix of L in the basis { (1, 0)ᵀ , (0, 1)ᵀ }.
(b) When θ ̸= 0, explain how L acts on the plane. Draw a picture.
(c) Do you expect L to have invariant directions?
(d) Try to find real eigenvalues for L by solving the equation L(v) = λv.
(e) Are there complex eigenvalues for L, assuming that i = √−1 exists?
3. Let L be the linear transformation L: R3 → R3 given by L(x,y,z) = (x+y,x+z,y+z). Let ei be the vector with a one in the ith position and zeros in all other positions.
(a) Find Lei for each i.

12Independence of vectors is explained here.
(b) Given a matrix

    M = ( m^1_1  m^1_2  m^1_3 )
        ( m^2_1  m^2_2  m^2_3 ) ,
        ( m^3_1  m^3_2  m^3_3 )

what can you say about Mei for each i?
(c) Find a 3 × 3 matrix M representing L. Choose three nonzero vectors pointing in different directions and show that Mv = Lv for each of your choices.
(d) Find the eigenvectors and eigenvalues of M.
4. Let A be a matrix with eigenvector v with eigenvalue λ. Show that v is also an eigenvector for A2 and what is its eigenvalue? How about for An where n ∈ N? Suppose that A is invertible, show that v is also an eigenvector for A−1.
5. A projection is a linear operator P such that P2 = P. Let v be an eigenvector with eigenvalue λ for a projection P, what are all possible values of λ? Show that every projection P has at least one eigenvector.
Note that every complex matrix has at least 1 eigenvector, but you need to prove the above for any field.
19 Eigenvalues and Eigenvectors II
In Lecture 18, we developed the idea of eigenvalues and eigenvectors in the case of linear transformations R2 → R2. In this section, we will develop the idea more generally.
Eigenvalues
Definition For a linear transformation L : V → V , then λ is an eigenvalue of L with eigenvector v ̸= 0V if
Lv = λv.
This equation says that the direction of v is invariant (unchanged) under L. Let’s try to understand this equation better in terms of matrices. Let V be a finite-dimensional vector space and let L: V → V. Since we can represent L by a square matrix M, we find eigenvalues λ and associated
eigenvectors v by solving the homogeneous system (M − λI)v = 0.
This system has non-zero solutions if and only if the matrix
M − λI is singular, and so we require that
det(λI − M) = 0.
The left hand side of this equation is a polynomial in the variable λ called the characteristic polynomial PM (λ) of M . For an n × n matrix, the characteristic polynomial has degree n. Then
PM(λ)=λn +c1λn−1 +···+cn.
Notice that PM (0) = det(−M ) = (−1)n det M .
The fundamental theorem of algebra states that any polynomial can be
factored into a product of linear terms over C. Then there exists a collection of n complex numbers λi (possibly with repetition) such that
    PM(λ) = (λ − λ1)(λ − λ2) · · · (λ − λn),   PM(λi) = 0 .
The eigenvalues λi of M are exactly the roots of PM (λ). These eigenvalues could be real or complex or zero, and they need not all be different. The number of times that any given root λi appears in the collection of eigenvalues is called its multiplicity.
Example Let L be the linear transformation L: R3 → R3 given by L(x, y, z) = (2x + y − z, x + 2y − z, −x − y + 2z) .
The matrix M representing L has columns Lei for each i, so:
    ( x )   L    (  2  1 −1 ) ( x )
    ( y )  −→    (  1  2 −1 ) ( y ) .
    ( z )        ( −1 −1  2 ) ( z )
Then the characteristic polynomial of L is13
    PM(λ) = det ( λ − 2   −1      1   )
                (  −1    λ − 2    1   )
                (   1      1    λ − 2 )

          = (λ − 2)[(λ − 2)² − 1] + [−(λ − 2) − 1] + [−(λ − 2) − 1]
          = (λ − 1)²(λ − 4)
Then L has eigenvalues λ1 = 1 (with multiplicity 2), and λ2 = 4 (with multiplicity 1). To find the eigenvectors associated to each eigenvalue, we solve the homogeneous
system (M − λiI)X = 0 for each i.
13It is often easier (and equivalent) to solve det(M − λI) = 0.
λ = 4:
We set up the augmented matrix for the linear system:

    ( −2  1 −1 | 0 )   ( 1 −2 −1 | 0 )   ( 1 0 1 | 0 )
    (  1 −2 −1 | 0 ) ∼ ( 0 −3 −3 | 0 ) ∼ ( 0 1 1 | 0 ) .
    ( −1 −1 −2 | 0 )   ( 0 −3 −3 | 0 )   ( 0 0 0 | 0 )

So we see that z = t, y = −t, and x = −t gives a formula for eigenvectors in terms of the free parameter t. Any such eigenvector is of the form t (−1, −1, 1)ᵀ; thus L leaves a line through the origin invariant.

λ = 1:
Again we set up an augmented matrix and find the solution set:

    (  1  1 −1 | 0 )   ( 1 1 −1 | 0 )
    (  1  1 −1 | 0 ) ∼ ( 0 0  0 | 0 ) .
    ( −1 −1  1 | 0 )   ( 0 0  0 | 0 )

Then the solution set has two free parameters, s and t, such that z = t, y = s, and x = −s + t. Then L leaves invariant the set:

    { s (−1, 1, 0)ᵀ + t (1, 0, 1)ᵀ | s, t ∈ R } .

This set is a plane through the origin. So the multiplicity two eigenvalue has two independent eigenvectors, (−1, 1, 0)ᵀ and (1, 0, 1)ᵀ, that determine an invariant plane.
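The same computation can be reproduced exactly with sympy (a sketch of ours), which reports each eigenvalue with its multiplicity and a basis of the corresponding eigenspace:

```python
import sympy as sp

M = sp.Matrix([[ 2,  1, -1],
               [ 1,  2, -1],
               [-1, -1,  2]])

lam = sp.symbols('lambda')
print(sp.factor((lam * sp.eye(3) - M).det()))   # (lambda - 4)*(lambda - 1)**2

for eigval, mult, basis in M.eigenvects():
    print(eigval, mult, basis)
# 1 has multiplicity 2 with a two-dimensional eigenspace (an invariant plane);
# 4 has multiplicity 1 with a one-dimensional eigenspace (an invariant line).
```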
Example Let V be the vector space of smooth (i.e. infinitely differentiable) functions
f: R → R. Then the derivative is a linear operator $\frac{d}{dx}: V \to V$. What are the eigenvectors of the derivative? In this case, we don't have a matrix to work with, so we have to make do.
A function f is an eigenvector of $\frac{d}{dx}$ if there exists some number λ such that $\frac{d}{dx}f = \lambda f$. An obvious candidate is the exponential function $e^{\lambda x}$; indeed, $\frac{d}{dx}e^{\lambda x} = \lambda e^{\lambda x}$. As such, the operator $\frac{d}{dx}$ has an eigenvector $e^{\lambda x}$ for every λ ∈ R.
This is actually the whole collection of eigenvectors for $\frac{d}{dx}$; this can be proved using the fact that every infinitely differentiable function has a Taylor series with infinite radius of convergence, and then using the Taylor series to show that if two functions are eigenvectors of $\frac{d}{dx}$ with eigenvalue λ, then they are scalar multiples of each other.
19.1 Eigenspaces
In the previous example, we found two eigenvectors $\begin{pmatrix}-1\\1\\0\end{pmatrix}$ and $\begin{pmatrix}1\\0\\1\end{pmatrix}$ for L with eigenvalue 1. Notice that $\begin{pmatrix}-1\\1\\0\end{pmatrix} + \begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}0\\1\\1\end{pmatrix}$ is also an eigenvector of L with eigenvalue 1. In fact, any linear combination $r\begin{pmatrix}-1\\1\\0\end{pmatrix} + s\begin{pmatrix}1\\0\\1\end{pmatrix}$ of these two eigenvectors will be another eigenvector with the same eigenvalue. More generally, let {v1, v2, . . .} be eigenvectors of some linear transformation L with the same eigenvalue λ. A linear combination of the vi can be
written c1v1 + c2v2 + · · · for some constants {c1, c2, . . .}. Then:
$$\begin{aligned}
L(c_1v_1 + c_2v_2 + \cdots) &= c_1Lv_1 + c_2Lv_2 + \cdots &&\text{by linearity of } L\\
&= c_1\lambda v_1 + c_2\lambda v_2 + \cdots &&\text{since } Lv_i = \lambda v_i\\
&= \lambda(c_1v_1 + c_2v_2 + \cdots).
\end{aligned}$$
So every linear combination of the vi is an eigenvector of L with the same eigenvalue λ. In simple terms, any sum of eigenvectors is again an eigenvector if they share the same eigenvalue.
The space of all vectors with eigenvalue λ is called an eigenspace. It is, in fact, a vector space contained within the larger vector space V : It contains 0V , since L0V = 0V = λ0V , and is closed under addition and scalar multiplication by the above calculation. All other vector space properties are inherited from the fact that V itself is a vector space.
An eigenspace is an example of a subspace of V , a notion explored in Lecture 15.
More on eigenspaces
Reading homework: problem 19.1
You are now ready to attempt the second sample midterm.
References
Hefferon, Chapter Three, Section III.1: Representing Linear Maps with Ma- trices
Hefferon, Chapter Five, Section II.3: Eigenvalues and Eigenvectors
Beezer, Chapter E, Section EE
Wikipedia:
Eigen*
Characteristic Polynomial
Linear Transformations (and matrices thereof)
Review Problems
1. Explain why the characteristic polynomial of an n×n matrix has degree n. Make your explanation easy to read by starting with some simple examples, and then use properties of the determinant to give a general explanation.
2. Compute the characteristic polynomial $P_M(\lambda)$ of the matrix $M = \begin{pmatrix}a&b\\c&d\end{pmatrix}$. Now, since we can evaluate polynomials on square matrices, we can plug M into its characteristic polynomial and find the matrix $P_M(M)$. What do you find from this computation? Does something similar hold for 3 × 3 matrices? What about n × n matrices?
3. Discrete dynamical system. Let M be the matrix given by
$$M = \begin{pmatrix}3&2\\2&3\end{pmatrix}.$$
Given any vector $v(0) = \begin{pmatrix}x(0)\\y(0)\end{pmatrix}$, we can create an infinite sequence of vectors v(1), v(2), v(3), and so on using the rule
v(t + 1) = Mv(t) for all natural numbers t.
(This is known as a discrete dynamical system whose initial condition
is v(0).)
(a) Find all eigenvectors and eigenvalues of M.
(b) Find all vectors v(0) such that
v(0) = v(1) = v(2) = v(3) = · · ·
(Such a vector is known as a fixed point of the dynamical system.)
(c) Find all vectors v(0) such that v(0), v(1), v(2), v(3), . . . all point in the same direction. (Any such vector describes an invariant curve of the dynamical system.)
Hint
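If you would like to experiment with this dynamical system before solving the problem, the following short Python sketch (assuming NumPy; not part of the original problem) simply iterates the rule v(t + 1) = Mv(t) from a chosen initial condition:

```python
import numpy as np

M = np.array([[3., 2.],
              [2., 3.]])

v = np.array([1., 0.])   # an arbitrary initial condition v(0); try others
for t in range(5):
    print(f"v({t}) =", v)
    v = M @ v            # the rule v(t+1) = M v(t)
```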
20 Diagonalization
Given a linear transformation, we are interested in how to write it as a matrix. We are especially interested in the case that the matrix is written with respect to a basis of eigenvectors, in which case it is a particularly nice matrix.
20.1 Diagonalization
Now suppose we are lucky, and we have L: V → V, and the basis {v1, . . . , vn} is a set of linearly independent eigenvectors for L, with eigenvalues λ1, . . . , λn. Then:
$$\begin{aligned}L(v_1) &= \lambda_1v_1\\ L(v_2) &= \lambda_2v_2\\ &\;\;\vdots\\ L(v_n) &= \lambda_nv_n\end{aligned}$$
As a result, the matrix of L in the basis of eigenvectors is diagonal:
$$\begin{pmatrix}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n\end{pmatrix},$$
where all entries off of the diagonal are zero.
Suppose that V is any n-dimensional vector space. We call a linear trans-
formation L: V → V diagonalizable if there exists a collection of n linearly independent eigenvectors for L. In other words, L is diagonalizable if there exists a basis for V of eigenvectors for L.
In a basis of eigenvectors, the matrix of a linear transformation is diag- onal. On the other hand, if an n × n matrix is diagonal, then the standard basis vectors ei must already be a set of n linearly independent eigenvectors. We have shown:
Theorem 20.1. Given a basis S for a vector space V and a linear transfor- mation L: V → V , then the matrix for L in the basis S is diagonal if and only if S is a basis of eigenvectors for L.
Non-diagonalizable example
Reading homework: problem 20.2
20.2 Change of Basis
Suppose we have two bases S = {v1,…,vn} and T = {u1,…,un} for a vector space V . (Here vi and ui are vectors, not components of vectors in a basis!) Then we may write each vi uniquely as a linear combination of the uj :
$$v_j = \sum_i u_iP^i_j,$$
or in matrix notation
$$\begin{pmatrix}v_1 & v_2 & \cdots & v_n\end{pmatrix} = \begin{pmatrix}u_1 & u_2 & \cdots & u_n\end{pmatrix}\begin{pmatrix}P^1_1 & P^1_2 & \cdots & P^1_n\\ P^2_1 & P^2_2 & & \vdots\\ \vdots & & \ddots & \\ P^n_1 & \cdots & & P^n_n\end{pmatrix}.$$
Here, the $P^i_j$ are constants, which we can regard as entries of a square matrix $P = (P^i_j)$. The matrix P must have an inverse, since we can also write each $u_i$ uniquely as a linear combination of the $v_j$:
$$u_j = \sum_k v_kQ^k_j.$$
Then we can write:
$$v_j = \sum_{k,i} v_kQ^k_iP^i_j.$$
But $\sum_i Q^k_iP^i_j$ is the k, j entry of the product of the matrices QP. Since the only expression for $v_j$ in the basis S is $v_j$ itself, then QP fixes each $v_j$. As a result, each $v_j$ is an eigenvector for QP with eigenvalue 1, so QP is the identity.
The matrix P is called a change of basis matrix. There is a quick and dirty trick to obtain it: Look at the formula above relating the new basis vectors v1,v2,…vn to the old ones u1,u2,…,un. In particular focus on v1 for which
$$v_1 = \begin{pmatrix}u_1 & u_2 & \cdots & u_n\end{pmatrix}\begin{pmatrix}P^1_1\\ P^2_1\\ \vdots\\ P^n_1\end{pmatrix}.$$
This says that the first column of the change of basis matrix P is really just
the components of the vector v1 in the basis u1, u2, . . . , un.
Example Suppose the vectors v1 and v2 form a basis for a vector space V and with respect to some other basis u1, u2 have, respectively, components
$$\begin{pmatrix}\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}\end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix}\frac{1}{\sqrt3}\\ -\frac{1}{\sqrt3}\end{pmatrix}.$$
What is the change of basis matrix P from the old basis u1, u2 to the new basis v1, v2?
Before answering note that the above statements mean
$$v_1 = \begin{pmatrix}u_1 & u_2\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}\end{pmatrix} = \frac{u_1+u_2}{\sqrt2} \qquad\text{and}\qquad v_2 = \begin{pmatrix}u_1 & u_2\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt3}\\ -\frac{1}{\sqrt3}\end{pmatrix} = \frac{u_1-u_2}{\sqrt3}.$$
The change of basis matrix has as its columns just the components of v1 and v2, so is just
$$P = \begin{pmatrix}\frac{1}{\sqrt2} & \frac{1}{\sqrt3}\\ \frac{1}{\sqrt2} & -\frac{1}{\sqrt3}\end{pmatrix}.$$
Changing basis changes the matrix of a linear transformation. However, as a map between vector spaces, the linear transformation is the same no matter which basis we use. Linear transformations are the actual objects of study of this course, not matrices; matrices are merely a convenient way of doing computations.
Worked Change of Basis Example
Let's now apply this to our eigenvector problem. To wit, suppose L: V → V has matrix $M = (M^i_j)$ in the basis T = {u1, . . . , un}, so
$$L(u_i) = \sum_k u_kM^k_i.$$
Now, let S = {v1, . . . , vn} be a basis of eigenvectors for L, with eigenvalues λ1, . . . , λn. Then
$$L(v_i) = \lambda_iv_i = \sum_k v_kD^k_i,$$
where D is the diagonal matrix whose diagonal entries $D^k_k$ are the eigenvalues λk; i.e.,
$$D = \begin{pmatrix}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n\end{pmatrix}.$$
Let P be the change of basis matrix
from the basis T to the basis S. Then:
$$L(v_j) = L\Big(\sum_i u_iP^i_j\Big) = \sum_i L(u_i)P^i_j = \sum_{i,k} u_kM^k_iP^i_j.$$
Meanwhile, we have:
$$L(v_j) = \sum_k v_kD^k_j = \sum_{k,i} u_iP^i_kD^k_j.$$
Since the expression for a vector in a basis is unique, then we see that the entries of MP are the same as the entries of PD. In other words, we see that
MP = PD or D = P−1MP. This motivates the following definition:
Definition A matrix M is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that
D = P−1MP. We can summarize as follows:
Change of basis multiplies vectors by the change of basis matrix P , to give vectors in the new basis.
To get the matrix of a linear transformation in the new basis, we con- jugate the matrix of L by the change of basis matrix: M → P−1MP.
If for two matrices N and M there exists an invertible matrix P such that M = P−1NP, then we say that M and N are similar. Then the above discussion shows that diagonalizable matrices are similar to diagonal matrices.
Corollary 20.2. A square matrix M is diagonalizable if and only if there exists a basis of eigenvectors for M. Moreover, these eigenvectors are the columns of the change of basis matrix P which diagonalizes M.
Reading homework: problem 20.3
Example Let’s try to diagonalize the matrix
$$M = \begin{pmatrix}-14&-28&-44\\-7&-14&-23\\9&18&29\end{pmatrix}.$$
The eigenvalues of M are determined by
$$\det(M - \lambda I) = -\lambda^3 + \lambda^2 + 2\lambda = 0.$$
So the eigenvalues of M are −1, 0, and 2, and associated eigenvectors turn out to be
$$v_1 = \begin{pmatrix}-8\\-1\\3\end{pmatrix},\quad v_2 = \begin{pmatrix}-2\\1\\0\end{pmatrix},\quad\text{and}\quad v_3 = \begin{pmatrix}-1\\-1\\1\end{pmatrix}.$$
In order for M to be diagonalizable, we need the vectors v1, v2, v3 to be linearly independent. Notice that the matrix
$$P = \begin{pmatrix}v_1 & v_2 & v_3\end{pmatrix} = \begin{pmatrix}-8&-2&-1\\-1&1&-1\\3&0&1\end{pmatrix}$$
is invertible because its determinant is −1. Therefore, the eigenvectors of M form a basis of R³, and so M is diagonalizable. Moreover, the matrix P of eigenvectors is a change of basis matrix which diagonalizes M:
$$P^{-1}MP = \begin{pmatrix}-1&0&0\\0&0&0\\0&0&2\end{pmatrix}.$$
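As a quick numerical sanity check (a sketch assuming NumPy, not part of the original example), one can verify that the change of basis matrix P built from these eigenvectors really does diagonalize M:

```python
import numpy as np

M = np.array([[-14., -28., -44.],
              [ -7., -14., -23.],
              [  9.,  18.,  29.]])

# Columns are the eigenvectors v1, v2, v3 found above.
P = np.array([[-8., -2., -1.],
              [-1.,  1., -1.],
              [ 3.,  0.,  1.]])

D = np.linalg.inv(P) @ M @ P
print(np.round(D, 6))   # expect diag(-1, 0, 2)
```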
2×2 Example
As a reminder, here is the key result of this Lecture
References
Hefferon, Chapter Three, Section V: Change of Basis Beezer, Chapter E, Section SD
Beezer, Chapter R, Sections MR-CB
Wikipedia:
Change of Basis
Diagonalizable Matrix Similar Matrix
Review Problems
1. Let Pn(t) be the vector space of polynomials of degree n or less, and $\frac{d}{dt}: P_n(t) \to P_{n-1}(t)$ be the derivative operator. Find the matrix of $\frac{d}{dt}$ in the bases {1, t, . . . , tⁿ} for Pn(t) and {1, t, . . . , tⁿ⁻¹} for Pn−1(t).
Recall that the derivative operator is linear from Chapter 7.
2. When writing a matrix for a linear transformation, we have seen that the choice of basis matters. In fact, even the order of the basis matters!
Write all possible reorderings of the standard basis {e1,e2,e3} for R3.
Write each change of basis matrix between the standard basis {e1, e2, e3} and each of its reorderings. Make as many observations as you can about these matrices: what are their entries? Do you notice anything about how many of each type of entry appears in each row and column? What are their determinants? (Note: These matrices are known as permutation matrices.)
Given the linear transformation L(x, y, z) = (2y − z, 3x, 2z + x + y), write the matrix M for L in the standard basis, and two other reorderings of the standard basis. How are these matrices related?
3. When is the 2 × 2 matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ diagonalizable? Include examples in your answer.
4. Show that similarity of matrices is an equivalence relation. (The definition of an equivalence relation is given in Homework 0.)
5. Jordan form
Can the matrix $\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix}$ be diagonalized? Either diagonalize it or explain why this is impossible.
Can the matrix $\begin{pmatrix}\lambda&1&0\\0&\lambda&1\\0&0&\lambda\end{pmatrix}$ be diagonalized? Either diagonalize it or explain why this is impossible.
Can the n × n matrix
$$\begin{pmatrix}\lambda&1&0&\cdots&0&0\\0&\lambda&1&\cdots&0&0\\\vdots&&\ddots&\ddots&&\vdots\\0&0&0&\cdots&\lambda&1\\0&0&0&\cdots&0&\lambda\end{pmatrix}$$
be diagonalized? Either diagonalize it or explain why this is impossible.
Note: It turns out that every matrix is similar to a block matrix whose diagonal blocks look like diagonal matrices or the ones above and whose off-diagonal blocks are all zero. This is called the Jordan form of the matrix, and a (maximal) block that looks like
$$\begin{pmatrix}\lambda&1&0&\cdots&0\\0&\lambda&1&&\\\vdots&&\ddots&\ddots&\\&&&\lambda&1\\0&&\cdots&0&\lambda\end{pmatrix}$$
is called a Jordan n-cell or a Jordan block, where n is the size of the block.
6. Let A and B be commuting matrices (i.e., AB = BA) and suppose that A has an eigenvector v with eigenvalue λ. Show that Bv is also an eigenvector of A with eigenvalue λ. Additionally suppose that A is diagonalizable with distinct eigenvalues. Show that v is also an eigenvector of B, thus showing that A and B can be simultaneously diagonalized (i.e., they have the same eigenvectors).
21 Orthonormal Bases
You may have noticed that we have only rarely used the dot product. That is because many of the results we have obtained do not require a preferred notion of lengths of vectors. Now let us consider the case of Rⁿ where the length of a vector $(x^1, x^2, \ldots, x^n) \in \mathbb{R}^n$ is $\sqrt{(x^1)^2 + (x^2)^2 + \cdots + (x^n)^2}$.
The canonical/standard basis in Rⁿ
$$e_1 = \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},\quad e_2 = \begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix},\quad \ldots,\quad e_n = \begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}$$
has many useful properties.
Each of the standard basis vectors has unit length:
$$\|e_i\| = \sqrt{e_i\cdot e_i} = \sqrt{e_i^Te_i} = 1.$$
The standard basis vectors are orthogonal (in other words, at right angles or perpendicular):
$$e_i\cdot e_j = e_i^Te_j = 0 \quad\text{when } i\neq j.$$
This is summarized by
$$e_i\cdot e_j = \delta_{ij} = \begin{cases}1 & i=j\\ 0 & i\neq j\,,\end{cases}$$
where δij is the Kronecker delta. Notice that the Kronecker delta gives the entries of the identity matrix.
Given column vectors v and w, we have seen that the dot product v w is the same as the matrix multiplication vT w. This is the inner product on Rn. We can also form the outer product vwT , which gives a square matrix.
The outer product on the standard basis vectors is interesting. Set
$$\Pi_1 = e_1e_1^T = \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}\begin{pmatrix}1 & 0 & \cdots & 0\end{pmatrix} = \begin{pmatrix}1 & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & & & \vdots\\ 0 & 0 & \cdots & 0\end{pmatrix}$$
$$\vdots$$
$$\Pi_n = e_ne_n^T = \begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}\begin{pmatrix}0 & 0 & \cdots & 1\end{pmatrix} = \begin{pmatrix}0 & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & & & \vdots\\ 0 & 0 & \cdots & 1\end{pmatrix}$$
In short, $\Pi_i$ is the diagonal square matrix with a 1 in the ith diagonal position and zeros everywhere else.¹⁴
Notice that $\Pi_i\Pi_j = e_ie_i^Te_je_j^T = e_i\delta_{ij}e_j^T$. Then:
$$\Pi_i\Pi_j = \begin{cases}\Pi_i & i = j\\ 0 & i \neq j.\end{cases}$$
Moreover, for a diagonal matrix D with diagonal entries λ1, . . . , λn, we can write
$$D = \lambda_1\Pi_1 + \cdots + \lambda_n\Pi_n.$$
14This is reminiscent of an older notation, where vectors are written in juxtaposition.
This is called a “dyadic tensor”, and is still used in some applications.
Other bases that share these properties should behave in many of the same ways as the standard basis. As such, we will study:
Orthogonal bases {v1, . . . , vn}:
$$v_i\cdot v_j = 0 \ \text{ if } i\neq j.$$
In other words, all vectors in the basis are perpendicular.
Orthonormal bases {u1, . . . , un}:
$$u_i\cdot u_j = \delta_{ij}.$$
In addition to being orthogonal, each vector has unit length.
Suppose T = {u1, . . . , un} is an orthonormal basis for Rn. Since T is a basis, we can write any vector v uniquely as a linear combination of the vectors in T:
$$v = c_1u_1 + \cdots + c_nu_n.$$
Since T is orthonormal, there is a very easy way to find the coefficients of this linear combination. By taking the dot product of v with any of the vectors in T, we get:
$$\begin{aligned}
v\cdot u_i &= c_1u_1\cdot u_i + \cdots + c_iu_i\cdot u_i + \cdots + c_nu_n\cdot u_i\\
&= c_1\cdot 0 + \cdots + c_i\cdot 1 + \cdots + c_n\cdot 0\\
&= c_i,\\
\Rightarrow\ c_i &= v\cdot u_i\\
\Rightarrow\ v &= (v\cdot u_1)u_1 + \cdots + (v\cdot u_n)u_n = \sum_i (v\cdot u_i)\,u_i.
\end{aligned}$$
This proves the theorem:
Theorem 21.1. For an orthonormal basis {u1, . . . , un}, any vector v can be expressed as
$$v = \sum_i (v\cdot u_i)\,u_i.$$
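A minimal numerical sketch of Theorem 21.1 (assuming NumPy; not part of the original text): for an orthonormal basis of R², the coefficients of any vector are just dot products, and they reassemble the vector exactly.

```python
import numpy as np

# An orthonormal basis for R^2 (the standard basis rotated by 0.3 radians).
u1 = np.array([np.cos(0.3),  np.sin(0.3)])
u2 = np.array([-np.sin(0.3), np.cos(0.3)])

v = np.array([2.0, -1.0])

# Coefficients are dot products; summing the projections recovers v.
c1, c2 = v @ u1, v @ u2
assert np.allclose(v, c1 * u1 + c2 * u2)
print(c1, c2)
```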
Reading homework: problem 21.1
All orthonormal bases for R2
21.1 Relating Orthonormal Bases
Suppose T = {u1,…,un} and R = {w1,…,wn} are two orthonormal bases for Rn. Then:
$$\begin{aligned}w_1 &= (w_1\cdot u_1)u_1 + \cdots + (w_1\cdot u_n)u_n\\ &\;\;\vdots\\ w_n &= (w_n\cdot u_1)u_1 + \cdots + (w_n\cdot u_n)u_n\end{aligned}\qquad\Rightarrow\qquad w_i = \sum_j u_j(u_j\cdot w_i).$$
As such, the matrix for the change of basis from T to R is given by
$$P = (P_{ij}) = (u_j\cdot w_i).$$
Consider the product P P T in this case.
$$\begin{aligned}(PP^T)_{jk} &= \sum_i (u_j\cdot w_i)(w_i\cdot u_k)\\ &= \sum_i (u_j^Tw_i)(w_i^Tu_k)\\ &= u_j^T\Big(\sum_i w_iw_i^T\Big)u_k\\ &= u_j^TI_nu_k \qquad(*)\\ &= u_j^Tu_k = \delta_{jk}.\end{aligned}$$
The equality (∗) is explained below. So assuming (∗) holds, we have shown that $PP^T = I_n$, which implies that
$$P^T = P^{-1}.$$
The equality in the line (∗) says that $\sum_i w_iw_i^T = I_n$. To see this, we examine $\big(\sum_i w_iw_i^T\big)v$ for an arbitrary vector v. We can find constants $c_j$ such that $v = \sum_j c_jw_j$, so that:
$$\begin{aligned}\Big(\sum_i w_iw_i^T\Big)v &= \Big(\sum_i w_iw_i^T\Big)\Big(\sum_j c_jw_j\Big)\\ &= \sum_j c_j\sum_i w_iw_i^Tw_j\\ &= \sum_j c_j\sum_i w_i\delta_{ij}\\ &= \sum_j c_jw_j \qquad\text{since all terms with } i\neq j \text{ vanish}\\ &= v.\end{aligned}$$
Then, as a linear transformation, $\sum_i w_iw_i^T = I_n$ fixes every vector, and thus must be the identity $I_n$.
Definition A matrix P is orthogonal if P −1 = P T .
Then to summarize,
Theorem 21.2. A change of basis matrix P relating two orthonormal bases is an orthogonal matrix. I.e.,
P−1 =PT.
Reading homework: problem 21.2
Example Consider R3 with the orthonormal basis
$$S = \left\{\; u_1 = \begin{pmatrix}\frac{2}{\sqrt6}\\ \frac{1}{\sqrt6}\\ \frac{-1}{\sqrt6}\end{pmatrix},\ u_2 = \begin{pmatrix}0\\ \frac{1}{\sqrt2}\\ \frac{1}{\sqrt2}\end{pmatrix},\ u_3 = \begin{pmatrix}\frac{1}{\sqrt3}\\ \frac{-1}{\sqrt3}\\ \frac{1}{\sqrt3}\end{pmatrix}\;\right\}.$$
Let R be the standard basis {e1,e2,e3}. Since we are changing from the standard basis to a new basis, then the columns of the change of basis matrix are exactly the images of the standard basis vectors. Then the change of basis matrix from R to S is
177
given by:
$$P = (P_{ij}) = \begin{pmatrix}e_1\cdot u_1 & e_1\cdot u_2 & e_1\cdot u_3\\ e_2\cdot u_1 & e_2\cdot u_2 & e_2\cdot u_3\\ e_3\cdot u_1 & e_3\cdot u_2 & e_3\cdot u_3\end{pmatrix} = \begin{pmatrix}u_1 & u_2 & u_3\end{pmatrix} = \begin{pmatrix}\frac{2}{\sqrt6} & 0 & \frac{1}{\sqrt3}\\ \frac{1}{\sqrt6} & \frac{1}{\sqrt2} & \frac{-1}{\sqrt3}\\ \frac{-1}{\sqrt6} & \frac{1}{\sqrt2} & \frac{1}{\sqrt3}\end{pmatrix}.$$
From our theorem, we observe that:
$$P^{-1} = P^T = \begin{pmatrix}u_1^T\\ u_2^T\\ u_3^T\end{pmatrix} = \begin{pmatrix}\frac{2}{\sqrt6} & \frac{1}{\sqrt6} & \frac{-1}{\sqrt6}\\ 0 & \frac{1}{\sqrt2} & \frac{1}{\sqrt2}\\ \frac{1}{\sqrt3} & \frac{-1}{\sqrt3} & \frac{1}{\sqrt3}\end{pmatrix}.$$
We can check that $P^TP = I$ by a lengthy computation, or more simply, notice that
$$(P^TP)_{ij} = \begin{pmatrix}u_1^T\\ u_2^T\\ u_3^T\end{pmatrix}\begin{pmatrix}u_1 & u_2 & u_3\end{pmatrix} = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}.$$
We are using orthonormality of the ui for the matrix multiplication above. It is very important to realize that the columns of an orthogonal matrix are made from an orthonormal set of vectors.
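The "lengthy computation" can also be delegated to a few lines of Python (a sketch assuming NumPy, not from the original text), which confirms that P built from this orthonormal basis satisfies PᵀP = I:

```python
import numpy as np

s6, s2, s3 = np.sqrt(6), np.sqrt(2), np.sqrt(3)
# Columns are u1, u2, u3 from the example.
P = np.array([[ 2/s6, 0.0,   1/s3],
              [ 1/s6, 1/s2, -1/s3],
              [-1/s6, 1/s2,  1/s3]])

print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal, so P^{-1} = P^T
```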
Orthonormal Change of Basis and Diagonal Matrices. Suppose D is a diagonal matrix, and we use an orthogonal matrix P to change to a new basis. Then the matrix M of D in the new basis is:
M = P DP −1 = P DP T .
Now we calculate the transpose of M.
$$M^T = (PDP^T)^T = (P^T)^TD^TP^T = PDP^T = M.$$
So we see the matrix $PDP^T$ is symmetric!
References
Hefferon, Chapter Three, Section V: Change of Basis Beezer, Chapter V, Section O, Subsection N
Beezer, Chapter VS, Section B, Subsection OBC Wikipedia:
Orthogonal Matrix
Diagonalizable Matrix Similar Matrix
Review Problems
1. Let $D = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$.
(a) Write D in terms of the vectors e1 and e2, and their transposes.
(b) Suppose $P = \begin{pmatrix}a & b\\ c & d\end{pmatrix}$ is invertible. Show that D is similar to
$$M = \frac{1}{ad-bc}\begin{pmatrix}\lambda_1ad - \lambda_2bc & -(\lambda_1-\lambda_2)ab\\ (\lambda_1-\lambda_2)cd & -\lambda_1bc + \lambda_2ad\end{pmatrix}.$$
(c) Suppose the vectors $\begin{pmatrix}a & b\end{pmatrix}$ and $\begin{pmatrix}c & d\end{pmatrix}$ are orthogonal. What can you say about M in this case? (Hint: think about what $M^T$ is equal to.)
2. Suppose S = {v1,…,vn} is an orthogonal (not orthonormal) basis for Rn. Then we can write any vector v as v = i civi for some constants ci. Find a formula for the constants ci in terms of v and the vectors in S.
Hint for 2
3. Let u, v be independent vectors in R3, and P = span{u, v} be the plane
spanned by u and v.
(a) Is the vector $v^\perp = v - \frac{u\cdot v}{u\cdot u}\,u$ in the plane P?
(b) What is the angle between v⊥ and u?
(c) Given your solution to the above, how can you find a third vector perpendicular to both u and v⊥?
(d) Construct an orthonormal basis for R3 from u and v.
(e) Test your abstract formulae starting with
$$u = \begin{pmatrix}1 & 2 & 0\end{pmatrix} \quad\text{and}\quad v = \begin{pmatrix}0 & 1 & 1\end{pmatrix}.$$
Hint for 3
22 Gram-Schmidt and Orthogonal Comple- ments
Given a vector u and some other vector v not in the span of u, we can construct a new vector:
$$v^\perp = v - \frac{u\cdot v}{u\cdot u}\,u.$$
[Figure: v is decomposed into $v^\parallel = \frac{u\cdot v}{u\cdot u}u$ along u and $v^\perp$ perpendicular to u.]
This new vector $v^\perp$ is orthogonal to u because
$$u\cdot v^\perp = u\cdot v - \frac{u\cdot v}{u\cdot u}\,u\cdot u = 0.$$
Hence, $\{u, v^\perp\}$ is an orthogonal basis for span{u, v}. When v is not parallel to u, $v^\perp \neq 0$, and normalizing these vectors we obtain $\frac{u}{|u|}, \frac{v^\perp}{|v^\perp|}$, an orthonormal basis.
Sometimes we write $v = v^\perp + v^\parallel$ where:
$$v^\perp = v - \frac{u\cdot v}{u\cdot u}\,u, \qquad v^\parallel = \frac{u\cdot v}{u\cdot u}\,u.$$
This is called an orthogonal decomposition because we have decomposed v into a sum of orthogonal vectors. It is significant that we wrote this decomposition with u in mind; $v^\parallel$ is parallel to u.
If u, v are linearly independent vectors in R³, then the set $\{u, v^\perp, u\times v^\perp\}$ would be an orthogonal basis for R³. This set could then be normalized by dividing each vector by its length to obtain an orthonormal basis.
However, it often occurs that we are interested in vector spaces with di- mension greater than 3, and must resort to craftier means than cross products to obtain an orthogonal basis. 15
Given a third vector w, we should first check that w does not lie in the span of u and v, i.e., check that u, v and w are linearly independent. We then can define:
$$w^\perp = w - \frac{u\cdot w}{u\cdot u}\,u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,v^\perp.$$
We can check that $u\cdot w^\perp$ and $v^\perp\cdot w^\perp$ are both zero:
$$\begin{aligned}u\cdot w^\perp &= u\cdot\Big(w - \frac{u\cdot w}{u\cdot u}\,u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,v^\perp\Big)\\ &= u\cdot w - \frac{u\cdot w}{u\cdot u}\,u\cdot u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,u\cdot v^\perp\\ &= u\cdot w - u\cdot w - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,u\cdot v^\perp = 0\end{aligned}$$
since u is orthogonal to $v^\perp$, and
$$\begin{aligned}v^\perp\cdot w^\perp &= v^\perp\cdot\Big(w - \frac{u\cdot w}{u\cdot u}\,u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,v^\perp\Big)\\ &= v^\perp\cdot w - \frac{u\cdot w}{u\cdot u}\,v^\perp\cdot u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,v^\perp\cdot v^\perp\\ &= v^\perp\cdot w - \frac{u\cdot w}{u\cdot u}\,v^\perp\cdot u - v^\perp\cdot w = 0\end{aligned}$$
because u is orthogonal to $v^\perp$. Since $w^\perp$ is orthogonal to both u and $v^\perp$, we have that $\{u, v^\perp, w^\perp\}$ is an orthogonal basis for span{u, v, w}.
In fact, given a collection {v1, v2, . . .} of linearly independent vectors, we can produce an orthogonal basis for span{v1, v2, . . .} consisting of the following vectors:
15Actually, given a set T of (n − 1) independent vectors in n-space, one can define an analogue of the cross product that will produce a vector orthogonal to the span of T , using a method exactly analogous to the usual computation for calculating the cross product of two vectors in R3. This only gets us the last orthogonal vector, though; the process in this Section gives a way to get a full orthogonal basis.
$$\begin{aligned}
v_1^\perp &= v_1\\
v_2^\perp &= v_2 - \frac{v_1^\perp\cdot v_2}{v_1^\perp\cdot v_1^\perp}\,v_1^\perp\\
v_3^\perp &= v_3 - \frac{v_1^\perp\cdot v_3}{v_1^\perp\cdot v_1^\perp}\,v_1^\perp - \frac{v_2^\perp\cdot v_3}{v_2^\perp\cdot v_2^\perp}\,v_2^\perp\\
&\;\;\vdots\\
v_i^\perp &= v_i - \sum_{j<i}\frac{v_j^\perp\cdot v_i}{v_j^\perp\cdot v_j^\perp}\,v_j^\perp\\
&\;\;\vdots
\end{aligned}$$
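The procedure above translates directly into a few lines of code. The following Python sketch (the helper name and example vectors are my own, not from the text; NumPy assumed) performs classical Gram–Schmidt on a list of linearly independent vectors:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space as `vectors`.

    Implements v_i_perp = v_i - sum_{j<i} (v_j_perp . v_i)/(v_j_perp . v_j_perp) v_j_perp.
    Assumes the input vectors are linearly independent.
    """
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:
            w = w - (b @ w) / (b @ b) * b   # subtract the projection onto b
        basis.append(w)
    return basis

# Example: an orthogonal, then orthonormal, basis for the span of three vectors in R^3.
ortho = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
orthonormal = [b / np.linalg.norm(b) for b in ortho]
print(np.round(orthonormal, 4))
```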
$$1 \quad x_n \quad (x_n)^2 \quad\cdots\quad (x_n)^{n-1}$$
(Here ∏ stands for a multiple product, just like Σ stands for a multiple sum.)
8. …
9. (a)
$$\det\begin{pmatrix}\lambda & -\tfrac12 & -1\\ -\tfrac12 & \lambda-\tfrac12 & -\tfrac12\\ -1 & -\tfrac12 & \lambda\end{pmatrix} = \lambda\Big[\big(\lambda-\tfrac12\big)\lambda - \tfrac14\Big] + \tfrac12\Big[-\tfrac{\lambda}{2} - \tfrac12\Big] - \Big[\tfrac14 + \lambda - \tfrac12\Big] = \lambda^3 - \tfrac12\lambda^2 - \tfrac32\lambda = \lambda(\lambda+1)\big(\lambda-\tfrac32\big)\,.$$
Hence the eigenvalues are 0, −1, 3/2.
(b) When λ = 0 we must solve the homogeneous system
$$\left(\begin{array}{ccc|c}0 & \tfrac12 & 1 & 0\\ \tfrac12 & \tfrac12 & \tfrac12 & 0\\ 1 & \tfrac12 & 0 & 0\end{array}\right) \sim \left(\begin{array}{ccc|c}1 & 0 & -1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0\end{array}\right).$$
So we find the eigenvector $\begin{pmatrix}s\\-2s\\s\end{pmatrix}$ where s ≠ 0 is arbitrary.
For λ = −1
$$\left(\begin{array}{ccc|c}1 & \tfrac12 & 1 & 0\\ \tfrac12 & \tfrac32 & \tfrac12 & 0\\ 1 & \tfrac12 & 1 & 0\end{array}\right) \sim \left(\begin{array}{ccc|c}1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right).$$
So we find the eigenvector $\begin{pmatrix}-s\\0\\s\end{pmatrix}$ where s ≠ 0 is arbitrary.
Finally, for λ = 3/2
$$\left(\begin{array}{ccc|c}-\tfrac32 & \tfrac12 & 1 & 0\\ \tfrac12 & -1 & \tfrac12 & 0\\ 1 & \tfrac12 & -\tfrac32 & 0\end{array}\right) \sim \left(\begin{array}{ccc|c}1 & 0 & -1 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 0\end{array}\right).$$
So we find the eigenvector $\begin{pmatrix}s\\s\\s\end{pmatrix}$ where s ≠ 0 is arbitrary.
If the mistake X is in the direction of the eigenvector $\begin{pmatrix}1\\-2\\1\end{pmatrix}$, then Y = 0. I.e., the satellite returns to the origin O. For all subsequent orbits it will again return to the origin. NASA would be very pleased in this case.
If the mistake X is in the direction $\begin{pmatrix}-1\\0\\1\end{pmatrix}$, then Y = −X. Hence the satellite will move to the point opposite to X. After the next orbit it will move back to X. It will continue this wobbling motion indefinitely. Since this is a stable situation, again, the elite engineers will pat themselves on the back.
Finally, if the mistake X is in the direction $\begin{pmatrix}1\\1\\1\end{pmatrix}$, the satellite will move to a point Y = (3/2)X which is further away from the origin. The same will happen for all subsequent orbits, with the satellite moving a factor 3/2 further away from O each orbit (in reality, after several orbits, the approximations used by the engineers in their calculations probably fail and a new computation will be needed). In this case, the satellite will be lost in outer space and the engineers will likely lose their jobs!
10. (a) A basis for B³ is
$$\left\{\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right\}.$$
(b) 3.
(c) 2³ = 8.
(d) dim B³ = 3.
(e) Because the vectors {v1, v2, v3} are a basis, any element v ∈ B³ can be written uniquely as v = b1v1 + b2v2 + b3v3 for some triplet of bits $\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix}$. Hence, to compute L(v) we use linearity of L:
$$L(v) = L(b_1v_1 + b_2v_2 + b_3v_3) = b_1L(v_1) + b_2L(v_2) + b_3L(v_3) = \begin{pmatrix}L(v_1) & L(v_2) & L(v_3)\end{pmatrix}\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix}.$$
(f) From the notation of the previous part, we see that we can list linear transformations L: B³ → B by writing out all possible bit-valued row vectors
$$\begin{pmatrix}0&0&0\end{pmatrix},\ \begin{pmatrix}1&0&0\end{pmatrix},\ \begin{pmatrix}0&1&0\end{pmatrix},\ \begin{pmatrix}0&0&1\end{pmatrix},\ \begin{pmatrix}1&1&0\end{pmatrix},\ \begin{pmatrix}1&0&1\end{pmatrix},\ \begin{pmatrix}0&1&1\end{pmatrix},\ \begin{pmatrix}1&1&1\end{pmatrix}.$$
There are 2³ = 8 different linear transformations L: B³ → B, exactly the same as the number of elements in B³.
(g) Yes, essentially just because L1 and L2 are linear transformations. In detail, for any bits (a, b) and vectors (u, v) in B³ it is easy to check the linearity property for (αL1 + βL2):
$$\begin{aligned}(\alpha L_1 + \beta L_2)(au + bv) &= \alpha L_1(au + bv) + \beta L_2(au + bv)\\ &= \alpha aL_1(u) + \alpha bL_1(v) + \beta aL_2(u) + \beta bL_2(v)\\ &= a\big(\alpha L_1(u) + \beta L_2(u)\big) + b\big(\alpha L_1(v) + \beta L_2(v)\big)\\ &= a(\alpha L_1 + \beta L_2)(u) + b(\alpha L_1 + \beta L_2)(v)\,.\end{aligned}$$
Here the first line used the definition of (αL1 + βL2), the second line depended on the linearity of L1 and L2, the third line was just algebra and the fourth used the definition of (αL1 + βL2) again.
(h) Yes. The easiest way to see this is the identification above of these maps with bit-valued column vectors. In that notation, a basis is
$$\begin{pmatrix}1&0&0\end{pmatrix},\ \begin{pmatrix}0&1&0\end{pmatrix},\ \begin{pmatrix}0&0&1\end{pmatrix}.$$
Since this (spanning) set has three (linearly independent) elements, the vector space of linear maps B³ → B has dimension 3. This is an example of a general notion called the dual vector space.
11. …
12. (a)
If we call $M = \begin{pmatrix}a & b\\ b & d\end{pmatrix}$, then $X^TMX = ax^2 + 2bxy + dy^2$. Similarly, putting $C = \begin{pmatrix}c\\ e\end{pmatrix}$ yields $X^TC + C^TX = 2X^TC = 2cx + 2ey$. Thus
$$0 = ax^2 + 2bxy + dy^2 + 2cx + 2ey + f = \begin{pmatrix}x & y\end{pmatrix}\begin{pmatrix}a & b\\ b & d\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + \begin{pmatrix}x & y\end{pmatrix}\begin{pmatrix}c\\ e\end{pmatrix} + \begin{pmatrix}c & e\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + f.$$
(b) Yes, the matrix M is symmetric, so it will have a basis of eigenvectors and is similar to a diagonal matrix of real eigenvalues.
To find the eigenvalues notice that
$$\det\begin{pmatrix}a-\lambda & b\\ b & d-\lambda\end{pmatrix} = (a-\lambda)(d-\lambda) - b^2 = \Big(\lambda - \frac{a+d}{2}\Big)^2 - b^2 - \Big(\frac{a-d}{2}\Big)^2.$$
So the eigenvalues are
$$\lambda = \frac{a+d}{2} + \sqrt{b^2 + \Big(\frac{a-d}{2}\Big)^2} \qquad\text{and}\qquad \mu = \frac{a+d}{2} - \sqrt{b^2 + \Big(\frac{a-d}{2}\Big)^2}.$$
(c) The trick is to write
$$X^TMX + C^TX + X^TC = (X^T + C^TM^{-1})M(X + M^{-1}C) - C^TM^{-1}C\,,$$
so that
$$(X^T + C^TM^{-1})M(X + M^{-1}C) = C^TM^{-1}C - f.$$
Hence $Y = X + M^{-1}C$ and $g = C^TM^{-1}C - f$.
(d) The cosine of the angle between vectors V and W is given by
$$\frac{V\cdot W}{\sqrt{V\cdot V}\,\sqrt{W\cdot W}} = \frac{V^TW}{\sqrt{V^TV\,W^TW}}.$$
So replacing V → PV and W → PW will always give a factor P T P inside all the products, but P T P = I for orthogonal matri- ces. Hence none of the dot products in the above formula changes, so neither does the angle between V and W.
(e) If we take the eigenvectors of M, normalize them (i.e. divide them by their lengths), and put them in a matrix P (as columns) then P will be an orthogonal matrix. (If it happens that λ = μ, then we also need to make sure the eigenvectors spanning the two dimensional eigenspace corresponding to λ are orthogonal.) Then, since M times the eigenvectors yields just the eigenvectors back again multiplied by their eigenvalues, it follows that MP = PD where D is the diagonal matrix made from eigenvalues.
(f) If Y = PZ, then $Y^TMY = Z^TP^TMPZ = Z^TP^TPDZ = Z^TDZ$ where $D = \begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}$.
(g) Using part (f) and (c) we have
$$\lambda z^2 + \mu w^2 = g\,.$$
(h) When λ = μ and g/λ = R², we get the equation for a circle of radius R in the (z, w)-plane. When λ, μ and g are positive, we have the equation for an ellipse. Vanishing g along with λ and μ of opposite signs gives a pair of straight lines. When g is non-vanishing, but λ and μ have opposite signs, the result is a pair of hyperbolæ. These shapes all come from cutting a cone with a plane, and are therefore called conic sections.
13. We show that L is bijective if and only if M is invertible. (a) We suppose that L is bijective.
i. Since L is injective, its kernel consists of the zero vector alone. Hence
null L = dim ker L = 0.
So by the Dimension Formula,
dim V = null L + rank L = rank L.
Since L is surjective, L(V) = W. Thus
rank L = dim L(V) = dim W.
Thereby
dim V = rank L = dim W.
ii. Since dimV = dimW, the matrix M is square so we can talk about its eigenvalues. Since L is injective, its kernel is the zero vector alone. That is, the only solution to LX = 0 is X = 0V . But LX is the same as MX, so the only solution to MX = 0 is X = 0V . So M does not have zero as an eigenvalue.
iii. Since MX = 0 has no non-zero solutions, the matrix M is invertible.
(b) Now we suppose that M is an invertible matrix.
i. Since M is invertible, the system MX = 0 has no non-zero solutions. But LX is the same as MX, so the only solution to LX = 0 is X = 0V . So L does not have zero as an eigenvalue.
ii. Since LX = 0 has no non-zero solutions, the kernel of L is the zero vector alone. So L is injective.
iii. Since M is invertible, we must have that dim V = dim W . By the Dimension Formula, we have
dim V = null L + rank L,
and since ker L = {0_V} we have null L = dim ker L = 0, so
dim W = dim V = rank L = dim L(V).
Since L(V) is a subspace of W with the same dimension as W, it must be equal to W. To see why, pick a basis B of L(V). Each element of B is a vector in W, so the elements of B form a linearly independent set in W. Therefore B is a basis of W, since the size of B is equal to dim W. So L(V) = span B = W. So L is surjective.
14. (a) F4 = F2 + F3 = 2 + 3 = 5.
(b) The number of pairs of doves in any given year equals the number in the previous year plus those that hatch, and there are as many of them as pairs of doves in the year before the previous year.
(c)
$$X_1 = \begin{pmatrix}F_1\\F_0\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix} \quad\text{and}\quad X_2 = \begin{pmatrix}F_2\\F_1\end{pmatrix} = \begin{pmatrix}1\\1\end{pmatrix}, \qquad MX_1 = \begin{pmatrix}1&1\\1&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}1\\1\end{pmatrix} = X_2.$$
(d) We just need to use the recursion relationship of part (b) in the top slot of $X_{n+1}$:
$$X_{n+1} = \begin{pmatrix}F_{n+1}\\F_n\end{pmatrix} = \begin{pmatrix}F_n + F_{n-1}\\F_n\end{pmatrix} = \begin{pmatrix}1&1\\1&0\end{pmatrix}\begin{pmatrix}F_n\\F_{n-1}\end{pmatrix} = MX_n.$$
(e) Notice M is symmetric so this is guaranteed to work. Here
$$\det\begin{pmatrix}1-\lambda & 1\\ 1 & -\lambda\end{pmatrix} = \lambda(\lambda - 1) - 1 = \Big(\lambda - \tfrac12\Big)^2 - \tfrac54,$$
so the eigenvalues are $\frac{1\pm\sqrt5}{2}$. Hence the eigenvectors are $\begin{pmatrix}\frac{1\pm\sqrt5}{2}\\ 1\end{pmatrix}$, respectively (notice that $\frac{1+\sqrt5}{2} + 1 = \frac{1+\sqrt5}{2}\cdot\frac{1+\sqrt5}{2}$ and $\frac{1-\sqrt5}{2} + 1 = \frac{1-\sqrt5}{2}\cdot\frac{1-\sqrt5}{2}$). Thus $M = PDP^{-1}$ with
$$D = \begin{pmatrix}\frac{1+\sqrt5}{2} & 0\\ 0 & \frac{1-\sqrt5}{2}\end{pmatrix} \quad\text{and}\quad P = \begin{pmatrix}\frac{1+\sqrt5}{2} & \frac{1-\sqrt5}{2}\\ 1 & 1\end{pmatrix}.$$
(f) $M^n = (PDP^{-1})^n = PDP^{-1}PDP^{-1}\cdots PDP^{-1} = PD^nP^{-1}$.
(g) Just use the matrix recursion relation of part (d) repeatedly:
$$X_{n+1} = MX_n = M^2X_{n-1} = \cdots = M^nX_1.$$
(h) The eigenvalues are $\varphi = \frac{1+\sqrt5}{2}$ and $1-\varphi = \frac{1-\sqrt5}{2}$.
(i)
$$X_{n+1} = \begin{pmatrix}F_{n+1}\\F_n\end{pmatrix} = M^nX_1 = PD^nP^{-1}X_1 = P\begin{pmatrix}\varphi & 0\\ 0 & 1-\varphi\end{pmatrix}^n\begin{pmatrix}\frac{1}{\sqrt5} & \star\\ -\frac{1}{\sqrt5} & \star\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} = P\begin{pmatrix}\varphi^n & 0\\ 0 & (1-\varphi)^n\end{pmatrix}\begin{pmatrix}\frac{1}{\sqrt5}\\ -\frac{1}{\sqrt5}\end{pmatrix}$$
$$= \begin{pmatrix}\frac{1+\sqrt5}{2} & \frac{1-\sqrt5}{2}\\ 1 & 1\end{pmatrix}\begin{pmatrix}\frac{\varphi^n}{\sqrt5}\\ -\frac{(1-\varphi)^n}{\sqrt5}\end{pmatrix} = \begin{pmatrix}\star\\ \frac{\varphi^n - (1-\varphi)^n}{\sqrt5}\end{pmatrix}.$$
Hence
$$F_n = \frac{\varphi^n - (1-\varphi)^n}{\sqrt5}\,.$$
These are the famous Fibonacci numbers.
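A short Python sketch (assuming NumPy; not part of the original solution) comparing the matrix recursion with the closed formula derived above:

```python
import numpy as np

M = np.array([[1, 1],
              [1, 0]])

phi = (1 + np.sqrt(5)) / 2

X = np.array([1, 0])                   # X_1 = (F_1, F_0) = (1, 0)
for n in range(1, 11):
    F_n = X[0]                         # the top slot of X_n is F_n
    closed = (phi**n - (1 - phi)**n) / np.sqrt(5)
    print(n, F_n, round(closed))       # the two columns agree
    X = M @ X                          # X_{n+1} = M X_n
```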
15. Call the three vectors u, v and w, respectively. Then
$$v^\perp = v - \frac{u\cdot v}{u\cdot u}\,u \qquad\text{and}\qquad w^\perp = w - \frac{u\cdot w}{u\cdot u}\,u - \frac{v^\perp\cdot w}{v^\perp\cdot v^\perp}\,v^\perp.$$
Dividing by lengths, an orthonormal basis for span{u, v, w} is obtained by normalizing the resulting non-zero orthogonal vectors.
16. …
17. …
18. We show that L is bijective if and only if M is invertible.
(a) We suppose that L is bijective.
i. Since L is injective, its kernel consists of the zero vector alone.
So
null L = dim ker L = 0.
So by the Dimension Formula,
dim V = null L + rank L = rank L.
Since L is surjective, L(V) = W. So
rank L = dim L(V) = dim W.
So
dim V = rank L = dim W.
ii. Since dimV = dimW, the matrix M is square so we can talk about its eigenvalues. Since L is injective, its kernel is the zero vector alone. That is, the only solution to LX = 0 is X = 0V . But LX is the same as MX, so the only solution to MX = 0 is X = 0V . So M does not have zero as an eigenvalue.
iii. Since MX = 0 has no non-zero solutions, the matrix M is invertible.
(b) Now we suppose that M is an invertible matrix.
i. Since M is invertible, the system MX = 0 has no non-zero solutions. But LX is the same as MX, so the only solution to LX = 0 is X = 0V . So L does not have zero as an eigenvalue.
ii. Since LX = 0 has no non-zero solutions, the kernel of L is the zero vector alone. So L is injective.
iii. Since M is invertible, we must have that dim V = dim W . By the Dimension Formula, we have
dim V = null L + rank L,
and since ker L = {0_V} we have null L = dim ker L = 0, so
dim W = dim V = rank L = dim L(V).
Since L(V) is a subspace of W with the same dimension as W, it must be equal to W. To see why, pick a basis B of L(V). Each element of B is a vector in W, so the elements of B form a linearly independent set in W. Therefore B is a basis of W, since the size of B is equal to dim W. So L(V) = span B = W. So L is surjective.
19. …
D Points Vs. Vectors
This is an expanded explanation of this remark. People might use the terms point and vector in Rⁿ interchangeably, however these are not quite the same concept. There is a notion of a point in Rⁿ representing a vector, and while we can do this in a purely formal (mathematical) sense, we really cannot add two points together (there is the related notion of doing this using convex combinations, but that is for a different course) or scale a point. We can "subtract" two points, which gives us the vector between them, as done when describing the choice of origin; thus if we take any point P, we can represent it as a vector (based at the origin O) by taking v = P − O. Naturally (as we should be able to) we can add vectors to points and get a point back.
To make all of this mathematically (and computationally) rigorous, we "lift" Rⁿ up to Rⁿ⁺¹ (sometimes written as Rn) by stating that all tuples p = (p1, p2, . . . , pn, 1) ∈ Rⁿ⁺¹ correspond to a point p ∈ Rⁿ and v = (v1, . . . , vn, 0) ∈ Rⁿ⁺¹ correspond to a vector v ∈ Rⁿ. Note that if the last coordinate w is not 0 or 1, then it does not carry meaning in terms of Rⁿ but just exists in a formal sense. However we can project it down to a point by scaling by 1/w, and this concept is highly used in rendering computer graphics.
We also do a similar procedure for all matrices acting on Rⁿ by the following. Let A be a k × n matrix; then when we lift, we get the following (k + 1) × (n + 1) matrix
$$\tilde A = \begin{pmatrix}A & 0\\ 0 & 1\end{pmatrix}.$$
Note that we keep the last coordinate fixed, so we move points to points and vectors to vectors. We can also act on Rⁿ in a somewhat non-linear fashion by taking matrices of the form
$$\begin{pmatrix}* & *\\ 0 & 1\end{pmatrix}$$
and this still fixes the last coordinate. For example we can also represent a translation, which is non-linear since it moves the origin, in the direction v = (v1, v2, . . . , vn) by the following matrix
$$T_v = \begin{pmatrix}I_n & v\\ 0 & 1\end{pmatrix},$$
where In is the n × n identity matrix. Note that this is an invertible matrix with determinant 1, and what is more, a translation is what is known as an isometry on Rⁿ (note it is not an isometry on Rⁿ⁺¹), an operator where ∥T_v x∥ = ∥x∥ for all vectors x ∈ Rⁿ.
Good exercises to try are to check that lifting R² to R³ allows us to add, subtract, and scale points and vectors as described and generates nonsense when we can't (i.e. adding two points gives us a 2 in the last coordinate, so it is neither a point nor a vector). Another good exercise is to describe all isometries of R². As a hint, you can get all of them by rotation about the origin, reflection about a single line, and translation.
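Here is a minimal Python sketch (illustrative only; the helper names are my own, not from the text) of the lifting described above: points get a trailing 1, vectors a trailing 0, and a translation becomes an ordinary matrix.

```python
import numpy as np

def lift_point(p):
    """Lift a point of R^n to R^{n+1} by appending a 1."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def lift_vector(v):
    """Lift a vector of R^n to R^{n+1} by appending a 0."""
    return np.append(np.asarray(v, dtype=float), 0.0)

def translation(v):
    """The (n+1) x (n+1) matrix T_v = [[I_n, v], [0, 1]]."""
    n = len(v)
    T = np.eye(n + 1)
    T[:n, n] = v
    return T

P = lift_point([1.0, 2.0])
w = lift_vector([3.0, -1.0])

print(P + w)                           # point + vector is again a point (last entry 1)
print(P + P)                           # last entry 2: neither a point nor a vector
print(translation([3.0, -1.0]) @ P)    # translating the point gives the same result as P + w
```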
E Abstract Concepts
Here we will introduce some abstract concepts which are mentioned or used in this book. This material is more advanced but will be interesting to anybody wanting a deeper understanding of the underlying mathematical structures behind linear algebra. In all cases below, we assume that the given set is closed under the operation(s) introduced.
E.1 Dual Spaces
Definition A bounded operator is a linear operator φ: V → W such that
∥φv∥W ≤ C∥v∥V where C > 0 is a fixed constant.
Let V be a vector space over F; a functional is a function φ: V → F.
Definition The dual space V ∗ of a vector space V is the vector space of all bounded linear functionals on V .
There is a natural basis {Λi} for V∗ given by Λi(ej) = δij, where {ej} is the canonical (standard) basis for V and δij is the Kronecker delta, which is 1 if i = j and 0 otherwise. Concretely, for a finite dimensional vector space V, we can associate V∗ with row vectors wᵀ as functionals by the matrix multiplication wᵀv for vectors v ∈ V. Alternatively we can associate V∗ with vectors in V as functionals by taking the usual dot product. So the basis for V∗ is eᵢᵀ or ⟨eᵢ, v⟩ for vectors v ∈ V.
E.2 Groups
Definition A group is a set G with a single operation · which satisfies the
axioms:
Associativity: (a · b) · c = a · (b · c) for all a, b, c ∈ G.
There exists an identity 1 ∈ G.
There exists an inverse g−1 ∈ G for all g ∈ G.
Groups can be finite or infinite, and notice that not all elements in a group must commute (i.e., the order of multiplication can matter). Here are some examples of groups:
Non-zero real numbers under multiplication.
All real numbers under addition.
All invertible n × n real matrices.
All n × n real matrices of determinant 1.
All permutations of [1, 2, . . . , n] under compositions.
Any vector space under addition.
Note that all real numbers under multiplication is not a group since 0 does
not have an inverse.
E.3 Fields
Definition A field F is a set with two operations + and · that for all a, b, c ∈
F the following axioms are satisfied:
A1. Addition is associative: (a + b) + c = a + (b + c).
A2. There exists an additive identity 0.
A3. Addition is commutative a + b = b + a.
A4. There exists an additive inverse −a.
M1. Multiplication is associative: (a · b) · c = a · (b · c).
M2. There exists a multiplicative identity 1.
M3. Multiplication is commutative a · b = b · a.
M4. There exists a multiplicative inverse a−1 if a ̸= 0.
D. The distributive law holds a · (b + c) = ab + ac.
Roughly, all of the above mean that you have notions of +, −, × and ÷ just as for regular real numbers.
Fields are a very beautiful structure; some examples are Q, R, and C. We note that every one of the these examples are infinite, however this does not necessarily have to be the case. Let q ≥ 0 and let Zq be the set of remainders
of Z (the set of integers) by dividing by q. We say Zq is the set of all a modulo q, or a mod q for short, or a ≡ q, where a ∈ Z, and we define addition and multiplication to be their usual counterparts in Z except we take the result mod q. So for example we have Z2 = {0, 1} where 1 + 1 = 2 ≡ 0 (these are exactly the bits used in bit matrices) and Z3 = {0, 1, 2} with 1 + 1 = 2, 2 · 2 = 4 ≡ 1. Now if p is a prime number, then Zp is a field (often written as Zp). Clearly Z2 is a field, and from above for Z3 we have 2⁻¹ = 2, so Z3 is also a field. For Z5 we have 2⁻¹ = 3 since 2 · 3 = 6 ≡ 1 and 4⁻¹ = 4 since 4 · 4 = 16 ≡ 1. Often when q = pⁿ where p is a prime, then people will write Fq to reinforce that it is a field.
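A tiny Python sketch (not from the text) that finds multiplicative inverses in Zq by brute force; it shows that every non-zero element of Z5 has an inverse, while 2 has none in Z4:

```python
def inverses(q):
    """Map each a in Z_q (a != 0) to its multiplicative inverse, or None if it has none."""
    return {a: next((b for b in range(q) if (a * b) % q == 1), None)
            for a in range(1, q)}

print(inverses(5))   # {1: 1, 2: 3, 3: 2, 4: 4}  -- Z_5 is a field
print(inverses(4))   # {1: 1, 2: None, 3: 3}     -- Z_4 is not a field
```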
E.4 Rings
However Z4 is not a field since 2 · 2 = 4 ≡ 0 and 2 · 3 = 6 ≡ 2. Similarly Z is not a field since 2 does not have a multiplicative inverse. These are known as rings. For rings all of the addition axioms hold, but none of the multiplicative ones must.
Definition A ring R is a set with two operations + and · that for all a, b, c ∈ R the following axioms are satisfied:
A1. Addition is associative: (a + b) + c = a + (b + c).
A2. There exists an additive identity 0.
A3. Addition is commutative a + b = b + a.
A4. There exists an additive inverse −a.
D. The distributive law holds a·(b+c) = a·b+a·c and (a+b)·c = a·c+b·c.
Note that when we have axiom M3, then the two equations in axiom D are equivalent.
Clearly all fields are rings, but rings in general are not nearly as nice (for example, in Z4 two things can be multiplied together to give you 0). An important example of a ring is F[x], which is the ring of all polynomials in one variable x with coefficients in a field F. Recall that you can do everything you want in a field except divide polynomials, but if you take the modulus with respect to a polynomial which is not a product of two smaller polynomials, you can get a field. We call such polynomials irreducible. In other words,
you take a polynomial p and you set p ≡ 0, thus this is just making sure you don’t have ab ≡ 0. For example, the polynomial p(x) = x2 + 1 cannot be factored over R (i.e. with real coefficients), so what you get is actually the same field as C since we have x2 + 1 = 0 or perhaps more suggestively x2 = −1. This is what is known as a field extension; these are the central objects in Galois theory and are denoted F(α) where α is a root of p.
One final definition: We say that a field F has characteristic p if $\sum_{i=1}^{p} 1 \equiv 0$ (i.e. we sum 1 together p times and return to 0). For example Z3 has characteristic 3 since 1 + 1 + 1 ≡ 0, and in general Zp has characteristic p.
A good exercise is to find an irreducible degree 2 polynomial p in Z2[x], and check that the field extension Z2(α) has 4 elements and has characteris- tic 2 (hence it is not actually Z4).
E.5 Algebras
Definition An algebra A is a vector space over F with the operation · such
that for all u, v, w ∈ A and α, β ∈ F, we have
D. Thedistributivelawholdsu·(v+w)=u·v+u·wand(u+v)·w=
u · w + v · w.
S. We have (αv) · (βw) = (αβ)(v · w).
Essentially an algebra is a ring that is also a vector space over some field. Or in simpler words, an algebra is a vector space where you can multiply vectors.
For example, the set of all n × n real matrices Mn(R) is a ring, but we can also let scalars in R act on these matrices in their usual way, making it an algebra. Another algebra is obtained by taking Mn(R) but with scalars in C, just formally declaring iM to be another element of the algebra. Another example is R³ where multiplication is the cross-product ×. We note that this is not associative nor commutative under × and that v × v = 0 (so there are in fact no multiplicative inverses), and there is no multiplicative identity. Lastly, recall that sln defined here is an algebra under [ , ].
F Sine and Cosine as an Orthonormal Basis
Definition Let Ω ⊆ Rⁿ for some n. Let $L^p_0(\Omega)$ denote the space of all continuous functions f: Ω → R (or C) such that if p < ∞, then
$$\Big(\int_\Omega |f(x)|^p\,dx\Big)^{1/p} < \infty,$$
otherwise |f(x)| < M for some fixed M and all x ∈ Ω.
Note that this is a vector space over R (or C) under addition (in fact it is an
algebra under pointwise multiplication) with norm (the length of the vector)
$$\|f\|_p = \Big(\int_\Omega |f(x)|^p\,dx\Big)^{1/p}.$$
For example, the space $L^1_0(\mathbb{R})$ is all absolutely integrable functions. However note that not every differentiable function is contained in $L^p_0(\Omega)$; for example we have
$$\int_{\mathbb{R}^+} |1|^p\,dx = \int_0^\infty dx = \lim_{x\to\infty} x = \infty.$$
In particular, we can take S¹, the unit circle in R², and to turn this into a valid integral, take Ω = [0, 2π) and take functions f: [0, 2π] → R such that f(0) = f(2π) (or more generally a periodic function f: R → R where f(x) = f(x + 2πn) for all n ∈ Z). Additionally we can define an inner product on H = L²₀(S¹) by taking
$$\langle f, g\rangle = \int_0^{2\pi} f(x)g(x)\,dx,$$
and note that ⟨f, f⟩ = ∥f∥₂². So the natural question to ask is what is a good basis for H? The answer is sin(nx) and cos(nx) for all n ∈ Z≥0, and in fact, they are orthogonal. First note that
$$\langle\sin(mx),\sin(nx)\rangle = \int_0^{2\pi}\sin(mx)\sin(nx)\,dx = \int_0^{2\pi}\frac{\cos((m-n)x) - \cos((m+n)x)}{2}\,dx,$$
and if m ≠ n, then we have
$$\langle\sin(mx),\sin(nx)\rangle = \left[\frac{\sin((m-n)x)}{2(m-n)}\right]_0^{2\pi} - \left[\frac{\sin((m+n)x)}{2(m+n)}\right]_0^{2\pi} = 0.$$
However if m = n, then we have
$$\langle\sin(mx),\sin(mx)\rangle = \int_0^{2\pi}\frac{1 - \cos(2mx)}{2}\,dx = \pi,$$
so $\|\sin(mx)\|^2 = \pi$, and similarly we have $\|\cos(mx)\|^2 = \pi$. Finally we have
$$\langle\sin(mx),\cos(nx)\rangle = \int_0^{2\pi}\sin(mx)\cos(nx)\,dx = \int_0^{2\pi}\frac{\sin((m+n)x) + \sin((m-n)x)}{2}\,dx$$
$$= \left[-\frac{\cos((m+n)x)}{2(m+n)}\right]_0^{2\pi} + \left[-\frac{\cos((m-n)x)}{2(m-n)}\right]_0^{2\pi} = 0.$$
Now it is not immediately apparent that we haven't missed some basis vector, but this is a consequence of the Stone-Weierstrass theorem. Now only appealing to linear algebra, we have that $e^{inx}$ is also a basis for $L^2(S^1)$ (only over C though) since
$$\sin(nx) = \frac{e^{inx} - e^{-inx}}{2i}, \qquad \cos(nx) = \frac{e^{inx} + e^{-inx}}{2}, \qquad e^{inx} = \cos(nx) + i\sin(nx)$$
is a linear change of basis.
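These orthogonality relations are easy to spot-check numerically; the following Python sketch (assuming NumPy; not part of the original appendix) approximates the inner products with a simple Riemann sum:

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 20001)
dx = x[1] - x[0]

def inner(f, g):
    """Approximate <f, g> = integral over [0, 2*pi] of f(x) g(x) dx by a Riemann sum."""
    return np.sum(f(x) * g(x)) * dx

print(inner(lambda t: np.sin(2 * t), lambda t: np.sin(3 * t)))  # approximately 0
print(inner(lambda t: np.sin(2 * t), np.cos))                   # approximately 0
print(inner(lambda t: np.sin(2 * t), lambda t: np.sin(2 * t)))  # approximately pi
```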
G Movie Scripts
The authors welcome your feedback on how useful these movies are for help- ing you learn. We also welcome suggestions for other movie themes. You might even like to try your hand at making your own!
G.1 Introductory Video
Three bears go into a cave, two come out.
Would you go in?
G.2 What is Linear Algebra: Overview
In this course, we start with linear systems
    f1(x1,...,xm) = a1
           .
           .                                           (3)
    fn(x1,...,xm) = an,
and discuss how to solve them.
We end with the problem of finding a least squares fit---find
the line that best fits a given data set:
In equation (3) we have n linear functions called f1, . . . , fn, m unknowns x1,...,xm and n given constants a1,...,an. We need to say what it means for a function to be linear. In one variable, a linear function obeys the linearity property
f (a + b) = f (a) + f (b) . The solution to this is
f(x) = λx,
for some constant λ. The plot of this is just a straight line
through the origin with slope λ
We should also check that our solution obeys the linearity prop- erty. The logic is to start with the left hand side f(a + b) and try to turn it into the right hand side f(a) + f(b) using correct manipulations:
f (a + b) = λ(a + b) = λa + λb = f (a) + f (b) .
The first step here just plugs a+b into f(x), the second is the distributive property, and in the third we recognize that λa = f(a) and λb = f(b). This proves our claim.
For functions of many variables, linearity must hold for every slot. For a linear function of two variables f(x,y) this means
f (a + b, c + d) = f (a, c) + f (b, d) .
We finish with a question. The plot of f(x) = λx+β is a straight
line, but does it obey the linearity property?
G.3 What is Linear Algebra: 3 × 3 Matrix Example
Your friend places a jar on a table and tells you that there is 65 cents in this jar with 7 coins consisting of quarters, nickels, and dimes, and that there are twice as many dimes as quarters. Your friend wants to know how many nickels, dimes, and quarters are in the jar.
We can translate this into a system of the following linear equations:
5n+10d+25q = 65 n+d+q=7
d = 2q
Now we can rewrite the last equation in the form of −d + 2q = 0,
and thus express this problem as the matrix equation
$$\begin{pmatrix}5 & 10 & 25\\ 1 & 1 & 1\\ 0 & -1 & 2\end{pmatrix}\begin{pmatrix}n\\ d\\ q\end{pmatrix} = \begin{pmatrix}65\\ 7\\ 0\end{pmatrix},$$
or as an augmented matrix (see also this script on the notation).
$$\left(\begin{array}{ccc|c}5 & 10 & 25 & 65\\ 1 & 1 & 1 & 7\\ 0 & -1 & 2 & 0\end{array}\right).$$
Now to solve it, using our original set of equations and by sub- stitution, we have
5n + 20q + 25q = 5n + 45q = 65 n + 2q + q = n + 3q = 7
and by subtracting 5 times the bottom equation from the top, we get
45q − 15q = 30q = 65 − 35 = 30
and hence q = 1. Clearly d = 2, and hence n = 7−2−1 = 4. Therefore
there are four nickels, two dimes, and one quarter.
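If you want to confirm the arithmetic by machine, a short check with NumPy (not part of the original script) solves the same matrix equation:

```python
import numpy as np

A = np.array([[5., 10., 25.],
              [1.,  1.,  1.],
              [0., -1.,  2.]])
b = np.array([65., 7., 0.])

print(np.linalg.solve(A, b))   # [4. 2. 1.] -> four nickels, two dimes, one quarter
```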
G.4 What is Linear Algebra: Hint
Looking at the problem statement we find some important informa- tion, first that oranges always have twice as much sugar as ap- ples, and second that the information about the barrel is recorded as (s,f), where s= units of sugar in the barrel and f = number of pieces of fruit in the barrel.
We are asked to find a linear transformation relating this new representation to the one in the lecture, where in the lecture x = the number of apples and y = the number of oranges. This means we must create a system of equations relating the variable x and y to the variables s and f in matrix form. Your answer should be the matrix that transforms one set of variables into the other.
Hint: Let λ represent the amount of sugar in each apple.
1. To find the first equation find a way to relate f to the
variables x and y.
2. To find the second equation, use the hint to figure out how much sugar is in x apples, and y oranges in terms of λ. Then write an equation for s using x, y and λ.
G.5 Gaussian Elimination: Augmented Matrix Nota- tion
Why is the augmented matrix
$$\left(\begin{array}{cc|c}1 & 1 & 27\\ 2 & -1 & 0\end{array}\right),$$
equivalent to the system of equations
x+y = 27 2x−y = 0?
Well the augmented matrix is just a new notation for the matrix equation
$$\begin{pmatrix}1 & 1\\ 2 & -1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}27\\ 0\end{pmatrix}$$
and if you review your matrix multiplication remember that
$$\begin{pmatrix}1 & 1\\ 2 & -1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}x+y\\ 2x-y\end{pmatrix}.$$
This means that
$$\begin{pmatrix}x+y\\ 2x-y\end{pmatrix} = \begin{pmatrix}27\\ 0\end{pmatrix},$$
Which is our original equation.
G.6 Gaussian Elimination: Equivalence of Augmented Matrices
Let's think about what it means for the two augmented matrices
$$\left(\begin{array}{cc|c}1 & 1 & 27\\ 2 & -1 & 0\end{array}\right)$$
and
$$\left(\begin{array}{cc|c}1 & 0 & 9\\ 0 & 1 & 18\end{array}\right)$$
to be equivalent?
They are certainly not equal, because they don’t match in each component, but since these augmented matrices represent a system, we might want to introduce a new kind of equivalence relation.
Well we could look at the system of linear equations this rep- resents
x+y = 27 2x−y = 0?
and notice that the solution is x = 9 and y = 18. The other augmented matrix represents the system
x+0·y = 9 0·x+y = 18?
This clearly has the same solution. The first and second system are related in the sense that their solutions are the same. Notice that it is really nice to have the augmented matrix in the second form, because the matrix multiplication can be done in your head.
G.7 Gaussian Elimination: Hints for Review Ques- tions 4 and 5
The hint for Review Question 4 is simple--just read the lecture on Elementary Row Operations.
Question 5 looks harder than it actually is:
Row equivalence of matrices is an example of an equivalence relation. Recall that a relation ∼ on a set of objects U is an equivalence relation if the following three properties are satisfied:
Reflexive: For any x∈U, we have x∼x.
Symmetric: For any x,y∈U, if x∼y then y∼x.
Transitive: For any x,y and z∈U, if x∼y and y∼z then x∼z.
(For a more complete discussion of equivalence relations, see
Webwork Homework 0, Problem 4)
Show that row equivalence of augmented matrices is an equivalence
relation.
Firstly remember that an equivalence relation is just a more general version of ‘‘equals’’. Here we defined row equivalence for augmented matrices whose linear systems have solutions by the property that their solutions are the same.
So this question is really about the word same. Let's do a silly example: let's replace the set of augmented matrices by the set of people who have hair. We will call two people equivalent if they have the same hair color. There are three properties to check:
Reflexive: This just requires that you have the same hair color as yourself so obviously holds.
Symmetric: If the first person, Bob (say) has the same hair color as a second person Betty(say), then Bob has the same hair color as Betty, so this holds too.
Transitive: If Bob has the same hair color as Betty (say) and Betty has the same color as Brenda (say), then it follows that Bob and Brenda have the same hair color, so the transitive property holds too and we are done.
G.8 Gaussian Elimination: 3 × 3 Example
We’ll start with the matrix from the What is Linear Algebra: 3 × 3
Matrix Example which was
$$\left(\begin{array}{ccc|c}5 & 10 & 25 & 65\\ 1 & 1 & 1 & 7\\ 0 & -1 & 2 & 0\end{array}\right),$$
and recall the solution to the problem was n = 4, d = 2, and q = 1. So as a matrix equation we have
$$\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}n\\ d\\ q\end{pmatrix} = \begin{pmatrix}4\\ 2\\ 1\end{pmatrix}$$
or as an augmented matrix
$$\left(\begin{array}{ccc|c}1 & 0 & 0 & 4\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 1\end{array}\right).$$
Note that often in diagonal matrices people will either omit the zeros or write in a single large zero. Now the first matrix is equivalent to the second matrix and is written as
$$\left(\begin{array}{ccc|c}5 & 10 & 25 & 65\\ 1 & 1 & 1 & 7\\ 0 & -1 & 2 & 0\end{array}\right) \sim \left(\begin{array}{ccc|c}1 & 0 & 0 & 4\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 1\end{array}\right),$$
since they have the same solutions.
G.9 Elementary Row Operations: Example
We have three basic rules:
1. Row Swap
2. Scalar Multiplication
3. Row Sum
Let's look at an example. The system
    3x + y = 7
    x + 2y = 4
is something we learned to solve in high school algebra. Now we can write it in augmented matrix form this way
$$\left(\begin{array}{cc|c}3 & 1 & 7\\ 1 & 2 & 4\end{array}\right).$$
We can see what these operations allow us to do:
1. Row swap allows us to switch the order of rows. In this exam- ple there are only two equations, so I will switch them. This will work with a larger system as well, but you have to decide which equations to switch. So we get
    x + 2y = 4
    3x + y = 7
The augmented matrix looks like
$$\left(\begin{array}{cc|c}1 & 2 & 4\\ 3 & 1 & 7\end{array}\right).$$
Notice that this won’t change the solution of the system, but the augmented matrix will look different. This is where we can say that the original augmented matrix is equivalent to the one with the rows swapped. This will work with a larger system as well, but you have to decide which equations, or rows to switch. Make sure that you don’t forget to switch the entries in the right-most column.
2. Scalar multiplication allows us to multiply both sides of an equation by a non-zero constant. So if we are starting with
    x + 2y = 4
    3x + y = 7
Then we can multiply the first equation by −3 which is a non-zero scalar. This operation will give us
    −3x − 6y = −12
    3x + y = 7
which has a corresponding augmented matrix
$$\left(\begin{array}{cc|c}-3 & -6 & -12\\ 3 & 1 & 7\end{array}\right).$$
Notice that we have multiplied the entire first row by −3, and this changes the augmented matrix, but not the solution of the system. We are not allowed to multiply by zero because it would be like replacing one of the equations with 0 = 0, effectively destroying the information contained in the equation.
3. Row summing allows us to add one equation to another. In our example we could start with
    −3x − 6y = −12
    3x + y = 7
and replace the first equation with the sum of both equations. So we get
    −3x + 3x − 6y + y = −12 + 7
    3x + y = 7,
which after some simplification translates to
$$\left(\begin{array}{cc|c}0 & -5 & -5\\ 3 & 1 & 7\end{array}\right).$$
When using this row operation make sure that you end up with as many equations as you started with. Here we replaced the first equation with a sum, but the second equation remained untouched.
In the example, notice that the x-terms in the first equation disappeared, which makes it much easier to solve for y. Think about what the next steps for solving this system would be using the language of elementary row operations.
G.10 Elementary Row Operations: Worked Examples
Let us consider that we are given two systems of equations that give rise to the following two (augmented) matrices:
$$\left(\begin{array}{cccc|c}2 & 5 & 2 & 0 & 2\\ 1 & 1 & 1 & 0 & 1\\ 1 & 4 & 1 & 0 & 1\end{array}\right) \qquad\text{and}\qquad \left(\begin{array}{cc|c}5 & 2 & 9\\ 0 & 5 & 10\\ 0 & 3 & 6\end{array}\right)$$
and we want to find the solution to those systems. We will do so by doing Gaussian elimination.
For the first matrix we have
$$\left(\begin{array}{cccc|c}2 & 5 & 2 & 0 & 2\\ 1 & 1 & 1 & 0 & 1\\ 1 & 4 & 1 & 0 & 1\end{array}\right) \stackrel{R_1\leftrightarrow R_2}{\sim} \left(\begin{array}{cccc|c}1 & 1 & 1 & 0 & 1\\ 2 & 5 & 2 & 0 & 2\\ 1 & 4 & 1 & 0 & 1\end{array}\right) \stackrel{R_2-2R_1;\ R_3-R_1}{\sim} \left(\begin{array}{cccc|c}1 & 1 & 1 & 0 & 1\\ 0 & 3 & 0 & 0 & 0\\ 0 & 3 & 0 & 0 & 0\end{array}\right)$$
$$\stackrel{\frac13R_2}{\sim} \left(\begin{array}{cccc|c}1 & 1 & 1 & 0 & 1\\ 0 & 1 & 0 & 0 & 0\\ 0 & 3 & 0 & 0 & 0\end{array}\right) \stackrel{R_1-R_2;\ R_3-3R_2}{\sim} \left(\begin{array}{cccc|c}1 & 0 & 1 & 0 & 1\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\end{array}\right)$$
1. We begin by interchanging the first two rows in order to get a 1 in the upper-left hand corner and avoiding dealing with fractions.
2. Next we subtract row 1 from row 3 and twice from row 2 to get zeros in the left-most column.
3. Then we scale row 2 to have a 1 in the eventual pivot.
4. Finally we subtract row 2 from row 1 and three times from row 3 to get it into Row-Reduced Echelon Form.
Therefore we can write x = 1 − λ, y = 0, z = λ and w = μ, or in vector form
$$\begin{pmatrix}x\\ y\\ z\\ w\end{pmatrix} = \begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix} + \lambda\begin{pmatrix}-1\\ 0\\ 1\\ 0\end{pmatrix} + \mu\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix}.$$
Now for the second system we have
$$\left(\begin{array}{cc|c}5 & 2 & 9\\ 0 & 5 & 10\\ 0 & 3 & 6\end{array}\right) \stackrel{\frac15R_2}{\sim} \left(\begin{array}{cc|c}5 & 2 & 9\\ 0 & 1 & 2\\ 0 & 3 & 6\end{array}\right) \stackrel{R_3-3R_2}{\sim} \left(\begin{array}{cc|c}5 & 2 & 9\\ 0 & 1 & 2\\ 0 & 0 & 0\end{array}\right) \stackrel{R_1-2R_2}{\sim} \left(\begin{array}{cc|c}5 & 0 & 5\\ 0 & 1 & 2\\ 0 & 0 & 0\end{array}\right) \stackrel{\frac15R_1}{\sim} \left(\begin{array}{cc|c}1 & 0 & 1\\ 0 & 1 & 2\\ 0 & 0 & 0\end{array}\right).$$
We scale the second and third rows appropriately in order to avoid fractions, then subtract the corresponding rows as before. Finally scale the first row and hence we have x = 1 and y = 2 as a unique solution.
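As a machine check of these reductions (a sketch assuming SymPy is available; not part of the original script), the rref routine reproduces the same row-reduced forms:

```python
from sympy import Matrix

A = Matrix([[2, 5, 2, 0, 2],
            [1, 1, 1, 0, 1],
            [1, 4, 1, 0, 1]])
B = Matrix([[5, 2, 9],
            [0, 5, 10],
            [0, 3, 6]])

print(A.rref()[0])   # rows (1, 0, 1, 0, 1), (0, 1, 0, 0, 0), (0, 0, 0, 0, 0)
print(B.rref()[0])   # rows (1, 0, 1), (0, 1, 2), (0, 0, 0)
```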
G.11 Elementary Row Operations: Explanation of Proof for Theorem 3.1
The first thing to realize is that there are choices in the Gaus- sian elimination recipe, so maybe that could lead to two different RREF’s and in turn two different solution sets for the same linear system. But that would be weird, in fact this Theorem says that this can never happen!
Because this proof comes at the end of the section it is often glossed over, but it is a very important result. Here’s a sketch of what happens in the video:
In words: we start with a linear system and convert it to an aug- mented matrix. Then, because we are studying a uniqueness state- ment, we try a proof by contradiction. That is the method where to show that a statement is true, you try to demonstrate that the opposite of the statement leads to a contradiction. Here, the opposite statement to the theorem would be to find two different RREFs for the same system.
Suppose, therefore, that Alice and Bob do find different RREF augmented matrices called A and B. Then remove all the non-pivot columns from A and B until you hit the first column that differs. Record that in the last column and call the results Â and B̂. Removing columns does change the solution sets, but it does not ruin row equivalence, so Â and B̂ have the same solution sets.
Now, because we left only the pivot columns (plus the first column that differs) we have
$$\hat A = \begin{pmatrix}I_N & a\\ 0 & 0\end{pmatrix} \qquad\text{and}\qquad \hat B = \begin{pmatrix}I_N & b\\ 0 & 0\end{pmatrix},$$
where I_N is an identity matrix and a and b are column vectors. Importantly, by assumption,
$$a \neq b\,.$$
So if we try to write down the solution sets for Â and B̂ they would be different. But at all stages, we only performed operations that kept Alice's solution set the same as Bob's. This is a contradiction so the proof is complete.
G.12 Elementary Row Operations: Hint for Review Question 3
The first part for Review Question 3 is simple--just write out the associated linear system and you will find the equation 0 = 6 which is inconsistent. Therefore we learn that we must avoid a row of zeros preceding a non-vanishing entry after the vertical bar.
Turning to the system of equations, we first write out the augmented matrix and then perform two row operations:
$$\left(\begin{array}{ccc|c}1 & -3 & 0 & 6\\ 1 & 0 & 3 & -3\\ 2 & k & 3-k & 1\end{array}\right) \stackrel{R_2-R_1;\ R_3-2R_1}{\sim} \left(\begin{array}{ccc|c}1 & -3 & 0 & 6\\ 0 & 3 & 3 & -9\\ 0 & k+6 & 3-k & -11\end{array}\right).$$
Next we would like to subtract some amount of R2 from R3 to achieve a zero in the third entry of the second column. But if
$$k + 6 = 3 - k \quad\Rightarrow\quad k = -\tfrac32,$$
this would produce zeros in the third row before the vertical line. You should also check that this does not make the whole third line zero. You now have enough information to write a complete solution.
G.13 Solution Sets for Systems of Linear Equations: Planes
Here we want to describe the mathematics of planes in space. The video is summarised by the following picture:
A plane is often called R2 because it is spanned by two coordi- nates, and space is called R3 and has three coordinates, usually called (x,y,z). The equation for a plane is
ax + by + cz = d .
Lets simplify this by calling V = (x,y,z) the vector of unknowns
and N = (a, b, c). Using the dot product in R³ we have N · V = d.
Remember that when vectors are perpendicular their dot products vanish, i.e. U · V = 0 ⇔ U ⊥ V. This means that if a vector V0 solves our equation N · V = d, then so too does V0 + C whenever C is perpendicular to N. This is because
    N · (V0 + C) = N · V0 + N · C = d + 0 = d.
But C is ANY vector perpendicular to N, so all the possibilities for C span a plane whose normal vector is N. Hence we have shown that the solutions to the equation ax + by + cz = d form a plane with normal vector N = (a, b, c).
G.14 Solution Sets for Systems of Linear Equations: Pictures and Explanation
This video considers solutions sets for linear systems with three unknowns. These are often called (x, y, z) and label points in R3. Lets work case by case:
If you have no equations at all, then any (x,y,z) is a solu- tion, so the solution set is all of R3. The picture looks a little silly:
For a single equation, the solution is a plane. This is ex- plained in this video or the accompanying script. The picture looks like this:
For two equations, we must look at two planes. These usu- ally intersect along a line, so the solution set will also (usually) be a line:
For three equations, most often their intersection will be a single point so the solution will then be unique:
Of course stuff can go wrong. Two different-looking equations could determine the same plane, or worse, the equations could be inconsistent. If the equations are inconsistent, there will be no solutions at all. For example, if you had four equations determining four parallel planes the solution set would be empty. That looks like this:
G.15 Solution Sets for Systems of Linear Equations: Example
Here is an augmented matrix; let’s think about what the solution set looks like:
\begin{pmatrix} 1 & 0 & 3 & 2 \\ 0 & 1 & 0 & 1 \end{pmatrix}.
This looks like the system
x1 + 3 x3 = 2
x2 = 1 .
Notice that when the system is written this way the copy of the 2 × 2 identity matrix
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
makes it easy to write a solution in terms of the variables x1 and x2. We will call x1 and x2 the pivot variables. The third column
\begin{pmatrix} 3 \\ 0 \end{pmatrix}
does not look like part of an identity matrix, and there is no 3 × 3 identity in the augmented matrix. Notice there are more variables than equations and that this means we will have to write the solutions for the system in terms of the variable x3. We’ll call x3 the free variable.
Let x3 = μ. Then we can rewrite the first equation in our system
x1 + 3 x3 = 2
x1 + 3 μ = 2
x1 = 2 − 3 μ .
Then since the second equation doesn’t depend on μ we can keep the equation
x2 = 1 ,
and for a third equation we can write
x3 = μ
so that we get the system
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 - 3\mu \\ 1 \\ \mu \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} -3\mu \\ 0 \\ \mu \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + \mu \begin{pmatrix} -3 \\ 0 \\ 1 \end{pmatrix}.
So any value of μ will give a solution of the system, and any solution of the system can be written in this form for some value of μ. Since there are multiple solutions, we can also express them as a set:
\left\{ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + \mu \begin{pmatrix} -3 \\ 0 \\ 1 \end{pmatrix} \;\middle|\; \mu \in \mathbb{R} \right\}.
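A small numpy check (my addition, not in the original script) that the parametric family (2 − 3μ, 1, μ) really solves the system for several values of μ:

import numpy as np

A = np.array([[1.0, 0.0, 3.0],   # x1 + 3 x3 = 2
              [0.0, 1.0, 0.0]])  # x2 = 1
b = np.array([2.0, 1.0])
for mu in (-1.0, 0.0, 2.5):
    x = np.array([2.0 - 3.0 * mu, 1.0, mu])
    print(mu, np.allclose(A @ x, b))  # True for every mu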
G.16 Solution Sets for Systems of Linear Equations: Hint
For the first part of this problem, the key is to consider the vector as an n × 1 matrix. For the second part, all you need to show is that
M(α · X + β · Y ) = α · (MX) + β · (MY )
where α, β ∈ R (or whatever field we are using) and
Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{pmatrix}.
Note that this will be somewhat tedious, and many people use sum- mation notation or Einstein’s summation convention with the added notation of Mj denoting the j-th row of the matrix. For example, for any j we have
(MX)^j = \sum_{i=1}^{k} a^j_i x^i = a^j_i x^i .
You can see a concrete example after the definition of the lin-
earity property.
G.17 Vectors in Space, n-Vectors: Overview
What is the space Rn? In short, it is the usual vectors we are used to. For example, if n = 1, then it is just the number line where we either move in the positive or negative directions, and we clearly have a notion of distance. This is something you should understand well, but ultimately there is nothing really interesting that goes on here.
Luckily when n = 2, things begin to get interesting. It is lucky for us because we can represent this by drawing arrows on paper. However what is interesting is that we no longer just have two directions, but an infinite number which we typically encapsulate as 0 to 2π radians (i.e. 0 to 360 degrees). Recall that the length of the vector is also known as its magnitude. We can add vectors by putting them head to toe and we can scale our vectors, and this concept is useful in physics such as Force Vector Diagrams. So why is this system R2? The answer comes from trigonometry, and what I have described is polar coordinates which you should be able to translate back to the usual Cartesian coordinates of (x,y). You still should be familiar with what things look like here.
Now for R3, if we look at this in Cartesian coordinates (x, y, z), this is exactly the same as R2, except we can now move around in our usual ‘‘3D’’ space by basically being able to draw in the air. Now our notion of direction is somewhat more complicated, using azimuth and altitude (see Figure G.17 below), but it is secretly still there. So we will just use the tuple to encapsulate the data of its direction and magnitude. Also we can equivalently write our tuple (x, y, z) as
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
so our notation is consistent with matrix multiplication. Thus for all n ≥ 3, we just use the tuple (x1, x2, . . . , xn) to encapsulate our direction and magnitude and you can just treat vectors in Rn the same way as you would vectors in R2.
Just one final closing remark; I have been somewhat sloppy through here on points and vectors, so make sure you read the note: Points Versus Vectors or Appendix D.
Figure 2: The azimuth and altitude in spherical coordinates.
G.18 Vectors in Space, n-Vectors: Review of Paramet- ric Notation
The equation for a plane in three variables x, y and z looks like ax + by + cz = d
where a, b, c, and d are constants. Lets look at the example x + 2y + 5z = 3 .
In fact this is a system of linear equations whose solutions form a plane with normal vector (1, 2, 5). As an augmented matrix the system is simply
1 2 5 3 .
This is actually RREF! So we can let x be our pivot variable and let y, z be represented by the free parameters λ1 and λ2:
y = λ1 , z = λ2 .
Thus we write the solution as
x = −2 λ1 − 5 λ2 + 3
y = λ1
z = λ2 ,
or in vector notation
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ 0 \end{pmatrix} + \lambda_1 \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} + \lambda_2 \begin{pmatrix} -5 \\ 0 \\ 1 \end{pmatrix}.
This is the parametric equation of a plane. Planes are ‘‘two-dimensional’’ because they are described by two free variables. Here’s a picture of the resulting plane:
G.19 Vectors in Space, n-Vectors: The Story of Your Life
This video talks about the weird notion of a ‘‘length-squared’’ for a vector v = (x, t) given by ||v||2 = x2−t2 used in Einstein’s theory of relativity. The idea is to plot the story of your life on a plane with coordinates (x,t). The coordinate x encodes where an event happened (for real life situations, we must replace x → (x, y, z) ∈ R3). The coordinate t says when events happened. Therefore you can plot your life history as a worldline as shown:
Each point on the worldline corresponds to a place and time of an event in your life. The slope of the worldline has to do with your speed. Or to be precise, the inverse slope is your velocity. Ein- stein realized that the maximum speed possible was that of light, often called c. In the diagram above c = 1 and corresponds to the lines x = ±t ⇒ x2 −t2 = 0. This should get you started in your search for vectors with zero length.
G.20 Vector Spaces: Examples of Each Rule
Lets show that R2 is a vector space. To do this (unless we in- vent some clever tricks) we will have to check all parts of the definition. Its worth doing this once, so here we go:
Before we start, remember that for R2 we define vector addition and scalar multiplication component-wise.
(+i) Additive closure: We need to make sure that when we add \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} and \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} we do not get something outside the original vector space R2. This just relies on the underlying structure of the real numbers, whose sums are again real numbers, so using our component-wise addition law we have
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} := \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix} \in R^2 .
(+ii) Additive commutativity: We want to check that when we add any two vectors we can do so in either order, i.e.
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \stackrel{?}{=} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} .
This again relies on the underlying real numbers which for any x, y ∈ R obey
x + y = y + x .
This fact underlies the middle step of the following computation
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix} = \begin{pmatrix} y_1 + x_1 \\ y_2 + x_2 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} ,
which demonstrates what we wished to show.
(+iii) Additive Associativity: This shows that we needn’t specify with parentheses which order we intend to add triples of vectors because their sums will agree for either choice. What we have to check is
\left( \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) + \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \stackrel{?}{=} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \left( \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \right).
Again this relies on the underlying associativity of real numbers:
(x + y) + z = x + (y + z) .
The computation required is
\left( \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) + \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} (x_1 + y_1) + z_1 \\ (x_2 + y_2) + z_2 \end{pmatrix} = \begin{pmatrix} x_1 + (y_1 + z_1) \\ x_2 + (y_2 + z_2) \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 + z_1 \\ y_2 + z_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \left( \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \right).
(iv) Zero: There needs to exist a vector ⃗0 that works the way we would expect zero to behave, i.e.
\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \vec{0} = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} .
It is easy to find; the answer is
\vec{0} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} .
You can easily check that when this vector is added to any vector, the result is unchanged.
(+v) Additive Inverse: We need to check that when we have \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, there is another vector that can be added to it so the sum is ⃗0. (Note that it is important to first figure out what ⃗0 is here!) The answer for the additive inverse of \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} is \begin{pmatrix} -x_1 \\ -x_2 \end{pmatrix} because
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} -x_1 \\ -x_2 \end{pmatrix} = \begin{pmatrix} x_1 - x_1 \\ x_2 - x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \vec{0} .
We are half-way done; now we need to consider the rules for scalar multiplication. Notice that we multiply vectors by scalars (i.e. numbers) but do NOT multiply a vector by a vector.
(·i) Multiplicative closure: Again, we are checking that an operation does not produce vectors outside the vector space. For a scalar a ∈ R, we require that a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} lies in R^2. First we compute using our component-wise rule for scalars times vectors:
a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a x_1 \\ a x_2 \end{pmatrix} .
Since products of real numbers a x_1 and a x_2 are again real numbers we see this is indeed inside R^2.
(·ii) Multiplicative distributivity: The equation we need to check is
(a + b) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \stackrel{?}{=} a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + b \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} .
Once again this is a simple LHS=RHS proof using properties of the real numbers. Starting on the left we have
(a + b) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} (a+b) x_1 \\ (a+b) x_2 \end{pmatrix} = \begin{pmatrix} a x_1 + b x_1 \\ a x_2 + b x_2 \end{pmatrix} = \begin{pmatrix} a x_1 \\ a x_2 \end{pmatrix} + \begin{pmatrix} b x_1 \\ b x_2 \end{pmatrix} = a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + b \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} ,
as required.
(·iii) Additive distributivity: This time we need to check the equation
a \left( \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) \stackrel{?}{=} a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + a \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} ,
i.e., one scalar but two different vectors. The method is by now becoming familiar
a \left( \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) = a \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix} = \begin{pmatrix} a(x_1 + y_1) \\ a(x_2 + y_2) \end{pmatrix} = \begin{pmatrix} a x_1 + a y_1 \\ a x_2 + a y_2 \end{pmatrix} = \begin{pmatrix} a x_1 \\ a x_2 \end{pmatrix} + \begin{pmatrix} a y_1 \\ a y_2 \end{pmatrix} = a \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + a \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} ,
again as required.
(·iv) Multiplicative associativity: Just as for addition, this is the requirement that the order of bracketing does not matter. We need to establish whether
(a \cdot b) \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \stackrel{?}{=} a \cdot \left( b \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right) .
This clearly holds for real numbers: a \cdot (b \cdot x) = (a \cdot b) \cdot x. The computation is
(a \cdot b) \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} (a \cdot b) \cdot x_1 \\ (a \cdot b) \cdot x_2 \end{pmatrix} = \begin{pmatrix} a \cdot (b \cdot x_1) \\ a \cdot (b \cdot x_2) \end{pmatrix} = a \cdot \begin{pmatrix} b \cdot x_1 \\ b \cdot x_2 \end{pmatrix} = a \cdot \left( b \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right) ,
which is what we want.
(·v) Unity: We need to find a special scalar that acts the way we would expect ‘‘1’’ to behave, i.e.
‘‘1’’ \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} .
There is an obvious choice for this special scalar---just the real number 1 itself. Indeed, to be pedantic lets calculate
1 \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \cdot x_1 \\ 1 \cdot x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} .
Now we are done---we have really proven that R2 is a vector space so lets write a little square to celebrate.
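As a sanity check (my own addition; it does not replace the general proof above), a few of these axioms can be spot-checked numerically for sample vectors in R2:

import numpy as np

u, v, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), np.array([4.0, -1.0])
a, b = 2.0, -0.75
print(np.allclose(u + v, v + u))                 # (+ii) commutativity
print(np.allclose((u + v) + w, u + (v + w)))     # (+iii) associativity
print(np.allclose((a + b) * u, a * u + b * u))   # (.ii) multiplicative distributivity
print(np.allclose(a * (u + v), a * u + a * v))   # (.iii) additive distributivity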
G.21 Vector Spaces: Example of a Vector Space
This video talks about the definition of a vector space. Even though the definition looks long, complicated and abstract, it is actually designed to model a very wide range of real life situations. As an example, consider the vector space
V = {all possible ways to hit a hockey puck} .
The different ways of hitting a hockey puck can all be considered as vectors. You can think about adding vectors by having two play- ers hitting the puck at the same time. This picture shows vectors N and J corresponding to the ways Nicole Darwitz and Jenny Potter hit a hockey puck, plus the vector obtained when they hit the puck together.
You can also model the new vector 2J obtained by scalar multi- plication by 2 by thinking about Jenny hitting the puck twice (or a world with two Jenny Potters....). Now ask yourself questions like whether the multiplicative distributive law
2J + 2N = 2(J + N)
makes sense in this context.
G.22 Vector Spaces: Hint
I will only really worry about the last part of the problem. The problem can be solved by considering a non-zero simple polynomial, such as a degree 0 polynomial, and multiplying by i ∈ C. That is to say, we take a vector p ∈ P3R and then consider i · p. This will violate one of the vector space rules about scalars, and you should take from this that the scalar field matters.
As a second hint, consider Q (the field of rational numbers). This is not a vector space over R since √2 · 1 = √2 ∉ Q, so it is not closed under scalar multiplication, but it is clearly a vector space over Q.
G.23 Linear Transformations: A Linear and A Non- Linear Example
This video gives an example of a linear transformation as well as a transformation that is not linear. In what happens below remember the properties that make a transformation linear:
L(u + v) = L(u) + L(v) \quad\text{and}\quad L(cu) = c L(u) .
The first example is the map
L : R^2 \longrightarrow R^2
via
\begin{pmatrix} x \\ y \end{pmatrix} \mapsto \begin{pmatrix} 2 & -3 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} .
Here we focus on the scalar multiplication property L(cu) = cL(u) which needs to hold for any scalar c ∈ R and any vector u. The additive property L(u + v) = L(u) + L(v) is left as a fun exercise. The calculation looks like this:
L(cu) = L\left( c \begin{pmatrix} x \\ y \end{pmatrix} \right) = L \begin{pmatrix} cx \\ cy \end{pmatrix} = \begin{pmatrix} 2cx - 3cy \\ cx + cy \end{pmatrix} = c \begin{pmatrix} 2x - 3y \\ x + y \end{pmatrix} = c\, L \begin{pmatrix} x \\ y \end{pmatrix} = c\, L(u) .
The first equality uses the fact that u is a vector in R2, next comes the rule for multiplying a vector by a number, then the rule for the given linear transformation L is used. The c is then factored out and we recognize that the vector next to c is just our linear transformation again. This verifies the scalar multiplication property L(cu) = cL(u).
For a non-linear example lets take the vector space R1 = R with
L : R \longrightarrow R
via
x → x + 1 .
This looks linear because the variable x appears once, but the constant term will be our downfall! Computing L(cx) we get
L(cx) = cx + 1 ,
but on the other hand
c L(x) = c(x + 1) = cx + c .
Now we see the problem: unless we are lucky and c = 1, the two expressions above are not equal. Since we need L(cu) = cL(u) for any c, the game is up! The map x → x + 1 is not a linear transformation.
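Here is a short numerical sketch (my own addition) contrasting the two maps: the matrix map above satisfies L(cu) = cL(u), while x → x + 1 does not:

import numpy as np

M = np.array([[2.0, -3.0],
              [1.0,  1.0]])      # the matrix of the linear example above
u = np.array([1.5, -2.0])
c = 4.0
print(np.allclose(M @ (c * u), c * (M @ u)))  # True: scalar multiplication property holds

f = lambda x: x + 1.0             # the second, non-linear example
x = 3.0
print(f(c * x), c * f(x))         # 13.0 versus 16.0: not equal, so f is not linear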
G.24 Linear Transformations: Derivative and Integral of (Real) Polynomials of Degree at Most 3
For this, we consider the vector space PR3 of real coefficient polynomials p whose degree deg p is at most 3. Let D denote the usual derivative operator; we note that it is linear, and we can write it as the matrix
D = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
Similarly now consider the map I where I(f) = \int f(x)\, dx is the indefinite integral on any integrable function f. Now we first note that for any α, β ∈ R, we have
I(α · p + β · q) = \int \big( α\, p(x) + β\, q(x) \big)\, dx = α \int p(x)\, dx + β \int q(x)\, dx = α\, I(p) + β\, I(q),
so I is a linear map on functions. However we note that this is not a well-defined map on vector spaces, since the additive constant means the image is not unique. For example I(3x^2) = x^3 + c where c can be any constant. Therefore we have to perform a definite integral instead, so we define I(f) := \int_0^x f(y)\, dy. The other thing we could do is explicitly choose our constant, and we note that this does not necessarily give the same map (for example, take the constant to be non-zero with polynomials, which in fact will make it non-linear).
Now going to our vector space PR3, if we take any p(x) = α x^3 + β x^2 + γ x + δ, then we have
I(p) = \frac{α}{4} x^4 + \frac{β}{3} x^3 + \frac{γ}{2} x^2 + δ x,
and we note that this is outside of PR3. So to make our image lie in PR3, we formally set I(x^3) = 0. Thus we can now (finally) write this as the linear map I : PR3 → PR3 with the matrix
I = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 \\ 0 & 0 & 1/3 & 0 \end{pmatrix}.
Finally we have
ID = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
and
DI = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},
and note the subspaces that are preserved under these compositions.
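A small numpy sketch (my addition) of these matrices acting on coefficient vectors (a0, a1, a2, a3) of p = a0 + a1 x + a2 x^2 + a3 x^3, which reproduces the two compositions quoted above:

import numpy as np

D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)
I = np.array([[0,   0,   0, 0],
              [1,   0,   0, 0],
              [0, 1/2,   0, 0],
              [0,   0, 1/3, 0]], dtype=float)
print(D @ I)  # diag(1, 1, 1, 0): the x^3 coefficient is lost because I(x^3) was set to 0
print(I @ D)  # diag(0, 1, 1, 1): the constant term is lost by differentiation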
G.25 Linear Transformations: Linear Transformations Hint
The first thing we see in the problem is a definition of this new space Pn. Elements of Pn are polynomials that look like
a0 +a1t+a2t2 +...+antn
where the ai’s are constants. So this means if L is a linear transformation from P2 → P3 that the inputs of L are degree two polynomials which look like
a0 +a1t+a2t2
and the output will have degree three and look like
b0 +b1t+b2t2 +b3t3
We also know that L is a linear transformation, so what does that mean in this case? Well, by linearity we know that we can separate out the sum, and pull out the constants so we get
L(a0 + a1t + a2t2) = a0L(1) + a1L(t) + a2L(t2)
Just this should be really helpful for the first two parts of the problem. The third part of the problem is asking us to think about this as a linear algebra problem, so lets think about how we could write this in the vector notation we use in the class. We could write
a0 a0+a1t+a2t2 as a1
a2
And think for a second about how you add polynomials, you match up terms of the same degree and add the constants component-wise. So it makes some sense to think about polynomials this way, since vector addition is also component-wise.
We could also write the output
b0 +b1t+b2t2 +b3t3 as b1b3
b0 b2
Then lets look at the information given in the problem and think about it in terms of column vectors
L(1) = 4 but we can think of the input 1 = 1+0t+0t2 and the 1 4
0 output 4=4+0t+0t20t3 and write this as L( 0 )=0
00 0 0
0 L(t)=t3 This can be written as L( 1 )=0
01
L(t2) = t − 1 It might be a little trickier to figure out how to write t − 1 but if we write the polynomial out with the terms in order and with zeroes next to the terms that do not appear, we can see that
−1 t−1=−1+t+0t2 +0t3 corresponds to 1
SothiscanbewrittenasL( 0 )=1 10
0 −1 0
Now to think about how you would write the linear transforma- tion L as a matrix, first think about what the dimensions of the matrix would be. Then look at the first two parts of this problem to help you figure out what the entries should be.
0 0
G.26 Matrices: Adjacency Matrix Example
Lets think about a graph as a mini-facebook. In this tiny facebook there are only four people, Alice, Bob, Carl, and David.
Suppose we have the following relationships Alice and Bob are friends.
Alice and Carl are friends. Carl and Bob are friends.
David and Bob are friends.
Now draw a picture where each person is a dot, and then draw a line between the dots of people who are friends. This is an example of a graph if you think of the people as nodes, and the friendships as edges.
Now lets make a 4 × 4 matrix, which is an adjacency matrix for the graph. Make a column and a row for each of the four people. It will look a lot like a table. When two people are friends put a 1 in the row of one and the column of the other. For example Alice and Carl are friends so we can label the table below.
  A B C D
A     1
B
C 1
D
We can continue to label the entries for each friendship. Here lets assume that people are friends with themselves, so the diag- onal will be all ones.
  A B C D
A 1 1 1 0
B 1 1 1 1
C 1 1 1 0
D 0 1 0 1
Then take the entries of this table as a matrix
\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}.
Notice that this table is symmetric across the diagonal, the same way a multiplication table would be symmetric. This is be- cause on facebook friendship is symmetric in the sense that you can’t be friends with someone if they aren’t friends with you too. This is an example of a symmetric matrix.
You could think about what you would have to do differently to draw a graph for something like twitter where you don’t have to follow everyone who follows you. The adjacency matrix might not be symmetric then.
G.27 Matrices: Do Matrices Commute?
This video shows you a funny property of matrices. Some matrix properties look just like those for numbers. For example numbers obey
a(bc) = (ab)c
and so do matrices:
A(BC) = (AB)C .
This says the order of bracketing does not matter and is called associativity. Now we ask ourselves whether the basic property of numbers
ab = ba
holds for matrices:
AB \stackrel{?}{=} BA .
For this, firstly note that we need to work with square matrices for both orderings to even make sense. Lets take a simple 2 × 2 example; let
A = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix}.
In fact, computing AB and BA we get the same result
AB = BA = \begin{pmatrix} 1 & a+b \\ 0 & 1 \end{pmatrix},
so this pair of matrices do commute. Lets try A and C:
AC = \begin{pmatrix} 1 + a^2 & a \\ a & 1 \end{pmatrix} \quad\text{and}\quad CA = \begin{pmatrix} 1 & a \\ a & 1 + a^2 \end{pmatrix},
so
AC ≠ CA
and this pair of matrices does not commute. Generally, matrices usually do not commute, and the problem of finding those that do is a very interesting one.
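A numpy check (my addition) of the commuting and non-commuting pairs above, with a = 2 and b = 5 chosen arbitrarily:

import numpy as np

a, b = 2.0, 5.0
A = np.array([[1.0, a], [0.0, 1.0]])
B = np.array([[1.0, b], [0.0, 1.0]])
C = np.array([[1.0, 0.0], [a, 1.0]])
print(np.allclose(A @ B, B @ A))  # True: A and B commute
print(np.allclose(A @ C, C @ A))  # False: A and C do not commute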
G.28 Matrices: Hint for Review Question 4
This problem just amounts to remembering that the dot product of
x = (x1,x2,...,xn) and y = (y1,y2,...,yn) is x1y1 +x2y2 +···+xnyn .
Then try multiplying the above row vector times yT and compare.
G.29 Matrices: Hint for Review Question 5
The majority of the problem comes down to showing that matrices are right distributive. Let Mk be the set of all n × k matrices for any n, and define the map fR : Mk → Mm by fR(M) = MR where R is some k × m matrix. It should be clear that fR(α · M) = (αM)R = α(MR) = α fR(M) for any scalar α. Now all that needs to be proved is that
fR(M + N) = (M + N)R = MR + NR = fR(M) + fR(N),
and you can show this by looking at each entry.
We can actually generalize the concept of this problem. Let V
be some vector space and M be some collection of matrices, and we say that M is a left-action on V if
(M · N) ◦ v = M ◦ (N ◦ v)
for all M, N ∈ M and v ∈ V, where · denotes multiplication in M (i.e. standard matrix multiplication) and ◦ denotes the matrix acting as a linear map on a vector (i.e. M(v)). There is a corresponding notion of a right action where
v ◦ (M · N) = (v ◦ M) ◦ N
where we treat v◦M as M(v) as before, and note the order in which the matrices are applied. People will often omit the left or right because they are essentially the same, and just say that M acts on V .
G.30 Properties of Matrices: Matrix Exponential Ex- ample
This video shows you how to compute
\exp \begin{pmatrix} 0 & \theta \\ -\theta & 0 \end{pmatrix} .
For this we need to remember that the matrix exponential is defined by its power series
\exp M := I + M + \frac{1}{2!} M^2 + \frac{1}{3!} M^3 + \cdots .
Now lets call
\begin{pmatrix} 0 & \theta \\ -\theta & 0 \end{pmatrix} = i\theta
where the matrix
i := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
and by matrix multiplication is seen to obey
i^2 = -I , \quad i^3 = -i , \quad i^4 = I .
Using these facts we compute by organizing terms according to whether they have an i or not:
\exp i\theta = I + \frac{1}{2!} \theta^2 (-I) + \frac{1}{4!} \theta^4 (+I) + \cdots + i\theta + \frac{1}{3!} \theta^3 (-i) + \frac{1}{5!} \theta^5 i + \cdots
= I \Big( 1 - \frac{1}{2!} \theta^2 + \frac{1}{4!} \theta^4 + \cdots \Big) + i \Big( \theta - \frac{1}{3!} \theta^3 + \frac{1}{5!} \theta^5 + \cdots \Big)
= I \cos\theta + i \sin\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.
Here we used the familiar Taylor series for the cosine and sine functions. A fun thing to think about is how the above matrix acts on vectors in the plane.
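As a numerical cross-check (my addition, using scipy's matrix exponential rather than summing the series by hand), the exponential really is a rotation matrix; here θ = 0.3 is an arbitrary choice:

import numpy as np
from scipy.linalg import expm

theta = 0.3
M = np.array([[0.0, theta], [-theta, 0.0]])
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(np.allclose(expm(M), R))  # True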
G.31 Properties of Matrices: Explanation of the Proof
In this video we will talk through the steps required to prove
trMN = trNM .
There are some useful things to remember, first we can write
M = (m^i_j) \quad\text{and}\quad N = (n^i_j)
where the upper index labels rows and the lower one columns. Then
MN = \Big( \sum_l m^i_l\, n^l_j \Big),
where the ‘‘open’’ indices i and j label rows and columns, but the index l is a ‘‘dummy’’ index because it is summed over. (We could have given it any name we liked!)
Finally the trace is the sum over diagonal entries for which the row and column numbers must coincide
\operatorname{tr} M = \sum_i m^i_i .
Hence starting from the left of the statement we want to prove, we have
LHS = \operatorname{tr} MN = \sum_i \sum_l m^i_l\, n^l_i .
Next we do something obvious, just change the order of the entries m^i_l and n^l_i (they are just numbers) so
\sum_i \sum_l m^i_l\, n^l_i = \sum_i \sum_l n^l_i\, m^i_l .
Equally obvious, we now rename i → l and l → i so
\sum_i \sum_l n^l_i\, m^i_l = \sum_l \sum_i n^i_l\, m^l_i .
Finally, since we have finite sums it is legal to change the order of summations
\sum_l \sum_i n^i_l\, m^l_i = \sum_i \sum_l n^i_l\, m^l_i .
This expression is the same as the one on the line above where we started except the m and n have been swapped so
\sum_i \sum_l n^i_l\, m^l_i = \operatorname{tr} NM = RHS .
This completes the proof.
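A quick numerical sanity check of tr MN = tr NM (my own addition, of course not a substitute for the proof):

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
N = rng.normal(size=(4, 4))
print(np.isclose(np.trace(M @ N), np.trace(N @ M)))  # True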
G.32 Properties of Matrices: A Closer Look at the Trace Function
This seemingly boring function which extracts a single real number does not seem immediately useful, however it is an example of an element in the dual-space of all n × n matrices since it is a bounded linear operator to the underlying field F. By a bounded operator, I mean it will at most scale the length of the matrix (think of it as a vector in Fn2 ) by some fixed constant C > 0 (this can depend upon n), and for example if the length of a matrix M is d, then tr(M) ≤ Cd (I believe C = 1 should work).
Some other useful properties arise for block matrices; it should be clear that we have
\operatorname{tr} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \operatorname{tr} A + \operatorname{tr} D .
and that
\operatorname{tr}(P A P^{-1}) = \operatorname{tr}\big( P (A P^{-1}) \big) = \operatorname{tr}\big( (A P^{-1}) P \big) = \operatorname{tr}(A P^{-1} P) = \operatorname{tr}(A)
so the trace function is conjugate (i.e. similarity) invariant. Using a concept from Chapter 17, it is basis invariant. Addition- ally in later chapters we will see that the trace function can be used to calculate the determinant (in a sense it is the derivative of the determinant, see Lecture 13 Problem 5) and eigenvalues.
Additionally we can define the set sln as the set of all n × n matrices with trace equal to 0, and since the trace is linear and a·0 = 0, we note that sln is a vector space. Additionally we can use the fact tr(MN) = tr(NM) to define an operation called bracket
[M, N] = MN − NM,
and we note that sln is closed under bracket since
tr(MN − NM) = tr(MN) − tr(NM) = tr(MN) − tr(MN) = 0.
G.33 Properties of Matrices: Matrix Exponent Hint
This is a hint for computing exponents of matrices. So what is eA if A is a matrix? We remember that the Taylor series for
e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} .
So as matrices we can think about
e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!} .
This means we are going to have an idea of what A^n looks like for any n. Lets look at the example of one of the matrices in the problem. Let
A = \begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}.
Lets compute A^n for the first few n:
A^0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A^1 = \begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}, \quad A^2 = A \cdot A = \begin{pmatrix} 1 & 2\lambda \\ 0 & 1 \end{pmatrix}, \quad A^3 = A^2 \cdot A = \begin{pmatrix} 1 & 3\lambda \\ 0 & 1 \end{pmatrix}.
There is a pattern here which is that
A^n = \begin{pmatrix} 1 & n\lambda \\ 0 & 1 \end{pmatrix},
then we can think about the first few terms of the sequence
e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!} = A^0 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \cdots .
Looking at the entries when we add this we get that the upper left-most entry looks like this:
1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{1}{n!} = e^1 .
Continue this process with each of the entries using what you know about Taylor series expansions to find the sum of each entry.
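If you carry the hint through, the pattern suggests e^A = e · (1 λ; 0 1); here is a small scipy check of that claim (my addition) with λ = 3 chosen arbitrarily:

import numpy as np
from scipy.linalg import expm

lam = 3.0
A = np.array([[1.0, lam], [0.0, 1.0]])
print(np.allclose(expm(A), np.e * np.array([[1.0, lam], [0.0, 1.0]])))  # True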
G.34 Inverse Matrix: A 2 × 2 Example
Lets go through and show how this 2 × 2 example satisfies all of these properties. Lets look at
M = \begin{pmatrix} 7 & 3 \\ 11 & 5 \end{pmatrix}.
We have a rule to compute the inverse
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
So this means that
M^{-1} = \frac{1}{35 - 33} \begin{pmatrix} 5 & -3 \\ -11 & 7 \end{pmatrix}.
Lets check that M^{-1} M = I = M M^{-1}:
M^{-1} M = \frac{1}{35 - 33} \begin{pmatrix} 5 & -3 \\ -11 & 7 \end{pmatrix} \begin{pmatrix} 7 & 3 \\ 11 & 5 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = I .
You can compute M M^{-1}; this should work the other way too.
Now lets think about products of matrices. Let
A = \begin{pmatrix} 1 & 3 \\ 1 & 5 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}.
Notice that M = AB. We have a rule which says that (AB)^{-1} = B^{-1} A^{-1}. Lets check to see if this works:
A^{-1} = \frac{1}{2} \begin{pmatrix} 5 & -3 \\ -1 & 1 \end{pmatrix} \quad\text{and}\quad B^{-1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix},
and
B^{-1} A^{-1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \frac{1}{2} \begin{pmatrix} 5 & -3 \\ -1 & 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 5 & -3 \\ -11 & 7 \end{pmatrix} = M^{-1} .
G.35 Inverse Matrix: Hints for Problem 3
First I want to state that (b) implies (a) is the easy direction by just thinking about what it means for M to be non-singular and for a linear function to be well-defined. Therefore we assume that M is singular which implies that there exists a non-zero vector X0 such that MX0 = 0. Now assume there exists some vector XV such that MXV = V, and look at what happens to XV +c·X0 for any c in your field. Lastly don’t forget to address what happens if XV does not exist.
G.36 Inverse Matrix: Left and Right Inverses
This video is a hint for question 4 in the Inverse Matrix lecture 10. In the lecture, only inverses for square matrices were discussed, but there is a notion of left and right inverses for matrices that are not square. It helps to look at an example with bits to see why. To start with we look at vector spaces
Z32 = {(x,y,z)|x,y,z = 0,1} and Z22 .
These have 8 and 4 vectors, respectively, that can be depicted as
corners of a cube or square:
Z32 or Z2
Now lets consider a linear transformation
L : Z 32 − → Z 2 2 .
This must be represented by a matrix, and lets take the example
x 0 1 1x Ly= 1 1 0 y:=AX.
zz
Since we have bits, we can work out what L does to every vector, this is listed below
L
(0, 0, 0) → L
(0, 0) (1, 0)
(1, 0) (0, 1)
(0, 1) (1, 1)
(0, 0, 1) → L
(1, 1, 0) → L
(1, 0, 0) → L
(0, 1, 1) → L
(0, 1, 0) → L
(1, 0, 1) → L
(1, 1) (1, 1)
(1, 1, 1) →
Now lets think about left and right inverses. A left inverse B to
the matrix A would obey
BA = I
and since the identity matrix is square, B must be 2 × 3. It would have to undo the action of A and return vectors in Z32 to where they started from. But above, we see that different vectors in Z32 are mapped to the same vector in Z2 by the linear transformation L with matrix A. So B cannot exist. However a right inverse C obeying
AC = I
can. It would be 3 × 2. Its job is to take a vector in Z22 back to one in Z32 in a way that gets undone by the action of A. This can be done, but not uniquely.
G.37 LU Decomposition: Example: How to Use LU Decomposition
Lets go through how to use an LU decomposition to speed up solving a system of equations. Suppose you want to solve for x in the equation Mx = b
\begin{pmatrix} 1 & 0 & -5 \\ 3 & -1 & -14 \\ 1 & 0 & -3 \end{pmatrix} x = \begin{pmatrix} 6 \\ 19 \\ 4 \end{pmatrix}
where you are given the decomposition of M into the product of L and U, which are lower and upper triangular matrices respectively
M = \begin{pmatrix} 1 & 0 & -5 \\ 3 & -1 & -14 \\ 1 & 0 & -3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 1 & 0 & 2 \end{pmatrix} \begin{pmatrix} 1 & 0 & -5 \\ 0 & -1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = LU .
First you should solve L(Ux) = b for Ux. The augmented matrix you would use looks like this
\begin{pmatrix} 1 & 0 & 0 & 6 \\ 3 & 1 & 0 & 19 \\ 1 & 0 & 2 & 4 \end{pmatrix}.
This is an easy augmented matrix to solve because it is lower triangular. If you were to write out the three equations using variables, you would find that the first equation has already been solved, and is ready to be plugged into the second equation. This forward substitution makes solving the system much faster. Try it and in a few steps you should be able to get
\begin{pmatrix} 1 & 0 & 0 & 6 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix}.
This tells us that Ux = \begin{pmatrix} 6 \\ 1 \\ -1 \end{pmatrix}. Now the second part of the problem is to solve for x. The augmented matrix you get is
\begin{pmatrix} 1 & 0 & -5 & 6 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix}.
It should take only a few steps to transform it into
\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & -1 \end{pmatrix},
which gives us the answer x = \begin{pmatrix} 1 \\ -2 \\ -1 \end{pmatrix}.
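A numpy/scipy re-run of the two triangular solves (my addition, not part of the original script):

import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1.0, 0.0, 0.0], [3.0, 1.0, 0.0], [1.0, 0.0, 2.0]])
U = np.array([[1.0, 0.0, -5.0], [0.0, -1.0, 1.0], [0.0, 0.0, 1.0]])
b = np.array([6.0, 19.0, 4.0])
y = solve_triangular(L, b, lower=True)    # forward substitution for L y = b
x = solve_triangular(U, y, lower=False)   # back substitution for U x = y
print(y)  # [ 6.  1. -1.]
print(x)  # [ 1. -2. -1.]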
G.38 LU Decomposition: Worked Example Here we will perform an LU decomposition on the matrix
1 7 2 M=−3 −21 4
163
following the procedure outlined in Section 11.2. So initially we have L1 =I3 and U1 =M, and hence
that
1 7 2
L ′2 U 2′ = 1 6 3 = M ′
1 0 0 L2=−310
1 0 1
1 7 2 U2=0 0 10.
0 −1 −1
However we now have a problem since 0·c = 0 for any value of c since we are working over a field, but we can quickly remedy this by swapping the second and third rows of U2 to get U2′ and note that we just interchange the corresponding rows all columns left of and including the column we added values to in L2 to get L′2. Yet this gives us a small problem as L′2U2′ ̸= M; in fact it gives us the similar matrix M′ with the second and third rows swapped. In our original problem M X = V , we also need to make the corresponding swap on our vector V to get a V′ since all of this amounts to changing the order of our two equations, and note that this clearly does not change the solution. Back to our example, we have
1 0 0 L′2=1 10
−3 0 1
and note that U2′ is upper triangular. Finally you can easily see
−3 −21 4
which solves the problem of L′2U2′X = M′X = V ′. (We note that as
augmented matrices (M′|V ′) ∼ (M|V ).)
1 7 2 U2′=0−1−1,
0 0 10
G.39 LU Decomposition: Block LDU Explanation
This video explains how to do a block LDU decomposition. Firstly remember some key facts about block matrices: It is important that the blocks fit together properly. For example, if we have matrices
matrix shape
X r×r Y r×t Z t×r
W t×t
we could fit these together as a (r + t) × (r + t) square block matrix
XY M=ZW.
Matrix multiplication works for blocks just as for matrix entries:
2 X YX Y X2+YZ XY+YW M = Z W Z W = ZX+WZ ZY+W2 .
Now lets specialize to the case where the square matrix X has an inverse. Then we can multiply out the following triple product of a lower triangular, a block diagonal and an upper triangular matrix:
I0X 0 IX−1Y ZX−1I 0W−ZX−1Y 0I
X 0 I X−1Y =ZW−ZX−1Y 0I
XY = ZX−1Y+Z W−ZX−1Y
XY =ZW=M.
This shows that the LDU decomposition given in Section 11 is correct.
G.40 Elementary Matrices and Determinants: Permu- tations
Lets try to get the hang of permutations. A permutation is a function which scrambles things. Suppose we had
This looks like a function σ that has values σ(1) = 3,σ(2) = 2,σ(3) = 4,σ(4) = 1
Then we could write this as
1 2 3 41234 σ(1) σ(2) σ(3) σ(4) = 3 2 4 1
We could write this permutation in two steps by saying that first weswap3and4,andthenweswap1and3. Theorderhereis important.
This is an even permutation, since the number of swaps we used is two (an even number).
G.41 Elementary Matrices and Determinants: Some Ideas Explained
This video will explain some of the ideas behind elementary matri- ces. First think back to linear systems, for example n equations in n unknowns:
a1x1 +a12x2 +···+a1nxn
= v1 = v2
= vn .
a2x1 +a2x2 +···+a2xn
12n .
an1x1 +an2x2 +···+anxn
We know it is helpful to store the above information with matrices and vectors
a1 a1 ··· a1 x1 v1 12n
a2 a2 ··· a2 x2 v2
12 n M:=. . ., X:=., V :=..
a n1 a n2 · · · a n n x n v n
Here we will focus on the case the M is square because we are interested in its inverse M−1 (if it exists) and its determinant (whose job it will be to determine the existence of M−1).
We know at least three ways of handling this linear system prob- lem:
1. As an augmented matrix
MV.
Here our plan would be to perform row operations until the
system looks like
I M−1V, (assuming that M−1 exists).
2. As a matrix equation
MX = V ,
which we would solve by finding M−1 (again, if it exists), so that
X = M−1V .
3. As a linear transformation
Lets focus on the first two methods. In particular we want to think about how the augmented matrix method can give information about finding M−1. In particular, how it can be used for handling determinants.
The main idea is that the row operations changed the augmented matrices, but we also know how to change a matrix M by multiplying it by some other matrix E, so that M → EM. In particular can we find ‘‘elementary matrices’’ that perform row operations?
Once we find these elementary matrices it is very important to ask how they affect the determinant, but you can think about that for yourself right now.
Lets tabulate our names for the matrices that perform the var- ious row operations:
Row operation
R i ↔ R j
Ri → λRi Ri → Ri + λRj
Elementary Matrix
E ji Ri(λ) Sji(λ)
L : Rn −→ Rn
Rn ∋ X −→ MX ∈ Rn .
via
In this case we have to study the equation L(X) = V because
V ∈Rn.
To finish off the video, here is how all these elementary ma- trices work for a 2 × 2 example. Lets take
a b M=cd.
A good thing to think about is what happens to det M = ad − bc under the operations below.
Row swap:
1 0 1 1 0 1a b c d
E2=10, E2M=10 cd=ab. 325
Scalar multiplying:
1 λ0 1 λ0ab λaλb
R(λ)=01, E2M=01cd=cd. Row sum:
1 1 λ 1 1 λa b a+λc b+λd S2(λ)= 0 1 , S2(λ)M= 0 1 c d = c d .
G.42 Elementary Matrices and Determinants: Hints for Problem 4
Here we will examine the inversion number and the effect of the transposition τ1,2 and τ2,4 on the permutation ν = [3, 4, 1, 2]. Recall that the inversion number is basically the number of items out of order. So the inversion number of ν is 4 since 3>1 and 4>1 and 3 > 2 and 4 > 2. Now we have τ1,2ν = [4,3,1,2] by interchanging the first and second entries, and the inversion number is now 5 since we now also have 4 > 3. Next we have τ2,4ν = [3, 2, 1, 4] whose inversion number is 3 since 3 > 2 > 1. Finally we have τ1,2τ2,4ν = [2, 3, 1, 4] and the resulting inversion number is 2 since 2 > 1 and 3 > 1. Notice how when we are applying τi,j the parity of the inversion number changes.
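A tiny helper (my own, not from the notes) that counts inversions and reproduces the numbers quoted above:

def inversions(perm):
    # count pairs (i, j) with i < j but perm[i] > perm[j]
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

print(inversions([3, 4, 1, 2]))  # 4
print(inversions([4, 3, 1, 2]))  # 5
print(inversions([3, 2, 1, 4]))  # 3
print(inversions([2, 3, 1, 4]))  # 2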
G.43 Elementary Matrices and Determinants II: Ele- mentary Determinants
This video will show you how to calculate determinants of elemen- tary matrices. First remember that the job of an elementary row matrix is to perform row operations, so that if E is an elementary row matrix and M some given matrix,
EM
is the matrix M with a row operation performed on it.
The next thing to remember is that the determinant of the iden- tity is 1. Moreover, we also know what row operations do to deter-
minants:
Row swap Eji: flips the sign of the determinant.
Scalar multiplication Ri(λ): multiplying a row by λ multi- plies the determinant by λ.
Row addition Sji(λ): adding some amount of one row to another does not change the determinant.
The corresponding elementary matrices are obtained by performing exactly these operations on the identity:
E^i_j = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 0 & \cdots & 1 & \\ & & \vdots & \ddots & \vdots & \\ & & 1 & \cdots & 0 & \\ & & & & & \ddots \end{pmatrix}, \qquad R^i(\lambda) = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & \lambda & & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix}, \qquad S^i_j(\lambda) = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & \lambda & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix}.
So to calculate their determinants, we just have to apply the above list of what happens to the determinant of a matrix under row operations to the determinant of the identity. This yields
\det E^i_j = -1 , \quad \det R^i(\lambda) = \lambda , \quad \det S^i_j(\lambda) = 1 .
G.44 Elementary Matrices and Determinants II: De- terminants and Inverses
Lets figure out the relationship between determinants and in- vertibility. If we have a system of equations Mx = b and we have the inverse M−1 then if we multiply on both sides we get x = M−1Mx = M−1b. If the inverse exists we can solve for x and get a solution that looks like a point.
So what could go wrong when we want solve a system of equations and get a solution that looks like a point? Something would go wrong if we didn’t have enough equations for example if we were just given
x+y=1
or maybe, to make this a square matrix M we could write this as
x+y=1 0=0
1 1
The matrix for this would be M = 0 0 and det(M)=0. When we
compute the determinant, this row of all zeros gets multiplied in every term. If instead we were given redundant equations
x+y=1 2x + 2y = 2
1 1
The matrix for this would be M = 2 2 and det(M)=0. But we
know that with an elementary row operation, we could replace the second row with a row of all zeros. Somehow the determinant is able to detect that there is only one equation here. Even if we had a contradictory set of equations such as
x+y=1 2x + 2y = 0,
where it is not possible for both of these equations to be true, the matrix M is still the same, and still has a determinant zero.
Lets look at a three by three example, where the third equation is the sum of the first two equations.
x+y+z=1 y+z=1 x + 2y + 2z = 2
and the matrix for this is
1 1 1 M=0 1 1
122
If we were trying to find the inverse to this matrix using ele- mentary matrices
111100 111 1 00 011010=011 0 10 1 2 2 0 0 1 0 0 0 −1 −1 1
And we would be stuck here. The last row of all zeros cannot be converted into the bottom row of a 3 × 3 identity matrix. This matrix has no inverse, and the row of all zeros ensures that the determinant will be zero. It can be difficult to see when one of the rows of a matrix is a linear combination of the others, and what makes the determinant a useful tool is that with this reasonably simple computation we can find out if the matrix is invertible, and if the system will have a solution of a single point or column vector.
G.45 Elementary Matrices and Determinants II: Prod- uct of Determinants
Here we will prove more directly that the determinant of a prod- uct of matrices is the product of their determinants. First we reference that for a matrix M with rows ri, if M′ is the matrix with rows rj′ = rj +λri for j ̸= i and ri′ = ri, then det(M) = det(M′) Essentially we have M′ as M multiplied by the elementary row sum matrices Sji(λ). Hence we can create an upper-triangular matrix U such that det(M) = det(U) by first using the first row to set m1i → 0 for all i > 1, then iteratively (increasing k by 1 each time) for fixed k using the k-th row to set mki →0 for all i>k.
Now note that for two upper-triangular matrices U = (uji ) and U′ = (u′j), by matrix multiplication we have X = UU′ = (xj) is
ii
upper-triangular and xi = uiu′i. Also since every permutation would iii
contain a lower diagonal entry (which is 0) have det(U) = i ui. Let A and A′ have corresponding upper-triangular matrices U and U′ respectively (i.e. det(A) = det(U)), we note that AA′ has a corresponding upper-triangular matrix UU′, and hence we have
det(AA′) = det(UU′) = uiu′i ii
i
= ui u′i
ii ii
= det(U)det(U′) = det(A)det(A′).
G.46 Properties of the Determinant: Practice taking Determinants
Lets practice taking determinants of 2 × 2 and 3 × 3 matrices. For 2 × 2 matrices we have a formula
\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc .
This formula can be easier to remember when you think about this picture.
Now we can look at three by three matrices and see a few ways to compute the determinant. We have a similar pattern for 3 × 3 matrices.
Consider the example
\det \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} = \big( (1 \cdot 1 \cdot 1) + (2 \cdot 2 \cdot 0) + (3 \cdot 3 \cdot 0) \big) - \big( (3 \cdot 1 \cdot 0) + (1 \cdot 2 \cdot 0) + (3 \cdot 2 \cdot 1) \big) = -5 .
We can draw a picture with similar diagonals to find the terms that will be positive and the terms that will be negative.
Another way to compute the determinant of a matrix is to use this recursive formula. Here I take the coefficients of the first row and multiply them by the determinants of the minors and the cofactor signs. Then we can use the formula for a two by two determinant to compute the determinants of the minors
\det \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} = 1 \det \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} - 2 \det \begin{pmatrix} 3 & 2 \\ 0 & 1 \end{pmatrix} + 3 \det \begin{pmatrix} 3 & 1 \\ 0 & 0 \end{pmatrix} = 1(1 - 0) - 2(3 - 0) + 3(0 - 0) = -5 .
Decide which way you prefer and get good at taking determinants,
you’ll need to compute them in a lot of problems.
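For instance, the 3 × 3 example above can be confirmed with numpy (my addition):

import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [3.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
print(np.linalg.det(M))  # -5.0 (up to floating point error)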
G.47 Properties of the Determinant: The Adjoint Matrix
In this video we show how the adjoint matrix works in detail for the 3×3 case. Recall, that for a 2 × 2 matrix
the matrix
a b M=cd,
d −b N= −c a
had the marvelous property
MN = (detM)I
(you can easily check this for yourself). We call
N := adjM ,
the adjoint matrix of M. When the determinant detM ̸= 0, we can
use it to immediately compute the inverse
M^{-1} = \frac{1}{\det M}\, \operatorname{adj} M .
Lets now think about a 3 × 3 matrix
a b c M=d e f.
ghi
The first thing to remember is that we can compute the determinant by expanding in a row and computing determinants of minors, so
\det M = a \det \begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \det \begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \det \begin{pmatrix} d & e \\ g & h \end{pmatrix}.
We can think of this as the product of a row and column vector
ab det c d
dg
detM= abc −det f i .
ef det h i
Now, we try a little experiment. Lets multiply the same column vector by the other two rows of M
a b a b det c d det c d
d g d g
def −det f i =0= ghi −det f i
e f e f det h i det h i
The answer, ZERO, for both these computations, has been written in already because it is obvious. This is because these two computa- tions are really computing
d e f g h i detd e f and detd e f.
ghi ghi
These vanish because the determinant of an matrix with a pair of equal rows is zero. Thus we have found a nice result
ab det c d
d g 1 M −det = detM 0 .
fi0 ef
det h i
Notice the answer is the number detM times the first column of the identity matrix. In fact, the column vector above is exactly the first column of the adjoint matrix adjM. The rule how to get the rest of the adjoint matrix is not hard. You first compute the cofactor matrix obtained by replacing the entries of M with the signed determinants of the corresponding minors got by deleting the row and column of the particular entry. For the 3×3 case this is
e f d f d e det h i −det g i det g h
b c a c a b
cofactorM=−det h i det g i −det g h .
bc ac ab det e f −det d f det c d
Then the adjoint is just the transpose
\operatorname{adj} M = (\operatorname{cofactor} M)^T .
Computing all this is a little tedious, but always works, even for any n × n matrix. Moreover, when det M ≠ 0, we thus obtain the inverse
M^{-1} = \frac{1}{\det M}\, \operatorname{adj} M .
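A sympy sketch (my addition) verifying M · adj M = (det M) I symbolically; sympy calls the adjoint matrix the adjugate:

from sympy import Matrix, symbols, simplify, eye

a, b, c, d, e, f, g, h, i = symbols('a b c d e f g h i')
M = Matrix([[a, b, c], [d, e, f], [g, h, i]])
print(simplify(M * M.adjugate() - M.det() * eye(3)))  # the zero matrix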
G.48 Properties of the Determinant: Hint for Prob- lem 3
For an arbitrary 3 × 3 matrix A = (aij ), we have
det(A) = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31}
and so the complexity is 5a + 12m. Now note that in general, the complexity cn of the expansion minors formula of an arbitrary n×n matrix should be
cn = (n−1)a+ncn−1m
since det(A) = ni=1(−1)ia1i cofactor(a1i ) and cofactor(a1i ) is an (n − 1) ×
(n − 1) matrix. This is one way to prove part (c).
G.49 Subspaces and Spanning Sets: Worked Example
Suppose that we were given a set of linear equations lj (x1, x2, . . . , xn) and we want to find out if lj(X) = vj for all j for some vector V = (vj). We know that we can express this as the matrix equation
l ij x i = v j i
where lij is the coefficient of the variable xi in the equation lj. However, this is also stating that V is in the span of the vectors {Li}i where Li = (lij)j. For example, consider the set of equations
2x + 3y − z = 5 −x + 3y + z = 1 x + y − 2z = 3
which corresponds to the matrix equation
2 3 −1x 5 −1 3 1y=1.
11−2z 3
We can thus express this problem as determining if the vector
lies in the span of
5 V = 1
3
2 3 −1
−1,3, 1 . 1 1 −2
G.50 Subspaces and Spanning Sets: Hint for Prob- lem 2
We want to check whether
x−x3 ∈span{x2,2x+x2,x+x3}
If you are wondering what it means to be in the span of these polynomials here is an example
2(x2) + 5(2x + x2) ∈ span{x2, 2x + x2, x + x3}
Linear combinations where the polynomials are multiplied by scalars in R is fine. We are not allowed to multiply the polynomials to- gether, since in a vector space there is not necessarily a notion of multiplication for two vectors.
Lets put this problem in the language of matrices. Since we can write x2 = 0+0x+1×2 +0x3 we can write it as a column vector, where the coefficient of each of the terms is an entry.
0 0 0 x2 =0 , 2x+x2 =2 and x+x3 =1
1 1 0 001
0 Sincewewanttofindoutifx−x3 =1isinthespanofthese
polynomials above we can ask, do there exist r1, r2 and r3 such that 0 0 0r1 0
021r2 =1 1 1 0 0
001r3 −1
There are two ways to do this, one is by finding a r1, r2 and r3 that work, another is to notice that there are no constant terms in any of the equations and to simplify the system so that it becomes
0 2 1r1 1 1 1 0r2=0 001r3 −1
0 −1
From here you can determine if the now square matrix has an inverse. If the matrix has an inverse you can say that there are r1, r2 and r3 that satisfy this equation, without actually finding them.
G.51 Subspaces and Spanning Sets: Hint
This is a hint for the problem on intersections and unions of subspaces.
For the first part, try drawing an example in R3:
Here we have taken the subspace W to be a plane through the origin and U to be a line through the origin. The hint now is to think about what happens when you add a vector u ∈ U to a vector w ∈ W. Does this live in the union U ∪ W?
For the second part, we take a more theoretical approach. Lets suppose that v∈U∩W and v′ ∈U∩W. This implies
v∈U and v′∈U.
So, since U is a subspace and all subspaces are vector spaces, we
know that the linear combination
αv+βv′ ∈U.
Now repeat the same logic for W and you will be nearly done.
G.52 Linear Independence: Worked Example
This video gives some more details behind the example for the fol- lowing four vectors in R3 Consider the following vectors in R3:
4 −3 5 −1 v1 =−1, v2 =7, v3 =12, v4 =1.
3 4 17 0
The example asks whether they are linearly independent, and the answer is immediate: NO, four vectors can never be linearly in- dependent in R3. This vector space is simply not big enough for that, but you need to understand the notion of the dimension of a vector space to see why. So we think the vectors v1, v2, v3 and v4 are linearly dependent, which means we need to show that there is a solution to
α1v1 +α2v2 +α3v3 +α4v4 = 0
for the numbers α1, α2, α3 and α4 not all vanishing.
To find this solution we need to set up a linear system. Writing
out the above linear combination gives
4α1 −3α2 +5α3 −α4 = 0, −α1 +7α2 +12α3 +α4 = 0 , 3α1 +4α2 +17α3 = 0.
This can be easily handled using an augmented matrix whose columns are just the vectors we started with
\left( \begin{array}{cccc|c} 4 & -3 & 5 & -1 & 0 \\ -1 & 7 & 12 & 1 & 0 \\ 3 & 4 & 17 & 0 & 0 \end{array} \right).
Since there are only zeros in the right hand column, we can drop it. Now we perform row operations to achieve RREF
\begin{pmatrix} 4 & -3 & 5 & -1 \\ -1 & 7 & 12 & 1 \\ 3 & 4 & 17 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & \frac{71}{25} & -\frac{4}{25} \\ 0 & 1 & \frac{53}{25} & \frac{3}{25} \\ 0 & 0 & 0 & 0 \end{pmatrix}.
This says that α3 and α4 are not pivot variables so are arbitrary; we set them to μ and ν, respectively. Thus
α1 = -\frac{71}{25} μ + \frac{4}{25} ν , \quad α2 = -\frac{53}{25} μ - \frac{3}{25} ν , \quad α3 = μ , \quad α4 = ν .
Thus we have found a relationship among our four vectors
\Big( -\frac{71}{25} μ + \frac{4}{25} ν \Big) v_1 + \Big( -\frac{53}{25} μ - \frac{3}{25} ν \Big) v_2 + μ\, v_3 + ν\, v_4 = 0 .
In fact this is not just one relation, but infinitely many, for any choice of μ, ν. The relationship quoted in the notes is just one of those choices.
Finally, since the vectors v1, v2, v3 and v4 are linearly dependent, we can try to eliminate some of them. The pattern here is to keep the vectors that correspond to columns with pivots. For example, setting μ = −1 (say) and ν = 0 in the above allows us to solve for v3, while μ = 0 and ν = −1 (say) gives v4; explicitly we get
v_3 = \frac{71}{25} v_1 + \frac{53}{25} v_2 , \qquad v_4 = -\frac{4}{25} v_1 + \frac{3}{25} v_2 .
This eliminates v3 and v4 and leaves a pair of linearly independent vectors v1 and v2.
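A quick numpy confirmation (my addition) of the two dependencies found above:

import numpy as np

v1, v2 = np.array([4.0, -1.0, 3.0]), np.array([-3.0, 7.0, 4.0])
v3, v4 = np.array([5.0, 12.0, 17.0]), np.array([-1.0, 1.0, 0.0])
print(np.allclose(v3, 71/25 * v1 + 53/25 * v2))  # True
print(np.allclose(v4, -4/25 * v1 + 3/25 * v2))   # True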
G.53 Linear Independence: Proof of Theorem 16.1
Here we will work through a quick version of the proof. Let {vi} denote a set of linearly dependent vectors, so i civi = 0 where there exists some ck ̸= 0. Now without loss of generality we order our vectors such that c1 ̸= 0, and we can do so since addition is commutative (i.e. a + b = b + a). Therefore we have
n
c1v1 =−civi
i=2 n ci
v1 = −
and we note that this argument is completely reversible since every
ci ̸= 0 is invertible and 0/ci = 0.
i=2
c1 vi
G.54 Linear Independence: Hint for Problem 1
Lets first remember how Z2 works. The only two elements are 1 and 0. Which means when you add 1 + 1 you get 0. It also means when you have a vector ⃗v ∈ Bn and you want to multiply it by a scalar, your only choices are 1 and 0. This is kind of neat because it means that the possibilities are finite, so we can look at an entire vector space.
Now lets think about B3 there is choice you have to make for each coordinate, you can either put a 1 or a 0, there are three places where you have to make a decision between two things. This means that you have 23 = 8 possibilities for vectors in B3.
When you want to think about finding a set S that will span B3 and is linearly independent, you want to think about how many vectors you need. You will need you have enough so that you can make every vector in B3 using linear combinations of elements in S but you don’t want too many so that some of them are linear com- binations of each other. I suggest trying something really simple perhaps something that looks like the columns of the identity ma- trix
For part (c) you have to show that you can write every one of the elements as a linear combination of the elements in S, this will check to make sure S actually spans B3.
For part (d) if you have two vectors that you think will span the space, you can prove that they do by repeating what you did in part (c), check that every vector can be written using only copies of of these two vectors. If you don’t think it will work you should show why, perhaps using an argument that counts the number of possible vectors in the span of two vectors.
G.55 Basis and Dimension: Proof of Theorem
Lets walk through the proof of this theorem. We want to show that for S = {v1, . . . , vn} a basis for a vector space V , then every vector w ∈ V can be written uniquely as a linear combination of vectors in the basis S:
w = c1v1 + · · · + cnvn.
We should remember that since S is a basis for V , we know two
things
V =spanS
v1 , . . . , vn are linearly independent, which means that whenever we have a1v1 +…+anvn = 0 this implies that ai = 0 for all i = 1,…,n.
This first fact makes it easy to say that there exist constants ci such that w = c1v1 +···+cnvn. What we don’t yet know is that these c1,…cn are unique.
In order to show that these are unique, we will suppose that they are not, and show that this causes a contradiction. So suppose there exists a second set of constants di such that
w = d1v1 + · · · + dnvn .
For this to be a contradiction we need to have ci ̸= di for some i. Then look what happens when we take the difference of these two versions of w:
0V =w−w
= (c1v1 +···+cnvn)−(d1v1 +···+dnvn) = (c1 −d1)v1 +···+(cn −dn)vn.
Since the vi’s are linearly independent this implies that ci − di = 0 for all i, this means that we cannot have ci ̸= di, which is a contradiction.
G.56 Basis and Dimension: Worked Example
In this video we will work through an example of how to extend a set of linearly independent vectors to a basis. For fun, we will take the vector space
V = {(x,y,z,w)|x,y,z,w ∈ Z5}.
This is like four dimensional space R4 except that the numbers can
only be {0,1,2,3,4}. This is like bits, but now the rule is 0=5.
Thus, for example, 4 · 4 = 1, because 16 = 1 + 3 × 5 = 1. Don’t get too caught up on this aspect, it's a choice of base field designed to make computations go quicker!
Now, here’s the problem we will solve:
1 0
3 2 Find a basis for V that includes the vectors 2 and 3.
41
The way to proceed is to add a known (and preferably simple) basis to the vectors given, thus we consider
1 0 1 0 0 0
3 2 0 0 1 0 v1 =2, v2 =3, e1 =0, e2 =1, e3 =0, e4 =0.
410001
The last four vectors are clearly a basis (make sure you understand this….) and are called the canonical basis. We want to keep v1 and v2 but find a way to turf out two of the vectors in the canonical basis leaving us a basis of four vectors. To do that, we have to study linear independence, or in other words a linear system problem defined by
0 = α1 e1 + α2 e2 + α3 v1 + α4 v2 + α5 e3 + α6 e4 .
We want to find solutions for the α′s which allow us to determine two of the e′s. For that we use an augmented matrix
1010000 2301000. 3 2 0 0 1 0 0
4100010
Next comes a bunch of row operations. Note that we have dropped the last column of zeros since it has no information–you can fill in the row operations used above the ∼’s as an exercise:
101000 101000 2 3 0 1 0 0∼0 3 3 1 0 0
320010 022010
410001 011001
101000 101000 ∼0 1 1 2 0 0∼0 1 1 2 0 0 022010 000110
011001 000301 101000 101000
∼0 1 1 0 3 0∼0 1 1 0 3 0
000110 000110 000021 000013
1 0 1 0 0 0 ∼0 1 1 0 0 1 0 0 0 1 0 2
000013
The pivots are underlined. The columns corresponding to non-pivot variables are the ones that can be eliminated–their coefficients (the α’s) will be arbitrary, so set them all to zero save for the one next to the vector you are solving for which can be taken to be unity. Thus that vector can certainly be expressed in terms of
previous ones. Hence, altogether, our basis is
1000
2, 3, 1, 0.
3 2 0 1 4100
Finally, as a check, note that e1 = v1 + v2 which explains why we had to throw it away.
G.57 Basis and Dimension: Hint for Problem 2
Since there are two possible values for each entry, we have |Bn| = 2n. We note that dimBn = n as well. Explicitly we have B1 = {(0),(1)} so there is only 1 basis for B1. Similarly we have
2 0 1 0 1 B= 0,0,1,1
and so choosing any two non-zero vectors will form a basis. Now in general we note that we can build up a basis {ei} by arbitrarily (independently) choosing the first i − 1 entries, then setting the i-th entry to 1 and all higher entries to 0.
G.58 Eigenvalues and Eigenvectors: Worked Example
Lets consider a linear transformation
L : V −→ W
where a basis for V is the pair of vectors {→,↑} and a basis for W is given by some other pair of vectors {↗,↖}. (Don’t be afraid that we are using arrows instead of latin letters to denote vectors!) To test your understanding, see if you know what dimV and dim W are. Now suppose that L does the following to the basis vectors in V
will be a row vector whose entries are vectors).
L(→) L(↑)=a↗+ c↖ b↗+ d↖.
Now we rewrite the right hand side as a matrix acting from the
right on the basis vectors in W:
ab
L(→) L(↑) = ↗ ↖ c d .
The matrix on the right is the matrix of L with respect to this pair of bases.
We can also write what happens when L acts on a general vector v∈V. Such a v can be written
v=x→+y↑.
First we compute L acting on this using linearity of L
L(v)=L(x→ + y↑)
and then arrange this as a row vector (whose entries are vectors)
times a column vector of numbers
x L(v)= L(→) L(↑) y .
LL
→ → a↗+ c↖=:L(→), ↑ → b↗+ d↖=:L(↑).
Now arrange L acting on the basis vectors in a row vector (this
Now we use our result above for the row vector(L(→) L(↑)) and obtain a bx
L(v)= ↗ ↖ c d y .
Finally, as a fun exercise, suppose that you want to make a
change of basis in W via
↗=→+↑ and ↖= −→+↑.
Can you compute what happens to the matrix of L?
G.59 Eigenvalues and Eigenvectors: 2 × 2 Example Here is an example of how to find the eigenvalues and eigenvectors
of a 2×2 matrix.
4 2 M=13.
Remember that an eigenvector v with eigenvalue λ for M will be a vector such that Mv = λv i.e. M(v) − λI(v) = ⃗0. When we are talking about a nonzero v then this means that det(M − λI) = 0. We will start by finding the eigenvalues that make this statement true.
4 2 λ 0 4−λ 2 det(M−λI)=det 1 3 − 0 λ =det 1 3−λ
After computing this determinant det(M − λI) = (4 − λ)(3 − λ) − 2 · 1 we set this equal to zero to find values of λ that make this true.
(4−λ)(3−λ)−2·1=10−7λ+λ2 =(2−λ)(5−λ)=0
Which means that λ = 2 and λ = 5 are solutions. Now if we want to find the eigenvectors that correspond to these values we look at
vectors v such that
4−λ 2
1 3−λ v=0
For λ = 5
4−5 2 x −1 2x
1 3−5 y = 1 −2 y =0,
This gives us the equalities −x+2y = 0 and x−2y = 0 which both 1 2
give the line y = 2 x. Any point on this line, so for example 1 is an eigenvector with eigenvalue λ = 5.
Now let's find the eigenvector for λ = 2:

\begin{pmatrix} 4−2 & 2 \\ 1 & 3−2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = 0,

which gives the equalities 2x + 2y = 0 and x + y = 0. This means any vector v = \begin{pmatrix} x \\ y \end{pmatrix} where y = −x, such as \begin{pmatrix} 1 \\ −1 \end{pmatrix}, or any scalar multiple of this vector, i.e. any vector on the line y = −x, is an eigenvector with eigenvalue 2. This solution could be written neatly as

λ_1 = 5, v_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}  and  λ_2 = 2, v_2 = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.
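If you want to double-check this example on a computer, a minimal numpy verification (using the built-in eigensolver rather than the hand computation) looks like this:

```python
# Check the eigenvalues and eigenvectors found above for M = [[4, 2], [1, 3]].
import numpy as np

M = np.array([[4.0, 2.0],
              [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)            # e.g. [5. 2.] (the order may vary)
# numpy returns unit eigenvectors; they are scalar multiples of (2, 1) and (1, -1).
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, v, np.allclose(M @ v, lam * v))
```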
G.60 Eigenvalues and Eigenvectors: Jordan Cells
Consider the matrix

J_2 = \begin{pmatrix} λ & 1 \\ 0 & λ \end{pmatrix},
and we note that we can just read off the eigenvector e_1 with eigenvalue λ. However the characteristic polynomial of J_2 is P_{J_2}(μ) = (μ − λ)^2, so the only possible eigenvalue is λ, but we claim it does not have a second eigenvector v. To see this, we require that
λv_1 + v_2 = λv_1
λv_2 = λv_2,

which clearly implies that v_2 = 0. This is known as a Jordan 2-cell, and in general, a Jordan n-cell with eigenvalue λ is (similar to) the n × n matrix

J_n = \begin{pmatrix} λ & 1 & 0 & \cdots & 0 \\ 0 & λ & 1 & \ddots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & λ & 1 \\ 0 & \cdots & 0 & 0 & λ \end{pmatrix},
which has a single eigenvector e1. Now consider the following matrix
M = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{pmatrix}
and we see that P_M(λ) = (λ−3)^2(λ−2). Therefore for λ = 3 we need to find the solutions to (M − 3I_3)v = 0, or in equation form:

v_2 = 0
v_3 = 0
−v_3 = 0,
and we immediately see that we must have v = e_1. Next, for λ = 2 we need to solve (M − 2I_3)v = 0, or
v_1 + v_2 = 0
v_2 + v_3 = 0
0 = 0,
and thus we choose v1 = 1, which implies v2 = −1 and v3 = 1. Hence this is the only other eigenvector for M.
This is a specific case of Problem 20.5.
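Here is a small numpy sketch (an illustration, not from the text) that confirms M is defective: each eigenvalue contributes only a one-dimensional eigenspace, so there is no eigenbasis:

```python
# Geometric multiplicities of the eigenvalues of M = [[3,1,0],[0,3,1],[0,0,2]].
import numpy as np

M = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)
# lambda = 3 has algebraic multiplicity 2 but nullity(M - 3I) = 1, so it has
# only one independent eigenvector; hence M is not diagonalizable.
print(3 - np.linalg.matrix_rank(M - 3 * I))   # 1  (geometric multiplicity of 3)
print(3 - np.linalg.matrix_rank(M - 2 * I))   # 1  (geometric multiplicity of 2)
```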
G.61 Eigenvalues and Eigenvectors II: Eigenvalues
Eigenvalues and eigenvectors are extremely important. In this video we review the theory of eigenvalues. Consider a linear transformation
L : V −→ V
where dimV = n < ∞. Since V is finite dimensional, we can represent L by a square matrix M by choosing a basis for V .
So the eigenvalue equation

Lv = λv

becomes

Mv = λv,
where v is a column vector and M is an n×n matrix (both expressed in whatever basis we chose for V). The scalar λ is called an eigenvalue of M and the job of this video is to show you how to find all the eigenvalues of M.
The first step is to put all terms on the left hand side of the equation; this gives
(M − λI)v = 0 .
Notice how we used the identity matrix I in order to get a matrix
times v equaling zero. Now here comes a VERY important fact:

Nu = 0 and u ≠ 0  ⟺  det N = 0.
I.e., a square matrix can have an eigenvector with vanishing eigenvalue if and only if its determinant vanishes! Hence
det(M − λI) = 0.
The quantity on the left (up to a possible minus sign) equals the so-called characteristic polynomial
P_M(λ) := det(λI − M).
It is a polynomial of degree n in the variable λ. To see why, try
a simple 2 × 2 example
det\left[\begin{pmatrix} a & b \\ c & d \end{pmatrix} − \begin{pmatrix} λ & 0 \\ 0 & λ \end{pmatrix}\right] = det\begin{pmatrix} a−λ & b \\ c & d−λ \end{pmatrix} = (a−λ)(d−λ) − bc,
which is clearly a polynomial of order 2 in λ. For the n × n case, the order n term comes from the product of the diagonal matrix elements.
There is an amazing fact about polynomials called the fundamental theorem of algebra: they can always be factored over the complex numbers. This means that a degree n polynomial has n complex roots (counted with multiplicity). The word can does not mean that explicit formulas for the roots are known (in fact explicit formulas can only be given for degree four or less). The necessity for complex numbers is easily seen from a polynomial like
z^2 + 1,

whose roots would require us to solve z^2 = −1, which is impossible for real number z. However, introducing the imaginary unit i with

i^2 = −1,

we have

z^2 + 1 = (z − i)(z + i).
Returning to our characteristic polynomial, we call on the funda-
mental theorem of algebra to write
P_M(λ) = (λ − λ_1)(λ − λ_2) · · · (λ − λ_n).

The roots λ_1, λ_2, . . . , λ_n are the eigenvalues of M (or its underlying linear transformation L).
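As a concrete sketch (using the 2 × 2 matrix from the earlier worked example), numpy can produce the characteristic polynomial coefficients and their roots; np.poly and np.roots are standard numpy calls:

```python
# The characteristic polynomial of an n x n matrix has degree n, and its
# roots are the eigenvalues.
import numpy as np

M = np.array([[4.0, 2.0],
              [1.0, 3.0]])
coeffs = np.poly(M)            # coefficients of det(lambda*I - M), highest power first
print(coeffs)                  # [ 1. -7. 10.]  i.e. lambda^2 - 7*lambda + 10
print(np.roots(coeffs))        # [5. 2.]  -- the eigenvalues of M
```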
G.62 Eigenvalues and Eigenvectors II: Eigenspaces
Consider the linear map

L = \begin{pmatrix} -4 & 6 & 6 \\ 0 & 2 & 0 \\ -3 & 3 & 5 \end{pmatrix}.

Direct computation will show that we have

L = Q \begin{pmatrix} -1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} Q^{-1}

where

Q = \begin{pmatrix} 2 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.

Therefore the vectors

v_1^{(2)} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \qquad v_2^{(2)} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}

span the eigenspace E^{(2)} of the eigenvalue 2, and for an explicit example, if we take

v = 2 v_1^{(2)} − v_2^{(2)} = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}

we have

Lv = \begin{pmatrix} 2 \\ -2 \\ 4 \end{pmatrix} = 2v,

so v ∈ E^{(2)}. In general, we note that linearly independent vectors v_i^{(λ)} with the same eigenvalue λ span an eigenspace, since for any v = \sum_i c^i v_i^{(λ)} we have

Lv = \sum_i c^i L v_i^{(λ)} = \sum_i c^i λ v_i^{(λ)} = λ \sum_i c^i v_i^{(λ)} = λ v.
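A quick numpy check of the eigenspace computation above:

```python
# Verify that v1, v2 and any combination of them lie in the eigenspace E^(2).
import numpy as np

L = np.array([[-4.0, 6.0, 6.0],
              [ 0.0, 2.0, 0.0],
              [-3.0, 3.0, 5.0]])
v1 = np.array([1.0, 0.0, 1.0])   # eigenvectors spanning E^(2)
v2 = np.array([1.0, 1.0, 0.0])
v = 2 * v1 - v2                  # any combination stays in the eigenspace
print(np.allclose(L @ v1, 2 * v1), np.allclose(L @ v2, 2 * v2), np.allclose(L @ v, 2 * v))
# True True True
```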
G.63 Eigenvalues and Eigenvectors II: Hint
We are looking at the matrix M, and a sequence of vectors starting with v^{(0)} = \begin{pmatrix} x^{(0)} \\ y^{(0)} \end{pmatrix} and defined recursively so that

v^{(1)} = \begin{pmatrix} x^{(1)} \\ y^{(1)} \end{pmatrix} = M \begin{pmatrix} x^{(0)} \\ y^{(0)} \end{pmatrix}.

We first examine the eigenvectors and eigenvalues of

M = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}.

We can find the eigenvalues and vectors by solving

det(M − λI) = det\begin{pmatrix} 3−λ & 2 \\ 2 & 3−λ \end{pmatrix} = 0

for λ. By computing the determinant and solving for λ we can find the eigenvalues λ = 1 and 5, and the corresponding eigenvectors. You should do the computations to find these for yourself.

When we think about the question in part (b), where we want to find a vector v^{(0)} such that v^{(0)} = v^{(1)} = v^{(2)} = · · ·, we are looking for a vector that satisfies v = Mv. Consider that eigenvectors are a possibility, but think about what the eigenvalue would have to be. If you found a v^{(0)} with this property, would cv^{(0)} for a scalar c also work? Remember that eigenvectors have to be nonzero, so what if c = 0?

For part (c), if we tried an eigenvector, would we have restrictions on what the eigenvalue should be? Think about what it means to be pointed in the same direction.
G.64 Diagonalization: Derivative Is Not Diagonalizable
First recall that the derivative operator is linear and that we can write it as the matrix

\frac{d}{dx} = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 2 & 0 & \cdots \\ 0 & 0 & 0 & 3 & \cdots \\ \vdots & & & & \ddots \end{pmatrix}.

We note that this transforms into an infinite Jordan cell with eigenvalue 0, or

\begin{pmatrix} 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ \vdots & & & & \ddots \end{pmatrix},

which is in the basis \{x^n/n!\}_n (where for n = 0, we just have 1). Therefore we note that 1 (constant polynomials) is the only eigenvector with eigenvalue 0 for polynomials, since they have finite degree, and so the derivative is not diagonalizable. Note that we are ignoring infinite cases for simplicity, but if you want to consider infinite terms such as convergent series or all formal power series, where there are no conditions on convergence, there are many eigenvectors. Can you find some? This is an example of how things can change in infinite dimensional spaces.

For a more finite example, consider the space P^C_3 of complex polynomials of degree at most 3, and recall that the derivative D can be written as

D = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.

You can easily check that the only eigenvector is 1 with eigenvalue 0, since D always lowers the degree of a polynomial by 1 each time it is applied. Note that this is a nilpotent matrix since D^4 = 0, but the only nilpotent matrix that is ''diagonalizable'' is the 0 matrix.
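For the finite example, a short numpy sketch confirms the three claims: every eigenvalue of D is zero, the eigenspace is one dimensional (just the constants), and D is nilpotent:

```python
# The derivative on complex polynomials of degree at most 3, in the basis
# {1, x, x^2, x^3}.
import numpy as np

D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])
print(np.linalg.eig(D)[0])                            # all eigenvalues are 0
print(4 - np.linalg.matrix_rank(D))                   # nullity 1: only constants are eigenvectors
print(np.allclose(np.linalg.matrix_power(D, 4), 0))   # D is nilpotent: D^4 = 0
```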
G.65 Diagonalization: Change of Basis Example
This video returns to our first example of a barrel filled with fruit
as a demonstration of changing basis.
Since this was a linear systems problem, we can try to represent
what’s in the barrel using a vector space. The first representa- tion was the one where (x, y) = (apples, oranges):
[Figure: the point (x, y) plotted in a plane whose axes are labeled Apples and Oranges.]
Calling the basis vectors \vec{e}_1 := (1, 0) and \vec{e}_2 := (0, 1), this representation would label what's in the barrel by a vector

\vec{x} := x\vec{e}_1 + y\vec{e}_2 = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
Since this is the method ordinary people would use, we will call this the ‘‘engineer’s’’ method!
But this is not the approach nutritionists would use. They would note the amount of sugar and total number of fruit (s,f):
[Figure: the same barrel contents plotted in a plane whose axes are labeled sugar and fruit, as the point (s, f).]
WARNING: To make sense of what comes next you need to allow for the possibility of a negative amount of fruit or sugar. This would be just like a bank, where if money is owed to somebody else, we can use a minus sign.
The vector \vec{x} says what is in the barrel and does not depend on which mathematical description is employed. The way nutritionists label \vec{x} is in terms of a pair of basis vectors \vec{f}_1 and \vec{f}_2:

\vec{x} = s\vec{f}_1 + f\vec{f}_2 = \begin{pmatrix} \vec{f}_1 & \vec{f}_2 \end{pmatrix} \begin{pmatrix} s \\ f \end{pmatrix}.
Thus our vector space now has a bunch of interesting vectors:
The vector \vec{x} labels generally the contents of the barrel. The vector \vec{e}_1 corresponds to one apple and no oranges. The vector \vec{e}_2 is one orange and no apples. The vector \vec{f}_1 means one unit of sugar and zero total fruit (to achieve this you could lend out some apples and keep a few oranges). Finally the vector \vec{f}_2 represents a total of one piece of fruit and no sugar.
You might remember that the amount of sugar in an apple is called
λ while oranges have twice as much sugar as apples. Thus

s = λ(x + 2y)
f = x + y.
Essentially, this is already our change of basis formula, but let's play around and put it in our notations. First we can write this as a matrix

\begin{pmatrix} s \\ f \end{pmatrix} = \begin{pmatrix} λ & 2λ \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.

We can easily invert this to get

\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -\frac{1}{λ} & 2 \\ \frac{1}{λ} & -1 \end{pmatrix} \begin{pmatrix} s \\ f \end{pmatrix}.

Putting this in the engineer's formula for \vec{x} gives

\vec{x} = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} \begin{pmatrix} -\frac{1}{λ} & 2 \\ \frac{1}{λ} & -1 \end{pmatrix} \begin{pmatrix} s \\ f \end{pmatrix} = \begin{pmatrix} -\frac{1}{λ}\vec{e}_1 + \frac{1}{λ}\vec{e}_2 & \; 2\vec{e}_1 - \vec{e}_2 \end{pmatrix} \begin{pmatrix} s \\ f \end{pmatrix}.

Comparing to the nutritionist's formula for the same object \vec{x} we learn that

\vec{f}_1 = -\frac{1}{λ}\vec{e}_1 + \frac{1}{λ}\vec{e}_2  and  \vec{f}_2 = 2\vec{e}_1 - \vec{e}_2.

Rearranging these equations we find the change of basis matrix P from the engineer's basis to the nutritionist's basis:

\begin{pmatrix} \vec{f}_1 & \vec{f}_2 \end{pmatrix} = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} \begin{pmatrix} -\frac{1}{λ} & 2 \\ \frac{1}{λ} & -1 \end{pmatrix} =: \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} P.

We can also go the other direction, changing from the nutritionist's basis to the engineer's basis:

\begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} = \begin{pmatrix} \vec{f}_1 & \vec{f}_2 \end{pmatrix} \begin{pmatrix} λ & 2λ \\ 1 & 1 \end{pmatrix} =: \begin{pmatrix} \vec{f}_1 & \vec{f}_2 \end{pmatrix} Q.
Of course, we must have
Q = P^{-1},

(which is in fact how we constructed P in the first place). Finally, let's consider the very first linear systems problem, where you were given that there were 27 pieces of fruit in total and twice as many oranges as apples. In equations this says just

x + y = 27 and 2x − y = 0.
But we can also write this as a matrix system

MX = V

where

M := \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}, \qquad X := \begin{pmatrix} x \\ y \end{pmatrix}, \qquad V := \begin{pmatrix} 27 \\ 0 \end{pmatrix}.

Note that

\vec{x} = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} X.

Also let's call

\vec{v} := \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix} V.
Now the matrix M is the matrix of some linear transformation L in the basis of the engineers. Let's convert it to the basis of the nutritionists:

L\vec{x} = L\begin{pmatrix} \vec{f}_1 & \vec{f}_2 \end{pmatrix}\begin{pmatrix} s \\ f \end{pmatrix} = L\begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix}P\begin{pmatrix} s \\ f \end{pmatrix} = \begin{pmatrix} \vec{e}_1 & \vec{e}_2 \end{pmatrix}MP\begin{pmatrix} s \\ f \end{pmatrix}.

Note here that the linear transformation L acts on vectors -- these are the objects we have written with a vector sign on top of them. It does not act on columns of numbers!

We can easily compute MP and find

MP = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} -\frac{1}{λ} & 2 \\ \frac{1}{λ} & -1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\frac{3}{λ} & 5 \end{pmatrix}.

Note that P^{-1}MP is the matrix of L in the nutritionists' basis, but we don't need this quantity right now.
Thus the last task is to solve the system; let's solve for sugar and fruit. We need to solve

MP\begin{pmatrix} s \\ f \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\frac{3}{λ} & 5 \end{pmatrix}\begin{pmatrix} s \\ f \end{pmatrix} = \begin{pmatrix} 27 \\ 0 \end{pmatrix}.

This is solved immediately by forward substitution (the nutritionists' basis is nice since it directly gives f):

f = 27 and s = 45λ.
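A small numpy cross-check of this system (the value of λ is arbitrary here, chosen only so the matrices are numeric):

```python
# Solve the fruit/sugar system MP (s, f) = (27, 0) for an example lambda.
import numpy as np

lam = 1.0                       # e.g. one unit of sugar per apple
M = np.array([[1.0, 1.0],
              [2.0, -1.0]])
P = np.array([[-1.0 / lam, 2.0],
              [ 1.0 / lam, -1.0]])
V = np.array([27.0, 0.0])
s, f = np.linalg.solve(M @ P, V)
print(s, f)                     # 45.0 27.0   i.e. s = 45*lambda and f = 27
```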
G.66 Diagonalization: Diagonalizing Example
Let's diagonalize the matrix M from the previous example (Eigenvalues and Eigenvectors: 2 × 2 Example):

M = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}.

We found the eigenvalues and eigenvectors of M; our solution was

λ_1 = 5, v_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}  and  λ_2 = 2, v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
So we can diagonalize this matrix using the formula D = P^{-1}MP where P = (v_1, v_2). This means

P = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}  and  P^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}.

The inverse comes from the formula for inverses of 2 × 2 matrices:

\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, so long as ad − bc ≠ 0.

So we get:

D = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}.
But this doesn't really give any intuition into why this happens. Let's look at what happens when we apply this matrix D = P^{-1}MP to a vector v = \begin{pmatrix} x \\ y \end{pmatrix}. Notice that applying P translates v = \begin{pmatrix} x \\ y \end{pmatrix} into xv_1 + yv_2.
P^{-1}MP\begin{pmatrix} x \\ y \end{pmatrix} = P^{-1}M\begin{pmatrix} 2x + y \\ x - y \end{pmatrix}
= P^{-1}M\left[\begin{pmatrix} 2x \\ x \end{pmatrix} + \begin{pmatrix} y \\ -y \end{pmatrix}\right]
= P^{-1}\left[(x)M\begin{pmatrix} 2 \\ 1 \end{pmatrix} + (y)M\begin{pmatrix} 1 \\ -1 \end{pmatrix}\right]
= P^{-1}[(x)Mv_1 + (y)Mv_2]

Remember that we know what M does to v_1 and v_2, so we get

P^{-1}[(x)Mv_1 + (y)Mv_2] = P^{-1}[(xλ_1)v_1 + (yλ_2)v_2]
= (5x)P^{-1}v_1 + (2y)P^{-1}v_2
= (5x)\begin{pmatrix} 1 \\ 0 \end{pmatrix} + (2y)\begin{pmatrix} 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} 5x \\ 2y \end{pmatrix}.

Notice that multiplying by P^{-1} converts v_1 and v_2 back into \begin{pmatrix} 1 \\ 0 \end{pmatrix} and \begin{pmatrix} 0 \\ 1 \end{pmatrix} respectively. This shows us why D = P^{-1}MP should be the diagonal matrix:

D = \begin{pmatrix} λ_1 & 0 \\ 0 & λ_2 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}.
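A one-line numpy verification that conjugating M by P really produces the diagonal matrix of eigenvalues:

```python
# Check that P^{-1} M P is diagonal with the eigenvalues on the diagonal.
import numpy as np

M = np.array([[4.0, 2.0],
              [1.0, 3.0]])
P = np.array([[2.0, 1.0],
              [1.0, -1.0]])        # columns are the eigenvectors v1, v2
D = np.linalg.inv(P) @ M @ P
print(np.round(D, 10))             # [[5. 0.] [0. 2.]]
```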
G.67 Orthonormal Bases: Sine and Cosine Form All Orthonormal Bases for R2
We wish to find all orthonormal bases for the space R^2; they are {e_1^θ, e_2^θ} up to reordering, where

e_1^θ = \begin{pmatrix} \cos θ \\ \sin θ \end{pmatrix}, \qquad e_2^θ = \begin{pmatrix} -\sin θ \\ \cos θ \end{pmatrix},

for some θ ∈ [0, 2π). Now first we need to show that for a fixed θ the pair is orthogonal:

e_1^θ · e_2^θ = −\sin θ \cos θ + \cos θ \sin θ = 0.

Also we have

∥e_1^θ∥^2 = ∥e_2^θ∥^2 = \sin^2 θ + \cos^2 θ = 1,

and hence {e_1^θ, e_2^θ} is an orthonormal basis. To show that every orthonormal basis of R^2 is {e_1^θ, e_2^θ} for some θ, consider an orthonormal basis {b_1, b_2} and note that b_1 forms an angle φ with the vector e_1 (which is e_1^0). Thus b_1 = e_1^φ, and if b_2 = e_2^φ we are done; otherwise b_2 = −e_2^φ and it is the reflected version. However, we can do the same thing starting instead with b_2, and get b_2 = e_1^ψ and b_1 = e_2^ψ, since we have just interchanged two basis vectors, which corresponds to a reflection, which picks up a minus sign as in the determinant.
[Figure: the unit vectors e_1^θ = (cos θ, sin θ) and e_2^θ = (−sin θ, cos θ) drawn at angle θ from the horizontal axis.]
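A tiny numpy sanity check (θ picked arbitrarily) that the pair {e_1^θ, e_2^θ} is orthonormal:

```python
# Verify orthogonality and unit length for one choice of theta.
import numpy as np

theta = 0.7
e1 = np.array([np.cos(theta), np.sin(theta)])
e2 = np.array([-np.sin(theta), np.cos(theta)])
print(np.isclose(e1 @ e2, 0.0), np.isclose(e1 @ e1, 1.0), np.isclose(e2 @ e2, 1.0))
# True True True
```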
G.68 Orthonormal Bases: Hint for Question 2, Lecture 21
This video gives a hint for problem 2 in lecture 21. You are asked to consider an orthogonal basis {v1, v2, . . . vn}. Because this is a basis any v ∈ V can be uniquely expressed as
v = c^1 v_1 + c^2 v_2 + · · · + c^n v_n,
and the number n = dim V. Since this is an orthogonal basis, different vectors in the basis are orthogonal:

v_i · v_j = 0, \quad i ≠ j.
However, the basis is not orthonormal so we know nothing about the lengths of the basis vectors (save that they cannot vanish).
To complete the hint, let's use the dot product to compute a formula for c^1 in terms of the basis vectors and v. Consider

v_1 · v = c^1 v_1 · v_1 + c^2 v_1 · v_2 + · · · + c^n v_1 · v_n = c^1 v_1 · v_1.

Solving for c^1 (remembering that v_1 · v_1 ≠ 0) gives

c^1 = \frac{v_1 · v}{v_1 · v_1}.
This should get you started on this problem.
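To see the formula in action, here is a short numpy sketch with a made-up orthogonal (but not orthonormal) basis of R^3; the coefficient of v_1 is recovered by the dot product formula above:

```python
# Recover a coefficient with the dot product when the basis is orthogonal.
import numpy as np

v1 = np.array([1.0, 1.0, 0.0])      # an orthogonal basis of R^3
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])
v = 3.0 * v1 - 2.0 * v2 + 0.5 * v3  # a vector with known coefficients

c1 = (v1 @ v) / (v1 @ v1)
print(c1)                            # 3.0, the coefficient of v1
```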
G.69 Orthonormal Bases: Hint
This video gives a hint for problem 3 in lecture 21.

(a) Is the vector v^⊥ = v − \frac{u·v}{u·u}\,u in the plane P?

Remember that the dot product gives you a scalar not a vector, so if you think about this formula, \frac{u·v}{u·u} is a scalar, so this is a linear combination of v and u. Do you think it is in the span?
(b) What is the angle between v⊥ and u?
This part will make more sense if you think back to the dot product formulas you probably first saw in multivariable cal- culus. Remember that
u · v = ∥u∥∥v∥ cos(θ),
and in particular if they are perpendicular, θ = π/2 and cos(π/2) = 0, so you will get u·v = 0.
Now try to compute the dot product of u and v^⊥ to find ∥u∥∥v^⊥∥ cos(θ):

u·v^⊥ = u·\left(v − \frac{u·v}{u·u}\,u\right)
      = u·v − \frac{u·v}{u·u}\,(u·u)
Now you finish simplifying and see if you can figure out what θ has to be.
(c) Given your solution to the above, how can you find a third vector perpendicular to both u and v⊥?
Remember what other things you learned in multivariable calculus? This might be a good time to remind yourself what the cross product does.
(d) Construct an orthonormal basis for R3 from u and v.
If you did part (c) you can probably find 3 orthogonal vectors to make an orthogonal basis. All you need to do to turn this into an orthonormal basis is turn those into unit vectors.

(e) Test your abstract formulae starting with

u = (1, 2, 0) and v = (0, 1, 1).
Try it out, and if you get stuck try drawing a sketch of the vectors you have.
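If you want to check your answers, here is a numpy sketch of parts (a) through (d) using the vectors from part (e); the cross product supplies the third direction, as suggested above:

```python
# Build an orthonormal basis of R^3 from u and v.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])

v_perp = v - (u @ v) / (u @ u) * u     # component of v perpendicular to u
w = np.cross(u, v_perp)                # a third vector perpendicular to both
basis = [x / np.linalg.norm(x) for x in (u, v_perp, w)]
print(np.round(np.array(basis) @ np.array(basis).T, 10))   # identity matrix: orthonormal
```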
G.70 Gram-Schmidt and Orthogonal Complements: 4× 4 Gram Schmidt Example
Let's do an example of how to "Gram-Schmidt" some vectors in R^4. Given the following vectors

v_1 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 3 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad and \quad v_4 = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 2 \end{pmatrix},

we start with v_1:

v_1^⊥ = v_1 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}.

Now the work begins:

v_2^⊥ = v_2 − \frac{(v_1^⊥ · v_2)}{∥v_1^⊥∥^2} v_1^⊥ = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} − \frac{1}{1}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}.

This gets a little longer with every step:

v_3^⊥ = v_3 − \frac{(v_1^⊥ · v_3)}{∥v_1^⊥∥^2} v_1^⊥ − \frac{(v_2^⊥ · v_3)}{∥v_2^⊥∥^2} v_2^⊥ = \begin{pmatrix} 3 \\ 0 \\ 1 \\ 0 \end{pmatrix} − \frac{0}{1}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} − \frac{1}{1}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ 0 \\ 0 \end{pmatrix}.

This last step requires subtracting off a projection term of the form \frac{u·v}{u·u}\,u for each of the previously defined basis vectors:

v_4^⊥ = v_4 − \frac{(v_1^⊥ · v_4)}{∥v_1^⊥∥^2} v_1^⊥ − \frac{(v_2^⊥ · v_4)}{∥v_2^⊥∥^2} v_2^⊥ − \frac{(v_3^⊥ · v_4)}{∥v_3^⊥∥^2} v_3^⊥
      = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 2 \end{pmatrix} − \frac{1}{1}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} − \frac{0}{1}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} − \frac{3}{9}\begin{pmatrix} 3 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 2 \end{pmatrix}.

Now v_1^⊥, v_2^⊥, v_3^⊥, and v_4^⊥ are an orthogonal basis. Notice that even with very, very nice looking vectors we end up having to do quite a bit of arithmetic. This is a good reason to use programs like matlab to check your work.
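Since the suggestion is to check this kind of arithmetic with software, here is a short numpy version of the same Gram-Schmidt loop (projections only, no normalization), applied to the four vectors above:

```python
# Plain Gram-Schmidt: subtract the projection onto each earlier vector.
import numpy as np

vs = [np.array([0.0, 1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 1.0, 0.0]),
      np.array([3.0, 0.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 0.0, 2.0])]

orthogonal = []
for v in vs:
    for u in orthogonal:
        v = v - (u @ v) / (u @ u) * u
    orthogonal.append(v)

print(np.array(orthogonal))
# rows: (0,1,0,0), (0,0,1,0), (3,0,0,0), (0,0,0,2)
```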
G.71 Gram-Schmidt and Orthogonal Complements: Overview
This video depicts the ideas of a subspace sum, a direct sum and an orthogonal complement in R^3. First, let's start with the subspace sum. Remember that even if U and V are subspaces, their union U ∪ V is usually not a subspace. However, the span of their union certainly is and is called the subspace sum
U+V =span(U∪V).
You need to be aware that this is a sum of vector spaces (not
vectors). A picture of this is a pair of planes in R3:
Here U + V = R3.
Next let's consider a direct sum. This is just the subspace sum
for the case when U ∩V = {0}. For that we can keep the plane U but must replace V by a line:
Taking a direct sum we again get the whole space, U ⊕ V = R3.
Now we come to an orthogonal complement. There is not really a notion of subtraction for subspaces but the orthogonal complement comes close. Given U it provides a space U⊥ such that the direct
sum returns the whole space:
U ⊕ U^⊥ = R^3.
The orthogonal complement U⊥ is the subspace made from all vectors perpendicular to any vector in U. Here, we need to just tilt the line V above until it hits U at a right angle:
Notice, we can apply the same operation to U⊥ and just get U back again, i.e.
U⊥⊥ =U.
G.72 Gram-Schmidt and Orthogonal Complements: QR Decomposition Example
We can alternatively think of the QR decomposition as performing the Gram-Schmidt procedure on the column space, the vector space of the column vectors of the matrix, of the matrix M. The re- sulting orthonormal basis will be stored in Q and the negative of the coefficients will be recorded in R. Note that R is upper triangular by how Gram-Schmidt works. Here we will explicitly do an example with the matrix
M = \begin{pmatrix} m_1 & m_2 & m_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 & -1 \\ 0 & 1 & 2 \\ -1 & 1 & 1 \end{pmatrix}.

First we normalize m_1 to get m'_1 = \frac{m_1}{∥m_1∥}, where ∥m_1∥ = r_1 = \sqrt{2}, which gives the decomposition

Q_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} & 1 & -1 \\ 0 & 1 & 2 \\ -\frac{1}{\sqrt{2}} & 1 & 1 \end{pmatrix}, \qquad R_1 = \begin{pmatrix} \sqrt{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

Next we find

t_2 = m_2 − (m'_1 · m_2)\, m'_1 = m_2 − r_{12}\, m'_1 = m_2 − 0\, m'_1,

noting that

m'_1 · m'_1 = ∥m'_1∥^2 = 1

and ∥t_2∥ = r_2 = \sqrt{3}, and so we get m'_2 = \frac{t_2}{∥t_2∥} with the decomposition

Q_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{3}} & -1 \\ 0 & \frac{1}{\sqrt{3}} & 2 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{3}} & 1 \end{pmatrix}, \qquad R_2 = \begin{pmatrix} \sqrt{2} & 0 & 0 \\ 0 & \sqrt{3} & 0 \\ 0 & 0 & 1 \end{pmatrix}.

Finally we calculate

t_3 = m_3 − (m'_1 · m_3)\, m'_1 − (m'_2 · m_3)\, m'_2
    = m_3 − r_{13}\, m'_1 − r_{23}\, m'_2 = m_3 + \sqrt{2}\, m'_1 − \frac{2}{\sqrt{3}}\, m'_2,

again noting m'_2 · m'_2 = ∥m'_2∥^2 = 1, and let m'_3 = \frac{t_3}{∥t_3∥} where ∥t_3∥ = r_3 = \frac{2\sqrt{2}}{\sqrt{3}}. Thus we get our final M = QR decomposition as

Q = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{6}} \\ 0 & \frac{1}{\sqrt{3}} & \frac{2}{\sqrt{6}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{6}} \end{pmatrix}, \qquad R = \begin{pmatrix} \sqrt{2} & 0 & -\sqrt{2} \\ 0 & \sqrt{3} & \frac{2}{\sqrt{3}} \\ 0 & 0 & \frac{2\sqrt{2}}{\sqrt{3}} \end{pmatrix}.
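You can compare against numpy's built-in QR routine; note that the signs of the columns of Q (and the corresponding rows of R) may differ from the hand computation, since the factorization is only unique up to those signs:

```python
# Cross-check the QR factorization with numpy.
import numpy as np

M = np.array([[ 1.0, 1.0, -1.0],
              [ 0.0, 1.0,  2.0],
              [-1.0, 1.0,  1.0]])
Q, R = np.linalg.qr(M)
print(np.allclose(Q @ R, M), np.allclose(Q.T @ Q, np.eye(3)))   # True True
print(np.round(R, 3))   # upper triangular; diagonal ~ (sqrt(2), sqrt(3), 2*sqrt(2/3)) up to sign
```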
G.73 Gram-Schmidt and Orthogonal Complements: Hint for Problem 1
This video shows you a way to solve problem 1 that's different to the method described in the Lecture. The first thing is to think of

M = \begin{pmatrix} 1 & 0 & 2 \\ -1 & 2 & 0 \\ -1 & -2 & 2 \end{pmatrix}

as a set of 3 vectors

v_1 = \begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 2 \\ -2 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 2 \\ 0 \\ 2 \end{pmatrix}.
Then you need to remember that we are searching for a decomposition
M = QR
where Q is an orthogonal matrix. Thus the upper triangular matrix R = Q^T M and Q^T Q = I. Moreover, orthogonal matrices perform rotations. To see this compare the inner product u · v = u^T v of vectors u and v with that of Qu and Qv:

(Qu) · (Qv) = (Qu)^T (Qv) = u^T Q^T Q v = u^T v = u · v.
Since the dot product doesn’t change, we learn that Q does not change angles or lengths of vectors.
Now, here’s an interesting procedure: rotate v1 , v2 and v3 such that v1 is along the x-axis, v2 is in the xy-plane. Then if you put these in a matrix you get something of the form
\begin{pmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{pmatrix}
which is exactly what we want for R! Moreover, the vector
\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}
is the rotated v_1 so must have length ∥v_1∥ = \sqrt{3}. Thus a = \sqrt{3}.
The rotated v_2 is

\begin{pmatrix} b \\ d \\ 0 \end{pmatrix}

and must have length ∥v_2∥ = 2\sqrt{2}. Also the dot product between

\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix} and \begin{pmatrix} b \\ d \\ 0 \end{pmatrix}
is ab and must equal v_1 · v_2 = 0. (That v_1 and v_2 were orthogonal is just a coincidence here.) Thus b = 0. So now we know most of the matrix R:

R = \begin{pmatrix} \sqrt{3} & 0 & c \\ 0 & 2\sqrt{2} & e \\ 0 & 0 & f \end{pmatrix}.
You can work out the last column using the same ideas. Thus it only remains to compute Q from
Q = MR^{-1}.
G.74 Diagonalizing Symmetric Matrices: 3 × 3 Example
Let's diagonalize the matrix

M = \begin{pmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 5 \end{pmatrix}.
If we want to diagonalize this matrix, we should be happy to see that it is symmetric, since this means we will have real eigenval- ues, which means factoring won’t be too hard. As an added bonus if we have three distinct eigenvalues the eigenvectors we find will be orthogonal, which means that the inverse of the matrix P will be easy to compute. We can start by finding the eigenvalues of this
det\begin{pmatrix} 1-λ & 2 & 0 \\ 2 & 1-λ & 0 \\ 0 & 0 & 5-λ \end{pmatrix} = (1-λ)\det\begin{pmatrix} 1-λ & 0 \\ 0 & 5-λ \end{pmatrix} − 2\det\begin{pmatrix} 2 & 0 \\ 0 & 5-λ \end{pmatrix} + 0\det\begin{pmatrix} 2 & 1-λ \\ 0 & 0 \end{pmatrix}
= (1−λ)(1−λ)(5−λ) + (−2)(2)(5−λ) + 0
= (1 − 2λ + λ^2)(5−λ) + (−2)(2)(5−λ)
= ((1 − 4) − 2λ + λ^2)(5−λ)
= (−3 − 2λ + λ^2)(5−λ)
= (λ + 1)(λ − 3)(5−λ).
So we get λ = −1, 3, 5 as eigenvalues. First find v_1 for λ_1 = −1:

(M + I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 & 2 & 0 \\ 2 & 2 & 0 \\ 0 & 0 & 6 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0

implies that 2x + 2y = 0 and 6z = 0, which means any multiple of

v_1 = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}

is an eigenvector with eigenvalue λ_1 = −1. Now for v_2 with λ_2 = 3:

(M − 3I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -2 & 2 & 0 \\ 2 & -2 & 0 \\ 0 & 0 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0,

and we can find that

v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}

satisfies −2x + 2y = 0, 2x − 2y = 0 and 4z = 0. Now for v_3 with λ_3 = 5:

(M − 5I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -4 & 2 & 0 \\ 2 & -4 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.

Now we want v_3 to satisfy −4x + 2y = 0 and 2x − 4y = 0, which imply x = y = 0; since there are no restrictions on the z coordinate, we can take

v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
Notice that the eigenvectors form an orthogonal basis. We can create an orthonormal basis by rescaling to make them unit vectors. This will help us because if P = [v_1, v_2, v_3] is created from orthonormal vectors then P^{-1} = P^T, which means computing P^{-1} should be easy. So let's say

v_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \\ 0 \end{pmatrix}, \quad and \quad v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},

so we get

P = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad and \quad P^{-1} = \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix}.

So when we compute D = P^{-1}MP we'll get

\begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 5 \end{pmatrix}\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}.
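A short numpy check of this diagonalization; because M is symmetric and the columns of P are orthonormal, P^{-1} is just P^T:

```python
# Verify P is orthogonal and that P^T M P is diagonal.
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 5.0]])
s = 1.0 / np.sqrt(2.0)
P = np.array([[ s,  s, 0.0],
              [-s,  s, 0.0],
              [0.0, 0.0, 1.0]])             # columns: unit eigenvectors v1, v2, v3
print(np.allclose(P.T, np.linalg.inv(P)))   # True: P is orthogonal
print(np.round(P.T @ M @ P, 10))            # diag(-1, 3, 5)
```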
G.75 Diagonalizing Symmetric Matrices: Hints for Problem 1
For part (a), we can consider any complex number z as being a vector in R^2, where complex conjugation corresponds to the matrix

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

Can you describe z\bar{z} in terms of ∥z∥? For part (b), think about what values a ∈ R can take if a = −a. Part (c), just compute it and look back at part (a).
For part (d), note that x^†x is just a number, so we can divide by it. Parts (e) and (f) follow right from definitions. For part (g), first notice that every row vector is the (unique) transpose of a column vector, and also think about why (AA^T)^T = AA^T for any matrix A. Additionally you should see that x^T = x^† and mention this. Finally for part (h), show that

\frac{x^† M x}{x^† x} = \frac{(x^† M x)^T}{x^† x}

and reduce each side separately to get λ = \bar{λ}.
G.76 Kernel, Range, Nullity, Rank: Invertibility Con- ditions
Here I am going to discuss some of the conditions on the invertibility of a matrix stated in Theorem 24.6. Condition 1 states that X = M^{-1}V uniquely, which is clearly equivalent to 4. Similarly, every square matrix M uniquely corresponds to a linear transformation L: R^n → R^n, so condition 3 is equivalent to condition 1.
Condition 6 implies 4 because the adjoint constructs the inverse, but the converse is not so obvious. For the converse (4 implying 6), we refer back to the proofs in Chapters 18 and 19. Note that if det M = 0, there exists an eigenvalue of M equal to 0, which implies M is not invertible. Thus condition 8 is equivalent to conditions 4, 5, 9, and 10.
The map M is injective if and only if it has trivial null space; however, eigenvectors with eigenvalue 0 form a basis for the null space. Hence conditions 8 and 14 are equivalent, and 14, 15, and 16 are equivalent by the Dimension Formula (also known as the Rank-Nullity Theorem).
Now conditions 11, 12, and 13 are all equivalent by the defi- nition of a basis. Finally if a matrix M is not row-equivalent to the identity matrix, then det M = 0, so conditions 2 and 8 are equivalent.
G.77 Kernel, Range, Nullity, Rank: Hint for 1
Let's work through this problem.
Let L : V → W be a linear transformation. Show that ker L = {0V }
if and only if L is one-to-one:
1. First, suppose that ker L = {0V }. Show that L is one-to-one.
Remember what one-to-one means: it means whenever L(x) = L(y) we can be certain that x = y. While this might seem like a weird thing to require, this statement really means that each vector in the range is the image of a unique vector in the domain.
We know we have the one-one property, but we also don't want to forget some of the more basic properties of linear transformations, namely that they are linear, which means L(ax + by) = aL(x) + bL(y) for scalars a and b.
What if we rephrase the one-to-one property to say that L(x) − L(y) = 0 implies x − y = 0? Can we connect that to the statement that ker L = {0_V}? Remember that if L(v) = 0 then v ∈ ker L = {0_V}.
2. Now, suppose that L is one-to-one. Show that ker L = {0V }. That is, show that 0V is in kerL, and then show that there are no other vectors in kerL.
What would happen if we had a nonzero kernel? If we had some vector v with L(v) = 0 and v ̸= 0, we could try to show that this would contradict the given that L is one-one. If we found x and y with L(x)=L(y), then we know x=y. But if L(v)=0 then L(x) + L(v) = L(y). Does this cause a problem?
G.78 Least Squares: Hint for Problem 1
Let's work through this problem. Let L : U → V be a linear transformation. Suppose v ∈ L(U) and you have found a vector u_{ps} that obeys L(u_{ps}) = v.
Explain why you need to compute kerL to describe the solution space of the linear system L(u) = v.
Remember the property of linearity that comes along with any linear transformation: L(ax + by) = aL(x) + bL(y) for scalars a and b. This allows us to break apart and recombine terms inside the transformation.
Now suppose we have a solution x where L(x) = v. If we have a vector y ∈ ker L then we know L(y) = 0. If we add the equations together, L(x) + L(y) = L(x + y) = v + 0, we get another solution for free. Now we have two solutions; is that all?
G.79 Least Squares: Hint for Problem 2
For the first part, what is the transpose of a 1 × 1 matrix? For the other two parts, note that v · v = v^T v. Can you express this in terms of ∥v∥? Also, you need the trivial kernel only for the last part; just think about the null space of M. It might help to substitute w = Mx.
H Student Contributions
Here is a collection of useful material created by students. The copyright to this work belongs to them as does responsibility for the correctness of any information therein.
4D TIC TAC TOE by Davis Shih.
A hint for review problem 1, lecture 2 by Ashley Coates.
A hint for review problem 1, lecture 12 by Philip Digiglio.
Some cartoons depicting matrix multiplication by Asun Oka.
An eigenvector example for lecture 18 by Ashley Coates.
I Other Resources
Here are some suggestions for other places to get help with Linear Algebra:
Strang’s MIT Linear Algebra Course. Videos of lectures and more:
http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/
The Khan Academy has thousands of free videos on a multitude of topics including linear algebra:
http://www.khanacademy.org/
The Linear Algebra toolkit: http://www.math.odu.edu/∼bogacki/lat/
Carter, Tapia and Papakonstantinou’s online linear algebra resource http://ceee.rice.edu/Books/LA/index.html
S.O.S. Mathematics Matrix Algebra primer: http://www.sosmath.com/matrix/matrix.html
The numerical methods guy on youtube. Lots of worked examples: http://www.youtube.com/user/numericalmethodsguy
Interactive Mathematics. Lots of useful math lessons on many topics: http://www.intmath.com/
Stat Trek. A quick matrix tutorial for statistics students: http://stattrek.com/matrix-algebra/matrix.aspx
Wolfram's Mathworld. An online mathematics encyclopædia:
http://mathworld.wolfram.com/
Paul Dawkin’s online math notes http://tutorial.math.lamar.edu/
Math Doctor Bob: http://www.youtube.com/user/MathDoctorBob?feature=watch
Some pictures of how to rotate objects with matrices: http://people.cornellcollege.edu/dsherman/visualize-matrix.html
xkcd. Geek jokes:
http://xkcd.com/184/
See the bridge actually fall down:
http://anothermathgeek.hubpages.com/hub/What-the-Heck-are-Eigenvalues-and-Eigenvectors
J List of Symbols
∈ "Is an element of".
∼ "Is equivalent to", see equivalence relations. Also, "is row equivalent to" for matrices.
R The real numbers.
I_n The n × n identity matrix.
P^F_n The vector space of polynomials of degree at most n with coefficients in the field F.
Index
Action, 308
Algebra, 261
Angle between vectors, 47 Anti-symmetric matrix, 79 Augmented matrix 2 × 2, 19
Back substitution, 89 Basis, 139
concept of, 124
example of, 136 Bit matrices, 84
Block matrix, 72 Bounded operator, 258
Calculus Superhero, 208
Canonical basis, see also Standard basis, 348
Captain Conundrum, 11, 51, 207 Cauchy–Schwarz inequality, 48 Change of basis, 166
Change of basis matrix, 167 Characteristic polynomial, 114, 156,
159 Closure, 125
additive, 53
multiplicative, 53 Cofactor, 120
Column vector, 63 Components of a vector, 148 Conic sections, 237 Conjugation, 169
Cramer’s rule, 122
Determinant, 98
2 × 2 matrix, 96 3 × 3 matrix, 96
Diagonal matrix, 66 Diagonalizable, 165 Diagonalization, 165
concept of, 154 Dimension, 139
concept of, 61
notion of, 124 Dimension formula, 200 Direct sum, 186
Dot product, 47
Dual space, 258
Dual vector space, 146, 249 Dyad, 174
Eigenspace, 162
Eigenvalue, 154, 159
multiplicity of, 160 Eigenvector, 154, 159 Einstein, Albert, 41 Elementary matrix, 100
swapping rows, 101 Elementary row operations, 27 Elite NASA engineers, 234 Equivalence relation, 171 Euclidean length, 46
Even permutation, 97 Expansion by minors, 116
Fibonacci numbers, 253
Field, 259
Forward substitution, 89 Fundamental theorem of algebra, 159
Galois, 56, 261
Gauss–Jordan elimination, 27 Gaussian elimination, 27
General solution, 39
Golden ratio, 239
Goofing up, 82
Gram–Schmidt orthogonalization pro-
cedure, 183 Graph theory, 64
Group, 258
Homogeneous solution an example, 38
Homogeneous system, 39 Hyperplane, 34, 46
Identity matrix, 67 2 × 2, 19
Inner product, 173 Invariant direction, 154 Inverse matrix
concept of, 77 Invertible, 79
Jordan cell, 172, 356
Kernel, 198
Kirchoff’s laws, 231 Kronecker delta, 173, 258
Law of Cosines, 46 Least squares, 206
solutions, 206 Length of a vector, 47 Linear
function, 37
Linear combination, 162
Linear dependence theorem, 133 Linear function, 58
Linear independence
concept of, 124 Linear System
concept of, 14 Linear Transformation
concept of, 15
Linear transformation, 58 Linearity, 58
Linearity property, 37
Linearly dependent, 131 Linearly independent, 131 Lower triangular matrix, 88 Lower unit triangular matrix, 90 LU decomposition, 88
Magnitude, see also Length of a vector
Matrix, 63 diagonal of, 66
entries of, 63
Matrix of a linear transformation, 147 Minimal spanning set, 136
Minor, 116
Multiplicative function, 116
Newton’s Principiæ, 236 Non-leading variables, 35 Nonsingular, 79
Norm, see also Length of a vector Nullity, 200
Odd permutation, 97 Orthogonal, 173
Orthogonal basis, 175 Orthogonal complement, 187 Orthogonal decomposition, 181 Orthogonal matrix, 177 Orthonormal basis, 175
Outer product, 173 Parallelepiped, 122
Particular solution, 39
an example, 38 Permutation, 97
Inversion number, 103 Length, 105
Simple transposition, 104
Permutation matrices, 171 “Perp”, 187
Pivot, 22
Projection, 158
QR decomposition, 184 Queen Quandary, 239
Random, 189
Rank, 200
Recursion relation, 239 Reduced row echelon form, 21 Ring, 260
Row equivalence, 21
Row vector, 63
Scalar multiplication n-vectors, 44
Trace, 75
Transpose, 67 Triangle inequality, 49
Upper triangular matrix, 88
Vandermonde determinant, 233 Vector
in R2, 13 Vector addition
n-vectors, 44 Vector space, 53
finite dimensional, 139
Zero vector
n-vectors, 44
Sign function, 98
Similar matrices, 169
Skew-symmetric matrix, see Anti-symmetric
matrix Solution set, 34
set notation, 35 Span, 126
Square matrices, 73 Square matrix, 66 Standard basis, 142, 151 Subspace, 124
notion of, 124, 162 Subspace theorem, 125 Sum of vectors spaces, 185 Symmetric matrix, 67, 191