Solutions to Selected Exercises
Problem Set 1.2, page 9
1. The lines intersect at (x, y) = (3, 1). Then 3(column 1) + 1(column 2) = (4, 4).
3. These “planes” intersect in a line in four-dimensional space. The fourth plane normally intersects that line in a point. An inconsistent equation like u + w = 5 leaves no solution (no intersection).
5. The two points on the plane are (1, 0, 0, 0) and (0, 1, 0, 0).
7. Solvable for (3, 5, 8) and (1, 2, 3); not solvable for b = (3, 5, 7) or b = (1, 2, 2).
9. Column 3 = 2(column 2) − column 1. If b = (0, 0, 0), then (u, v, w) = (c, −2c, c).
11. Both a = 2 and a = -2 give a line of solutions. All other a give x = 0, y = 0.
13. The row picture has two lines meeting at (4, 2). The column picture has 4(1, 1) + 2(-2, 1) = 4(column 1) + 2(column 2) = right-hand side (0, 6).
15. The row picture shows four lines. The column picture is in four-dimensional space. No solution unless the right-hand side is a combination of the two columns.
17. If x, y, z satisfy the first two equations, they also satisfy the third equation. The line L of solutions contains v = (1, 1, 0), w = (2, 1, 1), and u = (1/2)v + (1/2)w, and all combinations cv + dw with c + d = 1.
19. Column 3 = column 1; solutions (x, y, z) = (1, 1, 0) or (0, 1, 1), and you can add any multiple of (−1, 0, 1); b = (4, 6, c) needs c = 10 for solvability.
21. The second plane and row 2 of the matrix and all columns of the matrix are changed. The solution is not changed.
23. u = 0, v = 0; w = 1, because 1(column 3) = b.
Problem Set 1.3, page 15
1. Multiply by ℓ = 10/2 = 5, and subtract to find 2x + 3y = 1 and −6y = 6. The pivots are 2 and −6.
3. Subtract −1/2 times equation 1 (or add 1/2 times equation 1). The new second equation is 3y = 3. Then y = 1 and x = 5. If the right-hand side changes sign, so does the solution: (x, y) = (−5, −1).
5. 6x + 4y is 2 times 3x + 2y. There is no solution unless the right-hand side is 2 · 10 = 20. Then all points on the line 3x + 2y = 10 are solutions, including (0, 5) and (4, −1).
7. If a = 2, elimination must fail. The equations have no solution. If a = 0, elimination stops for a row exchange. Then 3y = -3 gives y = -1 and 4x + 6y = 6 gives
x = 3.
9. 6x – 4y is 2 times (3x – 2y). Therefore, we need b2 = 2b1. Then there will be infinitely many solutions. The columns (3, 6) and (-2, -4) are on the same line.
11. Elimination gives first 2x − 3y = 3, y + z = 1, 2y − 3z = 2, and then the triangular system 2x − 3y = 3, y + z = 1, −5z = 0, so x = 3, y = 1, z = 0. The steps: subtract 2 × row 1 from row 2, subtract 1 × row 1 from row 3, subtract 2 × row 2 from row 3.
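The arithmetic of that last elimination step is easy to mirror in code; a minimal sketch (numpy is assumed, and the system is the one quoted in Problem 11):

```python
import numpy as np

# the system of Problem 11 after the first two eliminations:
# 2x - 3y = 3,  y + z = 1,  2y - 3z = 2
A = np.array([[2., -3., 0.],
              [0.,  1., 1.],
              [0.,  2., -3.]])
b = np.array([3., 1., 2.])

# last step: subtract 2 x row 2 from row 3, giving -5z = 0
A[2] -= 2 * A[1]
b[2] -= 2 * b[1]

print(np.linalg.solve(A, b))   # [3. 1. 0.]  ->  x = 3, y = 1, z = 0
```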
13. The second pivot position will contain -2 – b. If b = -2, we exchange with row 3. If b = -1 (singular case), the second equation is -y – z = 0. A solution is (1, 1, -1).
15. If row 1 = row 2, then row 2 is zero after the first step; exchange the zero row with row 3 and there is no third pivot. If column 1 = column 2, there is no second pivot.
17. Row 2 becomes 3y – 4z = 5, then row 3 becomes (q + 4)z = t – 5. If q = -4, the system is singular – no third pivot. Then, if t = 5, the third equation is 0 = 0. Choosing z = 1, the equation 3y – 4z = 5 gives y = 3 and equation 1 gives x=-9.
19. The system is singular if row 3 is a combination of rows 1 and 2. From the end view, the three planes form a triangle. This happens if rows 1+ 2 = row 3 on the left-hand side but not the right-hand side: for example, x + y + z = 0, x – 2y – z = 1,
2x − y = 9. No two planes are parallel, but still no solution.
21. The fifth pivot is 6/5. The nth pivot is (n + 1)/n.
23. Triangular system u + v + w = 2, 2v + 2w = −2, 2w = 2. Solution u = 3, v = −2, w = 1.
25. (u, v, w) = (3/2, 1/2, −3). A change to +1 would make the system singular (2 equal columns).
27. a = 0 requires a row exchange, but the system is nonsingular: a = 2 makes it singular (one pivot, infinity of solutions); a = -2 makes it singular (one pivot, no solution).
29. The second term bc + ad equals (a + b)(c + d) − ac − bd (only 1 additional multiplication).
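Problem 29 is the classic trick for computing a complex product with three real multiplications instead of four; a two-line check (purely illustrative):

```python
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i
a, b, c, d = 3.0, 4.0, 5.0, 6.0
ac, bd = a * c, b * d                  # two multiplications
cross = (a + b) * (c + d) - ac - bd    # one more multiplication gives ad + bc
print(cross, a * d + b * c)            # both print 38.0
```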
31. Elimination fails for a = 2 (equal columns), a = 4 (equal rows), a = 0 (zero column).
Problem Set 1.4, page 26
1. The products are (17, 4), (5, −2), and (4, 17). With sides (2, 1) and (0, 3), the parallelogram goes to (2, 4).
3. Inner products 54 and 0; column times row gives [3 5 1; −6 −10 −2; 21 35 7].
5. Ax = (0, 0, 0), so x = (2, 1, 1) is a solution; other solutions are cx = (2c, c, c).
7. Examples: diagonal [1 0 0; 0 2 0; 0 0 7], symmetric [1 3 4; 3 2 0; 4 0 7], triangular [1 3 4; 0 2 0; 0 0 7], skew-symmetric [0 3 4; −3 0 0; −4 0 0].
9. (a) a11. (b) ℓi1 = ai1/a11. (c) The new aij is aij − (ai1/a11)a1j. (d) The second pivot is a22 − (a21/a11)a12.
11. The coefficients of rows of B are 2, 1, 4 from A. The first row of AB is [6 3].
0 1 ],B=[g jc=[? ]D=AE=F=[ 13. A= [1
15. AB1 = B1A gives b = c = 0. AB2 = B2A gives a = d. So A = aI.
17. A(A + B) + B(A + B), (A + B)(B + A), A2 + AB + BA + B2 always equal
(A + B)2.
19. [a b; c d][p q; r s] = [ap + br, aq + bs; cp + dr, cq + ds].
21. The last product is the zero matrix.
23. E32E21b = (1, -5, -35) but E21E32b = (1, -5, 0). Then row 3 feels no effect from row 1.
25. Changing a33 from 7 to 11 will change the third pivot from 5 to 9. Changing a33 from 7 to 2 will change the pivot from 5 to no pivot.
27. To reverse E31, add 7 times row 1 to row 3. The matrix is R31 = [1 0 0; 0 1 0; 7 0 1].
29. E13 = [1 0 1; 0 1 0; 0 0 1]; E31 = [1 0 0; 0 1 0; 1 0 1]; E13E31 = [2 0 1; 0 1 0; 1 0 1]. Test on the identity matrix!
31. E21 has ℓ21 = −1/2, E32 has ℓ32 = −2/3, E43 has ℓ43 = −3/4; otherwise the E's match I.
33. a + b + c = 4, a + 2b + 4c = 8, a + 3b + 9c = 14 gives a = 2, b = 1, c = 1.
35. (a) Each column is E times a column of B. 12124
(b) EB = [1
37. (row 3) · x is the sum of a3j xj, and (A²)11 = (row 1) · (column 1) = the sum of a1j aj1.
39. BA = 3I is 5 by 5, AB = 5I is 3 by 3, ABD = 5D is 3 by 1; DBA: No, A(B + C): No.
41. (a) B = 4I. (b) B = 0. (c) B = [0 0 1; 0 1 0; 1 0 0]. (d) Every row of B is 1, 0, 0.
43. (a) mn (every entry). (b) mnp. (c) n³ (this is n² dot products).
45. Rows of EB are combinations of the rows of B, so they are all multiples of [1 2 4].
47. A times B can be done a block at a time: A[B1 B2] = [AB1 AB2].
49. The (2, 2) block S = D − CA⁻¹B is the Schur complement: blocks take the place of the numbers in d − (cb/a).
51. A times X = [x1 x2 x3] will be the identity matrix I = [Ax1 Ax2 Ax3].
53. [a + b  a + b; c + d  c + d] agrees with [a + c  b + d; a + c  b + d] when b = c and a = d.
55. 2x + 3y + z + 5t = 8 is Ax = b with the 1 by 4 matrix A = [2 3 1 5]. The solutions x fill a 3D “plane” in four dimensions.
57. The dot product [1 4 5][x; y; z] = (1 by 3)(3 by 1) is zero for points (x, y, z) on the plane x + 4y + 5z = 0 in three dimensions. The columns of A are one-dimensional vectors.
59. A * v = [3 4 5f and v’ * v = 50; v * A gives an error message.
61. M = [8 1 6; 3 5 7; 4 9 2]; every 3 by 3 magic matrix has the form [5+u, 5−u+v, 5−v; 5−u−v, 5, 5+u+v; 5+v, 5+u−v, 5−u]. M3(1, 1, 1) = (15, 15, 15); M4(1, 1, 1, 1) = (34, 34, 34, 34) because the numbers 1 to 16 add to 136, which is 4(34).
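The row sums quoted for M3 and M4 are quick to verify; the 4 by 4 square below is Duerer's magic square, an example with the numbers 1 to 16 and not necessarily the one intended in the exercise:

```python
import numpy as np

M3 = np.array([[8, 1, 6],
               [3, 5, 7],
               [4, 9, 2]])
print(M3 @ np.ones(3))        # [15. 15. 15.]

M4 = np.array([[16,  3,  2, 13],
               [ 5, 10, 11,  8],
               [ 9,  6,  7, 12],
               [ 4, 15, 14,  1]])
print(M4 @ np.ones(4))        # [34. 34. 34. 34.]; 1 + ... + 16 = 136 = 4(34)
```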
Problem Set 1.5, page 39
1. U is nonsingular when no entry on the main diagonal is zero.
3. [1 0 0; 2 1 0; −1 −1 1][1 0 0; −2 1 0; −1 1 1] = [1 0 0; 0 1 0; 0 0 1]; also [1 0 0; −2 1 0; −1 1 1][1 0 0; 2 1 0; −1 −1 1] = I. In general (E⁻¹F⁻¹G⁻¹)(GFE) = E⁻¹F⁻¹FE = E⁻¹E = I; also (GFE)(E⁻¹F⁻¹G⁻¹) = I.
5. LU = [1 0 0; 0 1 0; 3 0 1][2 3 3; 0 5 7; 0 0 −1]; after elimination, Ux = c reads [2 3 3; 0 5 7; 0 0 −1][u; v; w] = [2; 2; −1].
7. FGH = [1 0 0 0; 2 1 0 0; 0 2 1 0; 0 0 2 1]; HGF = [1 0 0 0; 2 1 0 0; 4 2 1 0; 8 4 2 1].
9. (a) Nonsingular when d1d2d3 ≠ 0. (b) Suppose d3 ≠ 0. Solve Lc = b going downward: it gives c = (0, 0, 1). Then [d1 −d1 0; 0 d2 −d2; 0 0 d3][u; v; w] = [0; 0; 1] gives x = (1/d3, 1/d3, 1/d3).
11. Lc = b going downward gives c; then Ux = c going upward gives x.
13. One permutation exchanges rows 2 and 3; the other exchanges rows 1 and 2.
15. PA = LDU: [0 1 0; 1 0 0; 0 0 1][0 1 1; 1 0 1; 2 3 4] = [1 0 0; 0 1 0; 2 3 1] diag(1, 1, −1) [1 0 1; 0 1 1; 0 0 1].
Also PA = LDU: [1 0 0; 0 0 1; 0 1 0][1 2 1; 2 4 2; 1 1 1] = [1 0 0; 1 1 0; 2 0 1] diag(1, −1, 0) [1 2 1; 0 1 0; 0 0 1].
17. L becomes [1 0 0; 1 1 0; 2 0 1]. MATLAB and other codes use PA = LU.
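The PA = LU factorization mentioned in Problem 17 is also available in SciPy; a sketch (scipy is an assumption of this example, and the matrix is just a small case whose zero corner forces a row exchange):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [2., 3., 4.]])   # a(1,1) = 0 forces a row exchange
P, L, U = lu(A)                # A = P L U, so P.T A = L U
print(np.allclose(P.T @ A, L @ U))   # True
print(np.diag(U))              # the pivots chosen with partial pivoting
```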
19. a = 4 leads to a row exchange; 3b + 10a = 40 leads to a singular matrix; c = 0 leads to a row exchange; c = 3 leads to a singular matrix.
21. ℓ31 = 1 and ℓ32 = 2 (ℓ33 = 1): reverse the steps to recover x + 3y + 6z = 11 from Ux = c: 1 times (x + y + z = 5) + 2 times (y + 2z = 2) + 1 times (z = 2) gives x + 3y + 6z = 11.
23. E32E21A = [1 1 1; 0 2 3; 0 0 −6] = U; then A = E21⁻¹E32⁻¹U = LU with L = [1 0 0; 2 1 0; 0 −2 1].
25. 2 by 2: d = 0 is not allowed. In the 3 by 3 case d = 1 and e = 1, but then the second pivot is zero and f cannot be chosen: no pivot in row 2, so the factorization breaks down.
27. A = [2 4 8; 0 3 9; 0 0 7] has L = I and D = diag(2, 3, 7). A = LU has U = A (pivots on the diagonal); A = LDU has U = D⁻¹A = [1 2 4; 0 1 3; 0 0 1] with 1s on the diagonal.
00’-+
‘-,
r-1
`/’
Sao
moo
°N~
tin
ono
s”
29. [a a a a; a b b b; a b c c; a b c d] = [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1][a a a a; 0 b−a b−a b−a; 0 0 c−b c−b; 0 0 0 d−c]. Need a ≠ 0, b ≠ a, c ≠ b, d ≠ c for four pivots.
31. [1 1 0; 1 2 1; 0 1 2] = [1 0 0; 1 1 0; 0 1 1][1 1 0; 0 1 1; 0 0 1] = LIU; [a a 0; a a+b b; 0 b b+c] = (same L) diag(a, b, c) (same U).
33. Lc = b: [1 0 0; 1 1 0; 1 1 1]c = [4; 5; 6] gives c = [4; 1; 1]. Then Ux = c: [1 1 1; 0 1 1; 0 0 1]x = c gives x = [3; 0; 1]. Here A = LU = [1 1 1; 1 2 2; 1 2 3].
35. The 2 by 2 upper submatrix B has the first two pivots 2, 7. Reason: Elimination on A starts in the upper left corner with elimination on B.
37. [1 1 1 1 1; 1 2 3 4 5; 1 3 6 10 15; 1 4 10 20 35; 1 5 15 35 70] = LU puts Pascal's triangle in L and U: L = [1 0 0 0 0; 1 1 0 0 0; 1 2 1 0 0; 1 3 3 1 0; 1 4 6 4 1] and U = Lᵀ. MATLAB's lu code will wreck the pattern; chol does no row exchanges for symmetric matrices with positive pivots.
39. Each new right-hand side costs only n2 steps compared to n3/3 for full elimination A\b.
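Problem 39's count is the reason libraries factor once and then reuse L and U for every new right-hand side; a sketch with scipy's lu_factor and lu_solve (assumed available, and the matrix is a random example):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 4
A = np.random.rand(n, n)
lu_piv = lu_factor(A)            # about n^3/3 work, done once

for b in np.eye(n):              # many right-hand sides
    x = lu_solve(lu_piv, b)      # about n^2 work each
    assert np.allclose(A @ x, b)
```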
41. 2 exchanges; 3 exchanges; 50 exchanges and then 51.
43. P = [0 1 0; 0 0 1; 1 0 0]; P1 = [1 0 0; 0 0 1; 0 1 0] and P2 = [0 0 1; 0 1 0; 1 0 0] (P2 gives a column exchange).
45. There are n! permutation matrices of order n. Eventually two powers of P must be the same: If Pr = Ps then Pr-s = I. Certainly r – s < n!
47. The solution is x = (1, 1, ..., 1); then x = Px. P = [P2 0; 0 P3] is 5 by 5 with P2 = [0 1; 1 0] and P3 = [0 1 0; 0 0 1; 1 0 0]; then P2² = I, P3³ = I, and P⁶ = I.
Problem Set 1.6, page 52
1. The inverses are found directly; the rotation matrix has inverse [cos θ, sin θ; −sin θ, cos θ] (rotation through −θ).
3. A-1 =
C-1; A-1 = U-1L-1 P.
5. A(AB) = (move parentheses) = (A2)(B) = I.
7. (B⁻¹ + A⁻¹)⁻¹ = B(A + B)⁻¹A.
9. If row 3 of A⁻¹ were (a, b, c, d), then A⁻¹A = I would give 2a = 0, a + 3b = 0, 4a + 8b = 1. This has no solution.
11. Matrices with A² = I include [0 1; 1 0], [1 0; 0 −1], [−1 0; 0 −1], and [1/2 √3/2; √3/2 −1/2].
13. ATB = 8; BTA = 8; ABT = [2
62 2]; BAT = [6 2]°
15. (a) n(n + 1)/2 entries on and above diagonal. (b) (n - 1)n/2 entries above diagonal.
17. (a) The inverse of a lower (upper) triangular matrix is still lower (upper) triangu-
19.
100100135
3 1 0 0 3 0 0 1 1; 511002001
1 0 a 0 1 b/a [b/a 11 [0 d - (b2/a)] [0 1 ]
lar. Multiplying lower (upper) triangular matrices gives a lower (upper) triangular
1
matrix. (b) The main diagonals of Li L2 D2 and D1 U, U2 1 are the same as those
of D2 and D1 respectively. L- 1L2D2 = D1 U,U2so we have D1 = D2. By com-
1
paring the off-diagonals of L i L2 D2 = D1 U1 U2both matrices must be diagonal.
Li1L2D2 = D2, D1U1U2 1 = Dl, D, is invertible so L11L2 = I, U,UU l = I. Then L, = L2, U1 = U2-
21. From B(I - AB) = (I - BA)B we get (I - BA)-1 = B(I - AB)-1B-1, an explicit inverse provided B and I - AB are invertible. Second approach: If I - BA is not invertible, then BAx = x for some nonzero x. Therefore ABAx = Ax, or ABy = y, and I - AB could not be invertible. (Note that y = Ax is nonzero, from BAx = x.)
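The identity in Problem 21 is easy to spot-check numerically; this sketch assumes the random B and I − AB happen to be invertible (numpy only):

```python
import numpy as np

np.random.seed(0)
n = 4
A = np.random.rand(n, n)
B = np.random.rand(n, n)
I = np.eye(n)

lhs = np.linalg.inv(I - B @ A)
rhs = B @ np.linalg.inv(I - A @ B) @ np.linalg.inv(B)
print(np.allclose(lhs, rhs))       # True when the inverses exist
```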
23. [y] .2]' [z] °
1].
25. (a) In Ax = (1, 0, 0), equation 1 + equation 2 − equation 3 is 0 = 1. (b) The right-hand sides must satisfy b1 + b2 = b3. (c) Row 3 becomes a row of zeros: no third pivot.
27. If B exchanges rows 1 and 2 of A, then B⁻¹ exchanges columns 1 and 2 of A⁻¹.
29. If A has a column of zeros, so does BA. So BA = I is impossible; there is no A⁻¹.
31. Multiplying the three elementary matrices gives E; L = E⁻¹ comes from reversing the order of these 3 elementary matrices and changing each −1 to +1. The symmetric factorization is A = LDLᵀ.
33. A * ones(4,1) gives the zero vector, so A cannot be invertible.
35. [1 3 1 0; 2 7 0 1] → [1 3 1 0; 0 1 −2 1] → [1 0 7 −3; 0 1 −2 1], so A⁻¹ = [7 −3; −2 1]. Similarly [1 3 1 0; 3 8 0 1] → [1 3 1 0; 0 −1 −3 1] gives A⁻¹ = [−8 3; 3 −1].
37. [1 a b 1 0 0; 0 1 c 0 1 0; 0 0 1 0 0 1] → [1 a 0 1 0 −b; 0 1 0 0 1 −c; 0 0 1 0 0 1] → [1 0 0 1 −a ac−b; 0 1 0 0 1 −c; 0 0 1 0 0 1], so A⁻¹ = [1 −a ac−b; 0 1 −c; 0 0 1].
41. Not invertible for c = 7 (equal columns), c = 2 (equal rows), c = 0 (zero column).
43. A⁻¹ = [1 1 0 0; 0 1 1 0; 0 0 1 1; 0 0 0 1]; the 5 by 5 A⁻¹ also has 1s on the diagonal and superdiagonal.
45. Block inverses: [I 0; C I]⁻¹ = [I 0; −C I], [A 0; C D]⁻¹ = [A⁻¹ 0; −D⁻¹CA⁻¹ D⁻¹], and [I D; 0 I]⁻¹ = [I −D; 0 I].
47. For Ax = b with A = ones(4, 4) = singular matrix and b = ones(4, 1), A\b
will pick x = (1, 0, 0, 0) and pinv(A) * b will pick the shortest solution x = (1, 1, 1, 1)/4.
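Problem 47 is a MATLAB statement; the numpy analogue below is only a sketch, and note that numpy's lstsq (unlike MATLAB's backslash) already returns the minimum-norm solution for a singular A:

```python
import numpy as np

A = np.ones((4, 4))                  # singular: rank 1
b = np.ones(4)

x_pinv = np.linalg.pinv(A) @ b       # shortest solution
print(x_pinv)                        # [0.25 0.25 0.25 0.25]

x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_lstsq)                       # also the minimum-norm solution here
```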
49. Aᵀ and A⁻¹ are found directly, and (A⁻¹)ᵀ = (Aᵀ)⁻¹; in general ((AB)⁻¹)ᵀ = (B⁻¹A⁻¹)ᵀ = (A⁻¹)ᵀ(B⁻¹)ᵀ, and (U⁻¹)ᵀ is lower triangular.
53. (a) xTAy = a22 = 5. (b) xT A = [4 5 6]. (c) Ay = [].
55. (Px)T(Py) = xT PT Py = xTy because pTp = I ; usually Px y = X. PT y
57. PAPᵀ recovers the symmetry.
59. (a) The transpose of RᵀAR is RᵀAᵀ(Rᵀ)ᵀ = RᵀAR (when Aᵀ = A); it is n by n. (b) (RᵀR)jj = (column j of R) · (column j of R) = length squared of column j.
61. Total currents are Aᵀy = [1 0 1; −1 1 0; 0 −1 −1][yBC; yCS; yBS] = [yBC + yBS; −yBC + yCS; −yCS − yBS]. Either way (Ax)ᵀy = xᵀ(Aᵀy) = xB yBC + xB yBS − xC yBC + xC yCS − xS yCS − xS yBS.
63. Ax y is the cost of inputs, whereas x ATy is the value of outputs.
65. These are groups: Lower triangular with diagonal Is, diagonal invertible D, and permutations P. Two more: Even permutations, all nonsingular matrices.
67. Reordering the rows and/or columns of [ a d ] will move entry a, not giving [ b
69. Random matrices are almost surely invertible.
71. The −1, 2, −1 matrix in Section 1.7 has A = LDLᵀ with ℓ(i, i−1) = −(i − 1)/i.
Problem Set 1.7, page 63
1. A = [2 −1 0 0; −1 2 −1 0; 0 −1 2 −1; 0 0 −1 2] = LDLᵀ, with det = 5.
3. A0 = [1 −1 0 0; −1 2 −1 0; 0 −1 2 −1; 0 0 −1 1]; each row adds to 0, so A0(c, c, c, c) = (0, 0, 0, 0) and A0 is singular.
5. (u1, u2, u3) = (72/8, 0, -72/8) instead of the true values (1, 0, -1).
7. H⁻¹ = [9 −36 30; −36 192 −180; 30 −180 180].
9. The 10 by 10 Hilbert matrix is very ill-conditioned.
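Problem 9's ill-conditioning is easy to quantify; a sketch using scipy's hilbert constructor (an assumption of this example, not something the text itself uses):

```python
import numpy as np
from scipy.linalg import hilbert

for n in (3, 6, 10):
    H = hilbert(n)
    print(n, np.linalg.cond(H))   # the condition number grows explosively with n
```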
11. A large pivot is multiplied by less than 1 in eliminating each entry below it. An extreme case, with multipliers = 1 and pivots = 2, 24, is A = [1/2 1/2 1; −1/2 0 1; −1/2 −1 1].
Problem Set 2.1, page 73
1. (a) The set of all (u, v), where u and v are ratios p/q of integers. (b) The set of all (u, v), where u = 0 or v = 0.
3. C(A) is the x-axis; N(A) is the line through (1, 1); C(B) is R2; N(B) is the line through (-2, 1, 0); C(C) is the point (0, 0) in R2; the nullspace-N(C) is R3.
5. Broken rules: (a) 7, 8 (b) 1 (c) 1, 2, 8.
7. (b), (d), (e) are subspaces. Can’t multiply by -1 in (a) and (c). Can’t add in (f).
9. The sum of two nonsingular matrices may be singular (A + (-A)). The sum of two singular matrices may be nonsingular.
11. (a) One possibility: The matrices cA form a subspace not containing B. (b) Yes: the subspace must contain A – B = I.
(c) The subspace of matrices whose main diagonal is all zero.
13. If (f + g) (x) is the usual f (g (x)), then (g + f )x is g(f (x)), which is different. In rule 2 both sides are f (g (h (x))). Rule 4 is broken because there might be no inverse function f -1(x) such that f (f -1(x)) = x. If the inverse function exists, it will be the vector – f .
15. The sum of (4, 0, 0) and (0, 4, 0) is not on the plane; it has x + y – 2z = 8.
17. (a) The subspaces of R2 are R2 itself, lines through (0, 0), and the point (0, 0). (b) The subspaces of R4 are R4 itself, three-dimensional planes n v = 0, two- dimensional subspaces (n1 v = 0 and n2 v = 0), one-dimensional lines through
(0, 0, 0, 0), and (0, 0, 0, 0) alone.
19. The smallest’subspace containing P and L is either P or R3.
21. The column space of A is the x-axis = all vectors (x, 0, 0). The column space of B is the x-y plane = all vectors (x, y, 0). The column space of C is the line of vectors
(x,2x,0).
23. A combination of the columns of C is also a combination of the columns of A (same column space; B has a different column space).
25. The extra column b enlarges the column space, unless b is already in that space:
[A b] _ [0
(larger column space)
0 1 (no solution to Ax = b).
11 0 1] (b already in column space) 0 1 1 (Ax = b has a solution).
27. Column space = R8. Every b is a combination of the columns, since Ax = b is solvable.
110112120
29. A 1 0 0 or 1 0 1 A = 2 4 0 (columns on 1 line). 010011 360
31. R2 contains vectors with two components-they don’t belong to R3.
Problem Set 2.2, page 85
1. x + y + z = 1, x + y + z = 0. Changing 1 to 0: (x, y, z) = c(−1, 1, 0) + d(−1, 0, 1).
3. Echelon form U = 10
00
0]’ free variables x1, x3, x4; special solutions
(1, 0, 0, 0), (0, 0, 1, 0), and (0, -3, 0, 1). Consistent when b2 = 2b1. Complete solution (0, bl, 0, 0) plus any combination of special solutions.
E’+
E-,
‘-3
,’a
fit
438 Solutions to Selected Exercises
u -2v – 3 -21 -3-1
7. c = 7 allows u = 1, v = 1, w = 0. T he colu mn s pace is a pla ne.
1
w 2 0_ 2
5.V V =V
9. (a) x = x2
0 + x4
0]
-2 ,for an y x2, x4. Row reduc ed R =
-2 2
1 2 0 -2 0 0 1 2 0000
(b) Complete solution x =
+ x2
0 + x4 -2 , for any x2, x4. 1
L11
ra – 3b1
2 10
1
+ 0- ; no solution!
11. loll has nullspace = line through (-1, 1) but no solution. Any
I 1 1] [x]
=
b = [c] has many particular solutions to Axp = b. c
1010 -1 1 1 1 17 1
(a) r = 1. (c) r= 1.
1 – 1
13. R = 0 0 0 0 ;R = 0 1 0 1 ;R = 0 0 0 0 . (b) r-2.
0 0 0 0
15. A nullspace matrix N = [ 17. I think this is true.
0000 0000
is n by n – r.
0 0 0 0
I]
19. The special solutions are the columns of N =
-2 -3
-1
01
10 a nd N = 0 -2 .
01
21. The r pivot columns of A form an m by r submatrix of rank r, so that matrix A* has r independent pivot rows, giving an r by r invertible submatrix of A. (The pivot rows of A* and A are the same, since elimination is done in the same order-we just don’t see for A* the “free” columns of zeros that appear for A.)
23. (uvT)(wzT) = u(vTw)zT has rank 1 unless vTw = 0.
25. We are given AB = I which has rank n. Then rank(AB) < rank(A) forces rank(A) = n.
27. If R = EA and the same R = E*B, then B = (E*)`IEA. (To get B, reduce A to R and then invert steps back to B.) B is an invertible matrix times A, when they share the same R.
29. Since R starts with r independent rows, RT starts with r independent columns (and
then zeros). So its reduced echelon form is 1 0] where I is r by r. 0
31. Ifc=1,R= 0 0 0 0
0000
1022
Ifco1,R= 0 1 0 0
0000
Special solutions in N =
has x2i x3, x4 fre e.
has x3, x4 f ree.
1 1 2 21
1 L
1 -2 2
[5b1 - 2b3] b3 - 2b1 (no free variables). (b) Solvable if b2 = 2b1 and 3b1 - 3b3 + b4 = 0-
5b - 2b3 -1
Then x = b3 - 2b1 + x3
1
33
41. Multiply xP by 2, same x,,; 1XXIP is [ I, special solutions also include the columns ® ..jj
1
of [ j]; xP and the special solutions are not changed.
43. For A, q = 3 gives rank 1, every other q gives rank 2. For B, q = 6 gives rank 1, every other q gives rank 2.
45. (a) r < m, always r < n. (b) r = m, r < n. (c) r < m, r = n. (d) r = m = n.
35. (a) Solvable if b2 = 2b1 and 3b1 - 3b3 + b4 = 0. Then x =
01
37. A 1 by 3 system has at least two free variables.
39. (a) The particular solution xP is always multiplied by 1.
-2 -2 10000
0 0 1 0' 1 0 00 01
Ifc = 1, R = [0 0] has x1 free; if c = 2, R = [0 ®] has x2 free; R = I if
c01,2.
Special-solutions in N =
[11
(21 .
11/2 -3 r0 010
1/2 + x2 0 + x4 2 I -11
matrix.
33. xc-omplete =
0 1 + x2
0
P-31
I 1 I
0]
; xcomplete =
(c = 1) and N =
(c 1).
(b) Any solution can 3] [y] = [].Then [']] is shorter (length J) than (d) The
be xP. (c)
"homogeneous" solution in the nullspace is x, = 0 when A is invertible.
1 0 0 0
47. R = 0 0 1 0
and x, =
0 1 0 0 -1
1 ; 0 0 1 2: no solution because of
0 000 5
row 3.
13
0000
11
49. A = 0
solution.
51. A has rank 4 - 1 = 3; the complete solution to Ax = 0 is x = (2, 3, 1, 0).
1 0 -2 0
R= 0 1 -3 0 with -2, -3 in the free column. 0001
53. (a) False. (b) True. (c) True (o nly n columns). (d) True (only m rows).
55. U=
and R =
1 1 1 1 1 0000000
2 ; B can't exist since two equations in three unknowns can't have one 03
0 1 1 1 1 1 1
00 1 1 1 1
0 1 1 0 0 1 1-
0 0 0 0
111
000 0
0000000
57. If column 1 = column 5, then x5 is a free variable. Its special solution is (-1, 0, 0, 0, 1).
59. Column 5 is sure to have no pivot since it is a combination of earlier columns, and x5 is free. With four pivots in the other columns, the special solution is (1, 0, 1, 0, 1). The nullspace contains all multiples of (1, 0, 1, 0, 1) (a line in R5).
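Numerically, special solutions like the one in Problem 59 can be read off from the SVD. The 4 by 5 matrix below is a made-up example (not the book's) built so that column 5 = −(column 1) − (column 3), which puts (1, 0, 1, 0, 1) in the nullspace:

```python
import numpy as np

B = np.eye(4)                              # four independent columns (hypothetical)
A = np.column_stack([B, -(B[:, 0] + B[:, 2])])   # 4 by 5, rank 4

U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:]                     # right singular vectors with sigma = 0
print(null_basis[0] / null_basis[0][0])    # [1. 0. 1. 0. 1.]
```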
1 0 0 -4 61. A= 0 1 0 -3 .
0 0 1 -2
63. This construction is impossible: two pivot columns, two free variables, only three columns.
65. A = [0
67. R is most likely to be I ; R is most likely to be I with fourth row of zeros.
0]'
69. Any zero rows come after these rows: R = [1 -2 -3], R = R=I.
Problem Set 2.3, page 98
[1 0 0] 0 1 0'
11 Cl
1. 0 1 Ill c2 = 0 gives c3 = c2 = cl = 0. But vl + v2 - 4v3 + v4 = 0
001 C3 (dependent).
3. If a = 0 then column 1 = 0; if d = 0 then b (column 1) - a (column 2) = 0; if f = 0 then all columns end in zero (all are perpendicular to (0, 0, 1), all in the xy
plane, must be dependent).
0 0 0 1 0
(R doesn't come from this U).
123123 5. (a) 3 1 2 0 -5 -7 2 3 1 0 -1 -5
(b) -3
2 -3
123 0 -5 -7 0 0 -18/5
invertible = independent columns (could have used rows).
12-312-31 0
1 2 0 7 -7 ;A [11 = 0 100010
columns add to 0 (could use rows).
7. Thg sum v1 - v2 + v3 = 0 because (w2 - w3) - (w1 - w3) + (wl - w2) = 0-
9. (a) The four vectors are the columns of a 3 by 4 matrix A with at least one free variable, so Ax = 0. (b) Dependent if [vl v2] has rank 0 or 1.
(c) Ovl + c(0, 0, 0) = 0 has a nonzero solution (take any c 0 0).
11. (a) Line in R3. (b) Plane in R3. (c) Plane in R3. (d) All of R3.
13. All dimensions are 2. The row spaces of A and U are the same.
15. v = 1(v + w) + 1(v-- w) and w = 1(v + w) - (v - w). The two pairs span the 222
same space. They are a basis when v and w are independent.
17. If elimination produces one or more zero rows, the rows of A are linearly dependent;
for example in Problem 16
11001100 1010 0-11 0 00110011 01010011
1100 0 -1 1 0 0011 0000
19. The n independent vectors span a space of dimension n. They are a basis for that space. If they are the-columns of A then m is not less than n (m > n).
21. C(U): Any bases for R2; N(U): (row 1 and row 2) or (row 1 and row 1 + row 2).
23. Independent columns rank n. Columns span R’ rank m. Columns are basis for R’ rank = m = n.
25. (a) The only solution is x = 0 because the columns are independent. (b) Ax = b is solvable because the columns span R5.
27. Columns 1 and 2 are bases for the (different) column spaces of A and U; rows 1 and 2 are bases for the (equal) row spaces; (1, -1, 1) is a basis for the (equal) nullspaces.
29. rank(A) = 2 if c = 0 and d = 2; rank(B) = 2 except when c = d or c = -d.
31. Let vl = (1, 0, 0, 0), … , v4 = (0, 0, 0, 1) be the coordinate vectors. If W is the line through (1, 2, 3, 4), none of the v’s are in W.
33. (a) If it were not a basis, we could add more independent vectors, which would exceed the given dimension k. (b) If it were not a basis, we could delete some vectors, leaving less than the given dimension k.
35. (a) False, might be no solution.
(b) True, 7 vectors in R5 are dependent.
100000000 010001
37. (a) 0 0 0 , 0 1 0 , 0 0 0 . (b) Add 1 0 0 , 0 0 0 ,
000 000 001 000100
00010001000
-1 0 0 0 0 0 , 0 0 1 are a basis for all ro 1 0 0 0 0 -1 0 0 0 -1 0
A = -AT.
39. y (0) = 0 requires A + B + C = 0. One basis is cos x – cos 2x and cos x – cos R. 41. yi(x), y2(x), y3(x) canbex, 2x, 3x (dim 1) orx, 2x, x2 (dim 2) or x, X2, X1 (dim 3).
43.
rl iriiriir 11 ri 1 11 1 It 11+11 I+I
1111 11 11
1
1
Check the (1, 1) entry, then (3, 2), then (3, 3), then (1, 2) to show that those five P’s are independent. Four conditions on the nine entries make all row sums and column sums equal: row sum 1 = row sum 2 = row sum 3 = column sum 1 = column sum
2 (= column sum 3 is automatic because sum of all rows = sum of all columns).
0 1 . (c )
45. If the 5 by 5 matrix [A b] is invertible, b is not a combination of the columns of A. If [A b] is singular, and A has independent columns, b is a combination of those columns.
Problem Set 2.4, page 110
1. False, we only know dimensions are equal. Left nullspace has smaller dim = m – r.
3. C(A): r = 2, (1, 0, 1) (0, 1, 0); N(A): n – r = 2, (2, -1, 1, 0), (-1, 0, 0, 1); C(AT): r = 2, (1, 2, 0, 1), (0, 1, 1, 0); N(AT): m – r = 1, (-1, 0, 1);
C(U): (1, 0, 0), (0, 1, 0); N(U): (2, -1, 1, 0), (-1, 0, 0, 0);
C(UT): (1, 2, 0, 1), (0, 1, 1, 0); N(AT): (0, 0, 1).
5. A times every column of B is zero, so C(B) is contained in the nullspace N(A). 7. From Ax = 0, the row space and the nullspace must be orthogonal. See Chapter 3.
124
9. [1 2 4]; 2 4 8 has the same nullspace.
3 6 12
11. If Ax = 0 has a nonzero solution, then r < n and C(AT) is smaller than R. So AT y = f is not solvable for some f. Example: A = [1 1] and f = (1, 2).
13. d = be/a; the only pivot is a.
15. With independent columns: rank n; nullspace = {0}; row space is R; left inverse.
17. A=[1 1 0];B=[0 0 1].
19. No-for example, all invertible n by n matrices have the same four subspaces.
21. (a) 1 0 .
(b) Impossible: dimensions 1 + 1 3. (c) [1 1].
(e) Impossible: Row space = column space requires m = n.
(d) (
93 11.
X101 L0 I J
3
Thenm - r=-ln - r.
23. Invertible A: row space basis = column space basis = (1, 0, 0), (0, 1, 0), (0, 0, 1); nullspace and left nullspace bases are empty. B: row space basis (1, 0, 0, 1, 0, 0), (0, -l, 0, 0, 1, 0), and (0, 0, 1, 0, 0, 1); column space basis (1, 0, 0), (0, 1, 0), (0, 0, 1); nullspace basis (-1, 0, 0, 1, 0, 0), (0, -1, 0, 0, 1 , 0), and (0, 0, -1, 0, 0, 1); left nullspace basis is empty.
25. (a) Same row space and nullspace. Therefore rank (dimension of row space) is the same. (b) Same column space and left nullspace. Same rank (dimension of column space).
27. (a) No solution means that r < m. Always r < n. Can't compare m and n. (b) If m - r > 0, the nullspace of AT- contains a nonzero vector.
29. Row space basis (1, 2, 3, 4), (0, 1, 2, 3), (0, 0, 1, 2); nullspace basis (0, 1, -2, 1); column space basis (1, 0, 0),. (0, 1, 0), (0, 0, 1); left nullspace has empty basis.
31. If Av = O and v is a row of A then v . v = 0. Only v = 0 is in both spaces.
33. Row 3 – 2 (row 2) + row 1 = zero row, so the vectors c(1, -2, 1) are in the left nullspace. The same vectors happen to be in the nullspace.
35. (a) a and w span C(A). (b) v and z span C(AT). (c) rank < 2 if u and w are dependent or v and z are dependent. (d) The rank of uvT + WZT is 2.
37. (a) True (same rank). (b) False (A = [1 0]). (c) False (A can be invertible and also unsymmetric). (d) True.
39. all = 1, a12 = 0, a13 = 1, a22 = 0, a32 = 1, a31 = 0, a23 = 1, a33 = 0, a21 = 1
(not unique).
41. Rank r = n means nullspace = zero vector and x = 0.
Problem Set 2.5, page 122
1 -1 01 1
1.A= 0 1 -1 ; N(A) contains multiples of 1 ; N(AT) contains multiples
1 0-1
3. The entries in each row add to zero. Therefore, any combination will have that
same property: fi + f2 + f3 = 0; ATy = f
-Y2 - Y3 = f3 fi + f2 + f3 = 0. It means that the total current entering from outside is zero.
Yi + Y3 = fi, -Yi + Y2 = f2,
5.
Cl + C3
-Cl -C3
-Cl
C1 + C2 -C2
-C3
-C2 C2 + C3
Cl + C3 -Cl
-CI Cl + C2
has pivots c1 + c3,
9.
-1 3
-1 -1
-1 -1 -1 3
C1+C2+C5 -Cl
-Cl C1 + C3 + C4 -C2 -C3
-C5 -C4
-C2
-C3
C2 + C3 + C6 -C6
-C5 -C4 -C6
C4 + C5 + C6
CIC3 + C1C2 + C2C3 C1 + C3
7. Conditions on b are b1 + b4 - b5 = 0, b3 - b4 + b6 = 0, b2 - b5 + b6 = 0.
3 -1 -1 -1
-1 -1 3 -1 '
Those c's that connect to node j will appear in row j.
0 0 0 -1
1 0 Y1 ro1
l 2
0 0 -1 0 1
Y2
-4 r 311
010010Y3 54
001 00-1
-1 0 0 0 0 0 XI 01 0000x2
1 0-1000
3
14 ; Y =
3
to 3 14
0
3!3!
15. I think it is already built in.
17. 9 nodes - 12 edges + 4 loops = 1; 7 nodes - 12 edges + 6 loops = 1. 19. x = (1, 1, 1, 1) gives Ax = 0; then ATAx = 0; rank is again n - 1.
13. There are 20 choices of 3 edges out of 6 because "6 choose 3" = 6!/(3!3!) = 20. Four choices give triangles, leaving 16 spanning trees.
21. M =
M2 =
0111 1011 1101 1110
and
2
and we get a1kakj = 1 when there is a 2-step path i to k to j.
Notice 3 paths from a node to itself.
3. 11 Ax 112 = 1 always produces an ellipse.
1. Rotation I
0 0 ]' 10 0
3222 2 3 2 2 2 2 3 2 2 2 2 3
Y4
[h] X3 f3
3
Problem Set 2.6, page 133
00
5. They are transformed to (1, 3), (2, 6), (-1, -3). The x-axis turns; vertical lines shift up/down but stay vertical.
Second 7. derivative
matrix
0020 0006 0000 0000
. The nulispace is spanned by (1, 0, 0, 0) and
(0, 1, 0, 0), which gives linear P1. Second derivatives of linear functions are zero. The column space is accidentally the same as the nullspace, because second deriva- tives of cubics are linear.
9. eˣ and e⁻ˣ are a basis for the solutions of u″ = u.
cos 9 11.
I sin 8 13. (a) Yes.
-sin d cos B]
cos d s1n.9
-sin 9_ 1 0 cos B] = [0 11
2
so H = I.
15. A =
17. A =
I -1 0 0 0 0010 0100 0001
0 0 0-'I 1 0 01i
and A2 = I; the double trans pos e of a matrix gives
the matri x it self.
No te A23 = 1 because transp ose of matrix 2 is matri x 3.
0 0 0 0
0100 100
[
(b) Yes. We don't need parentheses (AB)C or A(BC) for ABC!
0 0 1 0 ;AB = 0 01 0 1
0100
;BA= 0
; B=
010 0010
10
001 0001
19. (a) is invertible with T-1(y) = y1/3; (c) is invertible with T-1(y) = y - 11.
21. With w = 0, linearity gives T (v + 0) = T (v) + T (0). Thus T (0) = 0. With c = -1, linearity gives T (-O) = -T(O). Certainly T (-O) = T (O). Thus T (O) = 0.
23. S(T(v)) = S(v) = v.
25. (b) and (c) are linear, (a) fails T (2v) = 2T(v), (d) fails T (v + w) = T (v) + T (w).
27. T(T(v)) = (v3, v1, v2); T3(v) = v; Tloo(v) = T(T99(v)) = T(v).
29. (a) T(1, 0) = 0. (b) (0, 0, 1) is not in the range. (c) T(0, 1) = 0.
31. Associative law gives A(M1 + M2) = AM1 + AM2. Distributive law over c's gives A(cM) = c(AM).
33. No matrix A gives A [0 0]. To professors: The matrix space has 0] [0
dimension 4. Linear transformations on that space must come from 4 by 4 ma- trices (16 parameters). Those multiplications by A in Problems 31 and 32 were special transformations with only 4 parameters.
35. T (I) = 0 but M = [0
[c2
Ir
0]
u (b) N = [c d] (c) ad = bc.
= T (M); these fill the range. M =
] in the kernel.
37. (a) M =
39. Reorder basis by permutation matrix; change lengths by positive diagonal matrix.
1 a a2 A 4
41. 1 b b2 B = 5 ; Vandermonde determinant = (b - a) (c - a) (c - b); the 1 c c2 C 6
points a, b, c must be different, and then determinant 0 0 (interpolation is possible).
43. If T is not invertible, then T (vi), ..., T(v,) will not be a basis. Then we couldn't choose wi = T (vi) as output basis.
45. S(T(v)) = (-1, 2) but S(v) _ (-2, 1) and T(S(v)) = (1, -2). So TS 0 ST.
47. The Hadamard matrix H has orthogonal columns of length 2. So the inverse of H is HT/4 = H/4.
49. False: the n nonzero vectors would have to be independent.
Problem Set 3.1, page 148
1. IlxII = 21; IlyII = 3 lf2-; XT y = 0.
3. (x2/x1)(y2/yl) = -1 means thatx1y1 +x2y2 = 0, soxTy = 0.
5. v1 and v3 are orthogonal, also v2 and v3.
7. x = (-2, 1, 0); y = (-1, -1, 1); the row z = (1, 2, 1) is orthogonal to the nullspace. 9. The orthogonal complement is the line through (-1, -1, 1) and (0, 0, 0).
11. If ATy = 0, then yTb = yTAx = (yTA)x = 0, which contradicts yTb 0 0. 13. The figure splits any y in R' into column space part + left nullspace part. 15. No such matrix, because (1, 2, 1)T(1, -2,1)00.
17. The matrix with the basis for V as its rows. Then the nullspace is V1 = W. 19. (a) If V and W are lines in R3, V1 and Wj- are intersecting planes. (b) V.
21. (1, 2, -1) is perpendicular to P. A = has row space = P.
113 1
2 10
has N(A) = P; B = [1 2 -1)
23. A = [2 21 has subspaces = four lines; (1, 1) orthogonal to (-1, 1), (1, 2) orthogonal to (-2, 1). Always row space 1 nullspace.
25. (a)
12-32 11
2 -3 1 . (b) -3 is not orthogonal to 1 . (c) 1 in C(A) -35-25 1[I]
1
r
_1
and 0 in N(AT) is impossible: not perpendicular. (d) A = I i -iJ has 0L
A2 = 0. (e) (1, 1, 1) will be in the nullspace and row space; no such matrix.
27. (a) If Ax = b has a solution and ATy = 0, then bTy = (Ax)Ty = 0.
(b) b is not in the column space; so, not perpendicular to all y in the left nullspace.
29. x = xr + xn, where xr is in the row space and xn is in the nullspace. Then Ax, = 0 and Ax = Axr + Axn = Axr. All vectors Ax are combinations of the columns of A. If x = (1, 0), then xr = (1/2, 1/2).
31. (a) For a symmetric matrix, the column space and row space are the same.
(b) x is in the nullspace and z is in the column space = row space: so these
"eigenvectors" have xTZ = 0.
33. x splits into xr + xn = (1, -1) + (1, 1) _ (2, 0).
35. Ax = Bz means that [ A B ] [-x ] = 0. Three homogeneous equations in four unknowns always have a nonzero solution. Here x = (3, 1) and x = (1, 0), and Ax = Bz = (5, 6, 5) is in both column spaces. Two planes in R3 (through zero) must intersect in a line at least!
37. ATy = 0 gives (Ax)Ty = xTATy = 0. Then y L Ax and N(AT) L C(A).
39. Sl is the nullspace of A = [1 1 . Therefore S1 is a subspace even if S is not. 2 2 2]
41. If V '1s all of R4, then V1 contains only the zero vector. Then (V ')-L = R4 = V. 43. (1, 1, 1, 1) is a basis for P'. A = [1 1 1 1] has the plane P as its nullspace.
45. Column 1 of A-1 is orthogonal to the space spanned by the 2nd, ... , nth rows of A.
2 2 -1 47. A= -1 2 2 ,
2 -1 2
ATA = 91 is diagonal: (ATA)ij _ (column i of A) (column j).
49. (a) (1, -1, 0) is in both planes. Normal vectors are perpendicular, planes still intersect! (b) Need three orthogonal vectors to span the whole orthogonal com- plement in R5. (c) Lines can meet without being orthogonal.
51. When AB = 0, the column space of B is contained in the nullspace of A. Therefore the dimension of C(B) < dimension of N (A). This means rank(B) < 4 - rank(A).
Problem Set 3.2, page 157
1. (a) (x + y)/2 ≥ √(xy) (arithmetic mean ≥ geometric mean of x and y). (b) ‖x + y‖² ≤ (‖x‖ + ‖y‖)² means that (x + y)ᵀ(x + y) ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖². The left-hand side is xᵀx + 2xᵀy + yᵀy; after cancelling, this is xᵀy ≤ ‖x‖‖y‖.
3. p = (10/3, 10/3, 10/3); (5/9, 10/9, 10/9).
5. cos θ = 1/√n, so θ = arccos(1/√n); P = aaᵀ/aᵀa has all entries equal to 1/n.
7. Choose b = (1, ..., 1); equality holds if a1 = ... = an (then a is parallel to b).
9. P² = (aaᵀ/aᵀa)(aaᵀ/aᵀa) = a(aᵀa)aᵀ/(aᵀa)(aᵀa) = aaᵀ/aᵀa = P.
11. (a) P1 = (1/10)[9 3; 3 1] and P2 = I − P1 = (1/10)[1 −3; −3 9]. (b) P1 + P2 = I and P1P2 = zero matrix. The sum of the projections onto two perpendicular lines gives the vector itself; the projection onto one line and then onto a perpendicular line gives the zero vector.
13. Trace of P = (a1² + ... + an²)/aᵀa = 1.
15. II Ax 112 = (Ax)T(Ax) = xATAx, IIATXI12 = (ATx)T(ATx) = xAATx. If ATA = AAT, then II Ax II = II ATx II . (These matrices are called normal.)
17. (a) aTb/aTa = 5/3; p = (5/3, 5/3, 5/3); e = (-2/3, 1/3, 1/3) has eTa = 0. (b) aTb/aTa=-l;p=(1,3,1)=bande=(0,0,0).
1
19. P1=3 = Pi and Pl b =
P2b =
51
51 and
1 1 -2 -2 1 4 4 -2 21. P1=- -2 4 4 ,P2=- 4 4 -2 9 -2 4 4 9 -2 -2 1
P1 P2 = zero matrix because a1 1 a2. 23. P1+P2+P3
3
1 -2 -2 4 4 -2 4 -2 4 111
-244 44-2+9-2
9 -2 4 4 9 -2 -2
25. Since A is invertible, P = A(ATA)-'AT = AA-1(AT)-'AT = I: project onto all
of R2.
Problem Set 3.3, page 170
1. x = 2; E2 = (10 - 3x)2 + (5 - 4x)2 is minimized; (4, -3)T(3, 4) = 0.
2 r3 3
3. z= 31;b-p= is perpendicular to both columns. I3
L L3J L -fl
5. b = 4, 5, 9 at t = -1, 0, 1; the best line is 6 + (5/2)t; p = (7/2, 6, 17/2).
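The best line of Problem 5 also drops out of numpy's least-squares solver; a sketch reproducing C = 6 and D = 5/2:

```python
import numpy as np

t = np.array([-1., 0., 1.])
b = np.array([4., 5., 9.])
A = np.column_stack([np.ones_like(t), t])      # columns for C and D in C + D t

(C, D), *_ = np.linalg.lstsq(A, b, rcond=None)
print(C, D)                       # 6.0 2.5 -> best line 6 + (5/2) t
print(A @ np.array([C, D]))       # projection p = [3.5 6.  8.5]
```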
1 1/2 0
7. P = A(ATA)-IAT = 1/2 1/2 -1/2 .
0 -1/2 1
9. (a) pT = (pTp)T = P. Then P = pTp = P2. (b) P projects onto the space Z = {0}.
11. P + Q = I, PQ = 0, transpose to QP = 0, so (P - Q)(P - Q) = P - 0 - 0 + Q=I.
13. Best line 61/35 - (36/35)t; p = (133/35, 95/35, 61/35, -11/35) from C + Dt. 15. H2 = (I - 2P)2 = I - 4P + 4P2 = I - 4P + 4P = I. Two reflections give I.
17. Projection onto x + y = 0 = Projection onto (-1, 1) = [12 -1/2
1/2 ]'
19. Projection matrix onto row space would be AT(AAT)-1 A if the rows were indepen- dent.
-2 =I. 1 4-24
21. Best line is 23. C =
t-== ATb = ATA
1 -1 1
[x1 alTb
;x =
31.
33.
1 bio + 10x 10
10 (bt + + bio).
35. Closest parabola:
D=
39. xW
wlbl+…+wnybm W12+ +wm
ATAx =
4 8
8 26
26 92
139 8
E
1416 20 26 C 36
92 D = 112 .
338 E 400
(ATA)-IATb, AT
25. A= 1 0 01
124 _-5-
C
;x= D ;b= -0
X2
-a2 b
1]
= [1 … 1], b = (Yi, … , Ym)T then Yl+…+Y.
m
2
E
1111 P=3 1 1 1 .
111
29. (x – x)(x – x)T = (ATA)-‘AT[(b – Ax)(b – Ax)T]A(ATA)-i. For indepen- dent errors, substituting (b – Ax)(b Ax)T = 621 gives the covariance ma-_ trix (ATA)-1AT62A(ATA)-1. This simplifies to 62(ATA)-‘: neat formula for the covariance matrix.
(b) The variance is 11 e 112 = 7 1(bi -x)2. (c) p = (3, 3, 3), e = (-2, -1, 3), pTe = 0.
27. (a) aTa = m, aTb = bl + + b,,,. Therefore z is the mean of the b’s.
1 1…
0 it 8
8 13
solves Ax p.
. Change right side to p =
20
100 0
C
L17
1118
37. (a) The best line is x = 1 + 4t, which goes through the center point (, b) _ (2, 9). (b) From the first equation: Cm + D E t; _ bi. Divide by m to get C + Dt = b.
41. xw = (1/21, 4/7); Axw = (1/21, 13/21, 25/21),
b – Axw = (-1/21, 8/21, -4/21), (Axw)WTW(b – Axw) = 0.
Problem Set 3.4, page 185
1. (a) -4 = C – 2D, -3 = C – D, -1 = C + D, 0 = C + 2D.
(b) Best line −2 + t goes through all 4 points; E² = 0. (c) b is in the column space.
3. Projection on a3: (-2/3, 1 /3, -2/3); the sum is b itself; notice that alai , a2az , a3a3 are projections onto three orthogonal directions. Their sum is projection onto the whole space and should be the identity.
5. (I -2uuT)T (I -2uuT) = I -4uuT +4uuTUUT = 1; Q =
I 12 122 -12 2 7. (xigl + … + xngn)T(xigi + … + xgn) = xi + … +.xn 11b112 = bTb =
xI + xn
9. The combination closest to q3 is Oql + Oq2.
11. Q is upper triangular: column 1 has qll = ±1; by orthogonality column 2 must be (0, +1, 0, …); by orthogonality column 3 is (0, 0, +1, …); and so on.
b 0 1 r0 0 1 13. A = 0 1 1 0 1 0 111100
= QR.
is in the left nullspace;
15. ql =
q2 =
2/3 -2/3 1/3 , q3 = 2/3 2/3 1/3
x= qITbJ
q b] – [2
a
17. Rx = QTb gives
0
[5/9] 2] [ x I _ [503] and 0
2/3
19. C* – (q22 C*) q2 is c – (qi c) qi – (qz c) q2 because q22T ql = 0.
By orthogonality, the closest functions are 0 sin 2x = 0 and 0 + Ox = 0.
—P
23. ao = 1/2, ar = 0, bi = 2/n.
25. The closest line is y = 1/3 (horizontal since (x, x2) = 0).
27. (1//, -1/ /, 0, 0), (1/ / ,1/ /, 2/./, 0), (-1/2/, -1/2J, 1/2J, -1//).
29. A = a = (1, -1, 0, 0); B = b-p = (2, 2,-1,0);C=c-pA-pa= 3, 3, 3 Notice the pattern in those orthogonal vectors A, B, C. Next, (1, 1, 1, 1)/4.
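Gram-Schmidt, which these problems carry out by hand, is what numpy's qr routine performs; a generic check, not tied to the book's specific vectors:

```python
import numpy as np

np.random.seed(1)
A = np.random.rand(4, 3)              # independent columns (almost surely)
Q, R = np.linalg.qr(A)                # Gram-Schmidt result: A = QR

print(np.allclose(Q.T @ Q, np.eye(3)))   # orthonormal columns
print(np.allclose(np.triu(R), R))        # R is upper triangular
print(np.allclose(Q @ R, A))             # the factorization reproduces A
```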
31. (a) True. (b) True. Qx = xigi + x282. 11 Qx 112 = xi + x2 because qi q2 = 0.
Problem Set 3.5, page 196
4 0 0 0 16 0 0 0
222
1I
20004 01600_2 1. F- 4 4I’
0 0 4 0′ F 0 0 1 6 0 0 4 0 0 0 0 0 16
3. The submatrix is F3.
5. e`x = -1 for x = (2k + 1)n, e`9 = i for 0 = 2kn + n/2, k is integer.
7. c = (1, 0, 1, 0).
9. (a) y = F times (1, 0, 0, 0) = column zero of F (b) c = (1, 1, 1, 1)/4.
11. C =
Ceven = Codd =
2 0
Y” _ 0 0
—). Y=
13. c0 = (f0 + f1 + f2 + f3)/4, c1 = (f0 − if1 − f2 + if3)/4, c2 = (f0 − f1 + f2 − f3)/4, c3 = (f0 + if1 − f2 − if3)/4; f odd means f0 = 0, f2 = 0, f3 = −f1. Then c0 = 0, c2 = 0, c3 = −c1, so c is also odd.
15. F-1 =
F1
17. D = I
2 11121
_ _ FH. -1 4
1 rl 1 1 111111
e27ri/6
and F3 = ( 1 1
010
e2rri/3 e4ni/3 e4ni/3 e2iri/3
and PT lead to 0 – 1 = 0.
19. A = diag(l, i, i2, i3); P = 0 0 1 1 0 0-
1 1i2-ii
e47ri/6
I
21. Eigenvalues e0 = 2 − 1 − 1 = 0, e1 = 2 − i − i³ = 2, e2 = 2 − (−1) − (−1) = 4, e3 = 2 − i³ − i⁹ = 2. Check trace: 0 + 2 + 4 + 2 = 8.
23. The four components are (c0 + c2) + (c1 + c3); then (c0 − c2) + i(c1 − c3); then (c0 + c2) − (c1 + c3); then (c0 − c2) − i(c1 − c3). These steps are the FFT!
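The coefficient formulas in Problem 13 agree with numpy's FFT up to the factor 1/4; a quick check on arbitrary samples:

```python
import numpy as np

f = np.array([6., 2., 5., 3.])            # any four samples
c = np.fft.fft(f) / 4                     # c0..c3 as defined in Problem 13

print(np.isclose(c[0], f.sum() / 4))                        # True
print(np.isclose(c[2], (f[0] - f[1] + f[2] - f[3]) / 4))    # True
```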
Problem Set 4.2, page 206
1. det(2A) = 8, det(−A) = (−1)⁴ det A = 1/2, det(A²) = 1/4, and det(A⁻¹) = 2.
3. The row operations leave det A unchanged by Rule 5. Then multiplying a row by -1 (Rule 3) gives the row exchange rule: det B = -det A.
5. For the first matrix, two row exchanges will produce the identity matrix. The second matrix needs three row exchanges to reach I.
7. det A = 0 (singular); det U = 16; det UT = 16; det U-1 = 1/16; det M = 16 (2 exchanges).
9. The new determinant is (1 – mi) (ad – bc).
11. If |det Q| is not 1, then det Qⁿ = (det Q)ⁿ would blow up or approach zero. But Qⁿ remains an orthogonal matrix. So det Q must be 1 or −1.
13. (a) Rule 3 (factoring -1 from each row) gives det(KT) = (-1)3 detK. Then -det K = det KT = det K gives det K = 0.
(b) [0 0 0 1; 0 0 1 0; 0 −1 0 0; −1 0 0 0] has det = 1.
15. Adding every column of A to the first column makes it a zero column, so det A = 0. If every row of A adds to 1, then every row of A – I adds to 0 and det (A – I) = 0.
But det A need not be 1: A = [1/2 1/2; 1/2 1/2] has det(A − I) = 0, but det A = 0 ≠ 1.
17. det(A) = 10, det(A⁻¹) = 1/10, det(A − λI) = λ² − 7λ + 10 = 0 for λ = 5 and λ = 2.
19. Taking determinants gives (det C) (det D) = (- 1)n (det D) (det C). For n even the reasoning fails (because (-1)n = +1) and the conclusion is wrong.
21. det(A⁻¹) = det([d −b; −c a]/(ad − bc)) = (ad − bc)/(ad − bc)² = 1/(ad − bc).
23. Determinant = 36 and determinant = 5.
25. det(L) = 1, det(U) = −6, det(A) = −6, det(U⁻¹L⁻¹) = −1/6, and det(U⁻¹L⁻¹A) = 1.
27. Row 3 – row 2 = row 2 – row 1 so A is singular.
29. A is rectangular, so det(AᵀA) ≠ (det Aᵀ)(det A): those determinants are not defined.
31. The Hilbert determinants are 1, 8 x 10-2, 4.6 x 10-4, 1.6 x 10-7, 3.7 x 10-12, 5.4 x 10-18, 4.8 x 10-25, 2.7 x 10-33, 9.7 x 10-43 2.2 x 10-53. Pivots are ratios of determinants, so the tenth pivot is near 10-53/1043 = 10-10: very small.
33. The largest determinants of 0-1 matrices f o r n = 1 , 2, … , are 1, 1, 2, 3, 5, 9, 32, 56, 144, 320, on the web at wwwmathworld.wolfram.com/HadamardsMaximum DeterminantProblem.html and also in the “On-Line Encyclopedia of Integer Sequences”: www. research. att. com. With -l s and l s, the largest 4 by 4 determinant (see Hadamard in index) is 16.
35. det (I + M) = 1 + a + b + c + d. Subtract row 4 from rows 1, 2, and 3. Then subtract a (row 1) + b (row 2) + c (row 3) from row 4. This leaves a triangular matrix with 1, 1, 1, and 1 + a + b + c + d on its diagonal.
Problem Set 4.3, page 215
1. (a) a12a21a34a43 = 1; even, so detA = 1.
(b) b13b22b31b14 = 18; odd, so detB = -18.
3. (a) True (product rule). (b) False (all ls). (c) False (det [1 1 0; 0 1 1; 1 0 1] = 2).
5. The 1, 1 cofactor is Fn_1. The 1, 2 cofactor has a 1 in column 1, with cofactor Fn_2. Multiply by (_ 1) 1+2 and also -1 to find Fn = Fn_1 + Fn-2- So the determinants are Fibonacci numbers, except Fn is the usual Fi_1.
7. Cofactor expansion: det = 4(3) – 4(1) + 4(-4) – 4(1) _ -12.
9. (a) (n – 1)n! (each term n – 1).
(b)
1 +
1
2!
+
+ / W. (n 1 1)!1
11.
det
0A BIB -BI
1
(c) 3 (n3 + 2n – 3).
[AB
L0
Al
I
= det(AB). Test A = [1
2], B =
0 A
2 -BI==
11
= 0 = det (AB). Singular:
det (AB); A = [], B = [1
rank(AB) < rank(A) < n < m.
2], det I -B
]
13. det A = 1 + 18 + 12 - 9 - 4 - 6 = 12, so rows are independent; det B = 0, so rows are dependent (row 1 + row 2 = row 3); det C = -1, C has independent rows.
15. Each of the six terms in det A is zero; the rank is at most 2; column 2 has no pivot.
17. a,1a23a32a44 has -, a,4a23a32a4, has +, so det A = 0; detB=2.4.4.2- 1 4.4.1=48.
19. (a) If all = a22 = a33 = 0 then four terms are sure zeros. (b) Fifteen terms are zero.
21. Some term aiaa2p a, in the big formula is not zero! Move rows 1, 2, ... , n into rows a, ,B, ..., w. Then these nonzero a's will be on the main diagonal.
23. 4! /2 = 12 even permutations; det (I + Peven) = 16 or 4 or 0 (16 comes from I + I).
25. C = [3 2 1; 2 4 2; 1 2 3] and ACᵀ = [4 0 0; 0 4 0; 0 0 4] = 4I. Therefore A⁻¹ = (1/4)Cᵀ.
27. Bnl=lfll - An1=(n+1)-n=1.
29. We must choose is from columns 2 and 1, columns 4 and 3, and so on. Therefore n
must be even to have det An 0. The number of exchanges is Zn so G = (_1)n/2.
31. S1 = 3, S2 = 8, S3, = 21. The rule looks like every second number in Fibonacci's sequence ... , 3, 5, 8, 13, 21, 34, 55, ... so the guess is S4 = 55. The five nonzero terms in the big formula for S4 are (with 3s where Problem 39 has 2s) 81 + 1 - 9 - 9-9=55.
33. Changing 3 to 2 in the corner reduces the determinant F2n+2 by 1 times the cofactor of that corner entry. This cofactor is the determinant of Sn_1 (one size smaller), which is Fen. Therefore changing 3 to 2 changes the determinant to F2n+2 - F2, which is Fen+,
35. (a) Every det L = 1; det Uk = det Ak = 2, 6, -6 for k = 1, 2, 3. (b) Pivots 5, s , 6*
37. The six terms are correct. Row 1 - 2 row 2 + row 3 = 0, so the matrix is singular. 39. The five nonzero terms in det A = 5 are
(2)(2)(2)(2) + (-1)(-1)(2)(2) - (2)(2)(-1)(-1) (2)(-1)(-1)(2).
41. With all = 1, the -1, 2, -1 matrix has det= 1 and inverse (A-')Z1 = n + 1 - max(i, j).
43. Subtracting 1 from the n, n entry subtracts its cofactor C,,, from the determinant. That cofactor is C,,,,, = 1 (smaller Pascal matrix). Subtracting 1 from 1 leaves 0.
Problem Set 4.4, page 225
1. det A = 20; ACᵀ = 20I, so A⁻¹ = Cᵀ/20.
3. (x, y) = (d/(ad − bc), −c/(ad − bc)); (x, y, z) = (3, −1, −2).
5. (a) The area of that parallelogram is the 2 by 2 determinant, which equals 2. (b) The triangle A′B′C′ has the same area; it is just moved to the origin.
7. The pivots of A are 2, 3, 6 from determinants 2, 6, 36; the pivots of B are 2, 3, 0. 9. (a) P2 takes (1, 2, 3, 4, 5) to (3, 2, 5, 4, 1).
(b) P-' takes (1, 2, 3, 4, 5) to (3, 4, 5, 2, 1).
11. The powers of P are all permutation matrices, so eventually one of those matrices must be repeated. If P' is the same as PS, then Pr-s = I.
13. (a) det A = 3, det B1 = −6, det B2 = 3, so x1 = −6/3 = −2 and x2 = 3/3 = 1. (b) |A| = 4, |B1| = 3, |B2| = −2, |B3| = 1. So x1 = 3/4, x2 = −1/2, and x3 = 1/4.
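Cramer's rule as used in Problem 13 replaces one column of A at a time by b. A generic sketch (the matrix here is an arbitrary example, not the one in the exercise):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
b = np.array([1., 2., 3.])

detA = np.linalg.det(A)
x = np.empty(3)
for j in range(3):
    Bj = A.copy()
    Bj[:, j] = b                   # replace column j by b
    x[j] = np.linalg.det(Bj) / detA

print(np.allclose(x, np.linalg.solve(A, b)))   # True
```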
15. (a) x1 = o and x2 = o : no solution (b) x1 = o and x2 =
: undetermined.
o
17. If the first column in A is also the right-hand side b then det A = det B1. Both B2
and B3 are singular, since a column is repeated. Therefore x, = I B11 / I Al = 1 and
x2=x3=0.
19. If all cofactors = 0 (even in a single row or column), then det A = 0 (no inverse).
A = 1 1] has no zero cofactors but it is not invertible.
21. If det A = 1 and we know the cofactors, then Cᵀ = A⁻¹ and det A⁻¹ = 1. Since A is the inverse of A⁻¹, A must be the cofactor matrix for C.
23. Knowing C, Problem 22 gives det A = (det C) ---7' with n = 4. So we can construct A-1 = CT/ det A using the known cofactors. Invert to find A.
25. (a) Cofactors C21 = C31 = C32 = 0
(b) C12 = C21, C31 = C13, C32 = C23 make S-1 symmetric.
27. (a) Area = det[3 2; 1 4] = 10. (b) Triangle area = 5. (c) Triangle area = 5.
29. (a) Area = (1/2) det[2 1 1; 3 4 1; 0 5 1] = 5. (b) 5 + new triangle area (1/2) det[2 1 1; 0 5 1; −1 0 1] = 5 + 7 = 12.
31. The edges of the hypercube have length 1 + 1 + 1 + 1 = 2. The volume det H is 24 = 16. (H/2 has orthonormal columns. Then det(H/2) = 1 leads again to det H = 16.)
3], so the triangle ABC has area
and also det A-1 = 1.
x2 x1
So triangles a + b + d =
(xl y2 - x2yi)
2
Check an example with (a, b) = (3, 2), (c, d) _ (1, 4), and area = 10. The line from (0, e) in step 3 has slope c/a and equation y = e+cx/a. Step 3 works because
(b, d) is on that line! d = e + cb/a is true, since ad - be = area ae at step 2.
35. The n-dimensional cube has 2n corners, n2n-1 edges, and 2n faces of dimension n - 1. The cube whose edges are the rows of 21 has volume 2n.
37'. J = r. The columns are orthogonal and their lengths are 1 and r. 8r/ax ar/ay cos 0 sin 6
39.
41. S = (2, 1, -1) gives a parallelogram, whose area is the length of a cross product: II P Q x PS 11 = 11 (-2, -2, -1) 11 = 3. This comes also from a determinant ! The other four corners could be (0, 0, 0), (0, 0, 2), (1, 2, 2), (1, 1, 0). The volume of the
tilted box is IdetI = 1.
xyz
43. det 3 2 1 = 0 = 7x - 5y + z; plane contains the two vectors. 123
45. VISA has the five reversals VI, VS, VA, IA, SA. And AVIS has the two reversals VI and VS. Since 5 - 2 is odd, VISA and AVIS have opposite parity.
Problem Set 5.1, page 240
1. λ = 2 and λ = 3; trace = 5, determinant = 6.
3. λ = −5 and λ = −4; both λ's are reduced by 7, with unchanged eigenvectors.
5. λ = 3, λ = 1, λ = 0, with eigenvectors (1, 0, 0), (2, −1, 0), (3, −2, 1); trace = 4, det = 0. λ = 2, λ = 2, λ = −2, with eigenvectors (1, 1, 1), (0, 1, 0), (1, 0, −1); trace = 2, det = −8.
7. Ax = λx gives (A − 7I)x = (λ − 7)x; Ax = λx gives x = λA⁻¹x, so A⁻¹x = (1/λ)x.
9. The coefficient of (−λ)ⁿ⁻¹ in (λ1 − λ)···(λn − λ) is λ1 + ··· + λn. In det(A − λI), a term that includes an off-diagonal aij excludes both aii − λ and ajj − λ. Such a term doesn't involve (−λ)ⁿ⁻¹. So the coefficient of (−λ)ⁿ⁻¹ in det(A − λI) must come from the product down the main diagonal. That coefficient is a11 + ··· + ann = λ1 + ··· + λn.
11. Transpose A − λI: det(A − λI) = det(A − λI)ᵀ = det(Aᵀ − λI).
13. The eigenvalues of A are 1, 2, 3, 7, 8, 9.
15. rank(A) = 1, λ = 0, ..., 0, n (trace n); rank(C) = 2, λ = 0, ..., 0, n/2, −n/2 (trace 0).
x1y2-x2Y1=rectangles A+B+D (not C).
Areas from same bases same heights
rectangle A = 2(triangle a) rectangle B = 2(triangle b) rectangle D = 2(triangle d).
17. The third row contains 6, 5, 4.
19. A and A2 and A°° all have the same eigenvectors. The eigenvalues are 1 and 0.5 for A, 1 and 0.25 for A2, 1 and 0 for A°°. Therefore A2 is halfway between A and A°O.
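Problem 19's pattern (A, A², and A^∞ share eigenvectors while the eigenvalues get squared or sent to 0 and 1) can be watched numerically. The matrix below is one example with eigenvalues 1 and 0.5, not necessarily the matrix in the exercise:

```python
import numpy as np

A = np.array([[0.75, 0.25],
              [0.25, 0.75]])          # eigenvalues 1 and 0.5

w, V = np.linalg.eig(A)
print(np.sort(w))                              # [0.5 1. ]
print(np.sort(np.linalg.eigvals(A @ A)))       # [0.25 1.  ], the squares

A_inf = np.linalg.matrix_power(A, 50)          # powers approach the lambda = 1 projection
print(A_inf)                                   # close to [[0.5, 0.5], [0.5, 0.5]]
```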
21. Al = 4 and A2 = -1 (check trace and determinant) with x1 = (1, 2) and x2 = (2, -1). A-1 has the same eigenvectors as A, with eigenvalues 1/A1 = 1/4 and 1/72 = -I-
23. (a) Multiply Ax to see Ax which reveals X. (b) Solve (A - 7,I )x = 0 to find x.
25. (a) Pu = (uuT)u u(uTU) = u, so ;, = 1. (b) Pv = (uuT)v = u(uTV) = 0, so A = 0. (c) x1 = (-1, 1, 0, 0), x2 = (-3, 0, 1, 0), x3 = (-5, 0, 0, 1) are orthogonal to u, so they are eigenvectors of P with A = 0.
27. A3 - 1 = 0 gives k = 1 and A = 2(-1 ± i,13); the three eigenvalues are 1, 1, -1. 29. (a) rank = 2. (b) det (BT B) = 0. Not (c). (d) (B + I)-' has (.l + 1)-i =
11 1,
31. a = 0, b=9, c = 0 multiply 1, A, a,2 in det (A - XI) = 9A - .13: A = companion (m" matrix.
33. I 0 0], [0 0]' HLamilton).
35. Ax = c,Alxl + A=B.
[-1 1]' Always A2 = zero matrix if A = 0, 0 (Cayley- + cnAnxn equals Bx = c1Atx1 + + cnAnxn for all x. So
37'
Ia
[a+b]
r
= (a + b) 11]; ;12 = d - b to produce trace = a + d.
d] [1] =
39. We need A3 = 1 but not A = 1 (to avoid I). With Al = e2n`13 and A.2 = e-2"`"3, the determinant will be A,A2 = 1 and the trace is ;1 +A2 = cos 23 +i sin 3 +cos 23 -
i sin 3 = -1. One matrix with this trace -1 and determinant 1 is A = [_i
Problem Set 5.2, page 250
[20 1
1.
o ].
1] [1 -1]
[0 0] [0 -2] [0 0] [0
0] [1 -11 -1'
20 _1
3. a, = 0, 0, 3; the third column of S is a multiple of
1 and the other columns are on the plane orthogonal to it. 1
5. Al and A3 cannot be diagonalized. They have only one line of eigenvectors. 1
7. A = [31
-1]
1 [3,5100+1
[0 5
[1
3.5100-3]
11
givesAt0o = [1
1
-1] [500
[3 1 -1]
4
5100-1
5100+3
Ol]
Ol]
."Y
ti.
,-.
acs
Q',
p''
COD
ANY
.-1
-CS
,--+
``'
t-: V7,
Sri
AIM
NAM .moo
Et,
9. trace(AB) = trace(BA) = aq +bs +cr +dt. Then trace(AB - BA) = 0 (always). So AB - BA = I is impossible for matrices, since I does not have trace zero.
11. (a) True; det A = 2 0 0. diagonal!
(b) False;
l11 100
0 1 1 (c) False; 0 1 0 is 002 002
13. A =
15.
[0 3]
17. A = [0
-1] [0
[0 1
][2 0
0]
-1]
1
'
four square roots. [1 2]'
33. Bk
1 3 0 k 1 1 2k]
[1
[1
1 5[0
1 ]
19. (a) False: don't know A's. of S!
(b) True. (c) True. (d) False: need eigenvectors
[0
- [ 2] [0
3]
[0 -1] [0 2] [0 -1]
130k 3k2-k
33] [0
1]' [2 2]
23]. - 0 5'
1
21. The columns of S are multiples of (2, 1) and (0, 1) in either order. Same for A-'.
23. A and B have a,I = 1 and A2 = 1. A -I- B has Al = 1, A2 = 3. Eigenvalues of A + B are not equal to eigenvalues of A plus eigenvalues of B.
25. (a) True. (b) False. (c) False (A might have 2 or 3 independent eigen- vectors).
27. A = [ 3 2] (or other), A = [ 4 4], A 10 5]; only eigenvectors are (c, -c).
29. SAkS-1 approaches zero if and only if every IAI < 1; Bk .-* 0 from k = .9 and A, = .3.
31. A = .9 0 S- 3 -3; B10 _ (.9) Blo _ (.3)10 [ 1], 0 .3 ' [1 1]' 10 [1J [ 1]
B 10 [0J = sum of those two.
35. trace AB = (aq + bs) + (cr + dt) = (qa + rc) + (sb + td) = trace BA. Proof for diagonalizable case: the trace of SAS-1 is the trace of (AS-1)S = A, which is the sum of the A's.
37. The A's form a subspace, since cA and Al + A2 have the same S. When S = I, the A's give the subspace of diagonal matrices. Dimension 4.
39. Two problems: The nullspace and column space can overlap, so x could be in both. There may not be r independent eigenvectors in the column space.
41. A = [1 0] has A2 = Hamilton.
[1
1], and A2 - A - I = zero matrix confirms Cayley-
43. By 5F, B has the same eigenvectors (1, 0) and (0, 1) as A, so B is also diagonal. The
b0
equations AB - BA = [2c 2d] - [ac 2d] = [0 01 are -b = 0 and c = 0:
rank 2.
45. A has Al = 1 and A2 = .4 with x1 = (1, 2) and x2 = (1, -1). A°° has Al = 1 and A2 = 0 (same eigenvectors). A100 has Al = 1 and A2 = (.4)100, which is near zero. So A1oo is very near A°°.
Problem Set 5.3, page 262
1. The Fibonacci numbers start even, odd, odd. Then odd + odd = even. The next two are odd (from odd + even and even + odd). Then repeat odd + odd = even.
3. A2 = 112 2] 1], A3 = [2
A4
=[I3
2]; F20 = 6765.
5. A =
SA5-1 1 01
SAks-1 - 1
1 [ 0
X2
] [-1 A1] [0] [(Ai A2)/()'1 A2)] ---
0]
X1 A2] X1 - X2 [ 1 1
a,l] (notice S-1). 0 1 -A2 1 - - - - - -
= 11
X1
2 [ 11 121 [ 0 ] [-1
7. Direct addition Lk + Lk+1 gives L0, ..., L10 as 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123. My calculator gives ),1° = (1.618...)10 = 122.991..., which rounds off to
L10 = 123.
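The Lucas numbers in Problem 7 obey the same recursion as the Fibonacci numbers, so both come from powers of [1 1; 1 0]; a quick check with numpy:

```python
import numpy as np

F = np.array([[1, 1],
              [1, 0]])                     # Fibonacci / Lucas recursion matrix

# Lucas numbers: L0 = 2, L1 = 1, L_{k+1} = L_k + L_{k-1}
u = np.array([1, 2])                       # (L1, L0)
for _ in range(9):
    u = F @ u                              # advances to (L_{k+1}, L_k)
print(u[0])                                # 123 = L10

print(np.linalg.matrix_power(F, 10)[0, 1]) # 55 = F10, the Fibonacci number
```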
9. The Markov transition matrix is
11. (a) A = 0, (1, 1, -2). forA=1.
13. (a) 0 1/2, and stable for jai < 1/2. Neutral for a = ±1/2.
0 0 2 1 1 2- 19. A2= 0 0 0 and A3=0.So(I-A)-1=I+A+A2= 0 1 1 000 001
21. If A is increased, then more goods are consumed in production and the expansion must be slower. Mathematically, Ax > tx remains true if A is increased; t,,a,, goes up.
23.32 [2 3]
25. R =
its trace is not real. Note that [ can have = i and -i, and real square
01 root 1-1 0]’
27. A = SA1S-1 and B = SA2S-1. Diagonal matrices always give A1A2 = A2A1. Then AB = BA, from SA1S-1SA2S-1 SA1A2S-1 = SA2A1S-1 = SA2S-1SA1S-1 = BA.
29. BhasA=iand-i,soB4hasA4=1and1;ChasA=(1± -,/3i)/2 =exp(+7ri/3),
= 2 [1 SOS-1 =
5 1[11
1] [0 Ol] [-1 1] and Ak = 2 1]
[510 0
11] [-1 11
so A3=-1 and -1. Then C3 = -I
Problem Set 5.4, page 275
andC1024=-C.
[i 2] has R2 = A. /B would have A =
and A = , so
1. Al = -2 and A2 = 0;x1 = (1, -1) and x2 = (1, 1);
eAt _ 1 [ 2
[e2,_
3. u(t)
5. (a) eA(t+T) = SeA(t+T)S-1 = Se AteATS-1 = Se
e-21 + 1 -e-2t + 1] _e-2t + 1 e-2t + 1
+2; as t -+ 00,e2t ->+00.
(b) eA = I + A = [1 ],eB = I + B = [ 1 ], A + B.= [0 -011
gives
cA+B = [cos 1 -sin 1] from Example 3 in the text, at t = 1. This matrix is sin 1 cos 1
different from eAeB.
7. eAt = I + At = [1 1]; eAtu(0) _
[4t 4+ 3]
9. (a) Al = +2 57 AZ = 2-z 37 Re Al > 0, unstable.
(b) Al = -N/7, A2 = Real > 0, unstable
-‘V 7,
Re Al > 0, unstable (c) Al = -1 ts, A2 = -1 (d) Al = 0, A2 = -2, neutrally stabale.
ts,
11. Al is unstable fort < 1, neutrally stable fort > 1. A2 is unstable fort < 4, neutrally stable at t = 4, stable with real A for 4 < t < 5, and stable with complex A for t > 5. A3 is unstable for all t > 0, because the trace is 2t.
13. (a) u’1 = cue -bu3, u’2 = -cu I +au3, u3 = but -au2 gives u1u1 +u2u2+u3u3 = 0. (b) Because ell is an orthogonal matrix, I I u (t) II2 = II eAtu (0) II2 = 11 u (0) II2 is
(c) A = 0 and +(/2 44-2 + c2)i. Skew-symmetric matrices have pure imaginary JA’s.
1 15. u(t) = 2 cos2t I-1 + 1 cos 16-t
1
L1
17. Ax =)Fx+))2xor (A-AF-).2I)x=0.
19. Eigenvalues are real when (trace)^2 − 4 det > 0: −4(−a^2 − b^2 + c^2) > 0 gives a^2 + b^2 > c^2.
21. u l = e u2 = et -1 If u(0) = (5, -2), then u(t) = 3e 4t I0′] + let [_i] 0
(5 ± 41).
29. y(t) = cos t starts at y(0) = 1 and y′(0) = 0. The vector equation has u = (y, y′) = (cos t, −sin t).
31. Substituting u = e^{ct}v gives ce^{ct}v = Ae^{ct}v − e^{ct}b, or (A − cI)v = b, or v = (A − cI)^{-1}b = particular solution. If c is an eigenvalue, then A − cI is not invertible: this v fails.
33. d(e^{At})/dt = A + A^2 t + ½A^3 t^2 + (1/3!)A^4 t^3 + … = A(I + At + ½A^2 t^2 + (1/3!)A^3 t^3 + …) = Ae^{At}.
23. y/’/] =
25. λ1 = 0 and λ2 = 2. Now v(t) = 20 + 10e^{2t} → ∞ as t → ∞.
27. A = [0 1; −9 6] has trace 6, det 9, λ = 3 and 3, with only one independent eigenvector (1, 3). That gives y = ce^{3t}, y′ = 3ce^{3t}. Also te^{3t} solves y″ = 6y′ − 9y.
t=0.
[4 5]
[y’]. Then A =
01] 0
1 11 13 0 i
e)]
39. A = [0 2
then eAt = [0
t
3] – [2
–
41. (a) The inverse of e^{At} is e^{−At}. (b) If Ax = λx then e^{At}x = e^{λt}x, and e^{λt} is never zero.
43. A = 2 and 5 with eigenvectors
Problem Set 5.5, page 288
0] [0
‘
[i] and [i] . Then A = SAS-1 = [-3
2
35. The solution at time t + T is also e^{A(t+T)}u(0). Thus e^{At} times e^{AT} equals e^{A(t+T)}.
– I at 0.
8′
37. If A^2 = A then e^{At} = I + At + ½At^2 + (1/3!)At^3 + … = I + (e^t − 1)A.
(e3e3t
[et [et
[1O 0]
1 + 01 et01]-
1]
1
1 z
1. (b) sum = 4 + 3i; product = 7 + i. (c) The conjugates are 3 − 4i and 1 + i; |3 + 4i| = 5; |1 − i| = √2. Both numbers lie outside the unit circle.
3. x̄ = 2 − i, x x̄ = 5, xy = −1 + 7i, 1/x = 2/5 − (1/5)i, x/y = 1/2 − (1/2)i; check that |xy| = √50 = |x| |y| and |1/x| = 1/√5 = 1/|x|.
5. (a) x^2 = r^2 e^{i2θ}, x^{-1} = (1/r)e^{−iθ}, x̄ = re^{−iθ}; x^{-1} = x̄ gives |x|^2 = 1: on the unit circle.
2 i -i
7. C = -i 0 [i 0 1] = -i 1 0 , CH = C because (AHA)H = AHA.
-i r1 i 0
01 i01
9. (a) det AT = det A but det AH = det A. (b) AH = A gives det A = det A = real.
11. P : ,kl = 0, -2 = 1, xl = 1/^ x2 = 1/ 2 Q : ;I1 = 1, )2 = -1, x1 = X2 = / X2 = L-2//] .
11/V/2
13. ‘(a) u, v, w are orthogonal to each other. (b) The nullspace is spanned by u; the left nullspace is the same as the nullspace; the row space is spanned by v and w; the column space is the same as the’row space. (c) x = v + w; not unique, we can add any multiple of u to x. (d) Need bTU = 0. (e) S-1 = ST; S-‘AS =
diag(0, 1, 2).
15. The dimension of S is n (n + 1) /2, not n. Every symmetric matrix A is a combination of n projections, but the projections change as A changes. There is no basis of n fixed projection matrices, in the space S of symmetric matrices.
17. (UV)H(UV) = VHUHUV = VHIV = I. So UV is unitary.
19. The third column of U can be (1, -2, i)//, multiplied by any number e1B
21. A has + 1 or -1 in each diagonal entry; eight possibilities.
23. Columns of Fourier matrix U are eigenvectors of P because PU = diag(1, w, w2, w3)U (and w = i).
25. n2 steps for direct C times x; only n log n steps for F and F-1 by FFT (and n for A).
31. P^2 = [0 0 1; 1 0 0; 0 1 0], P^3 = I, P^100 = P^99 P = P; λ = cube roots of 1: 1, e^{2πi/3}, e^{4πi/3}. The circulant C = 2 + 5P + 4P^2 of Problem 33 then has eigenvalues 2 + 5 + 4, 2 + 5e^{2πi/3} + 4e^{4πi/3}, and 2 + 5e^{4πi/3} + 4e^{8πi/3}.
1+i 1j’
35. A
l+
11
I2
0 -1
-1-i 1
-1-i -il+i
e2ni/3 e4ni/3
2 5 4
1
2 0 1+i
27. AHA = 0 2 1 + i and AAH = [1
(AHA)H = AHAHH = AHA again.
29. cA is still Hermitian for real c; (iA)^H = −iA^H = −iA is skew-Hermitian.
33. C = 4 2 5 4
=- K=(iAT)=[1-i
11
0 1 1
5 =2+5P+4 p2 has 7,(C) = 2
-1+i] 1
]-
2i 0]
j[0
1-i 1
are Hermitian matrices.
1L [1 -1+i 37. V=L [1+v
[10 _1O1
1 1+- 1i 1 ,/3-
]with
V = V11 gives-real A, unitary gives IAI = 1, then trace zero gives A = 1, -1.
43. [1] and [-1];
a
b +a cj
with a2 + b2 + c2 = 1.
[it
49. A=
[2+2i
1+i
39. Don’t multiply e` times e”; conjugate the first, then fo e2`x dx = [e2`x/2i ]o = 0. 41. R + i S = (R + i S)H = RT – i ST ; R is symmetric but S is skew-symmetric.
is
[b
45. (I − 2uu^H)^H = I − 2uu^H; (I − 2uu^H)^2 = I − 4uu^H + 4u(u^H u)u^H = I; the matrix uu^H projects onto the line through u.
47. We are given A + iB = (A + iB)^H = A^T − iB^T. Then A = A^T and B = −B^T.
1
1 2 i] [0 41 6 1
2]
1. C = N^{-1}BN = N^{-1}M^{-1}AMN = (MN)^{-1}A(MN); only M^{-1}IM = I is similar to I.
3. If λ1, …, λn are eigenvalues of A, then λ1 + 1, …, λn + 1 are eigenvalues of A + I. So A and A + I never have the same eigenvalues, and can't be similar.
5. If B is invertible, then BA = B(AB)B-1 is similar to AB.
7. The (3, 1) entry of M-1 AM is g cos 9 + h sin 0, which is zero if tan 0 = -g/ h. 9. The coefficients are cl = 1, c2 = 2, d1 = 1, d2 = 1; check Mc = d.
11. The reflection matrix with basis v1 and v2 is A = [1
o]. The basis V1 and V2 (same MBM-1.
then A =
1 1reflection!) gives B = 0
-10].
[1 If M =1 I
-lj 1
010 000
i +
= SAS-1. Real eigenvalues I and4.
Problem Set 5.6, page 302
13. (a) D = [0 1 0; 0 0 2; 0 0 0]. (b) D^3 = 0 = third derivative matrix. The third derivatives of 1, x, and x^2 are zero, so D^3 = 0. (c) λ = 0 (triple); only one independent eigenvector (1, 0, 0).
15. The eigenvalues are 1, 1, 1, -1. Eigenmatrices 10
19. The 1,1 entries of T^H T = TT^H give |t11|^2 = |t11|^2 + |t12|^2 + |t13|^2, so t12 = t13 = 0. Comparing the 2,2 entries of T^H T = TT^H gives t23 = 0. So T must be diagonal.
21. If N = UΛU^{-1} (with unitary U), then NN^H = UΛU^{-1}(U^{-1})^H Λ^H U^H equals UΛΛ^H U^H. This is the same as UΛ^H ΛU^H = (UΛU^{-1})^H(UΛU^{-1}) = N^H N. So N is normal.
01′ [1
17. (a) TT^H = U^{-1}AU U^H A^H (U^{-1})^H = I. (b) If T is triangular and unitary, then its diagonal entries (the eigenvalues) must have absolute value 1. Then all off-diagonal entries are zero because the columns are to be unit vectors.
23. The eigenvalues of A(A − I)(A − 2I) are 0, 0, 0.
25. Always [a^2 + bc  ab + bd; ac + cd  bc + d^2] − (a + d)[a b; c d] + (ad − bc)[1 0; 0 1] = [0 0; 0 0].
27. M-1 J3 M = 0, so the last two inequalities are easy. Trying for MJl = J2 M forces the first column of M to be zero, so M cannot be invertible. Cannot have Jl = M-1 J2 M.
29. A1o = 210
61 45 -80 -59;e
A =e2 [00
13 9 -16 -11
11 0][11 O1 31 [01 0′ [10 1′ 0′
itself.
1 0] y [0 11 y are similar; 0 1 b itself and 1 b
33. (a) (M^{-1}AM)(M^{-1}x) = M^{-1}(Ax) = M^{-1}0 = 0. (b) The nullspaces of A and of M^{-1}AM have the same dimension. Different vectors and different bases.
[ek0
[C20 2 [c’0 c2]’ J3-
35. JZ-
37. w(t)_(w(0)+tx(0)+It2y(0)+1t3z(0))e51
26
39. (a) Choose M; = reverse diagonal matrix to get M[‘ JiM, = MT in each block (b) M0 has those blocks Mi on its diagonal to get Mo 1 JM0 = JT.
(c) AT = (M-i)TJTMT is (M-1)TMo 1 JMOMT = (MMOMT)-IA(MMOMT), and AT is similar to’A.
41. (a) True: One has A = 0, the other doesn’t. (b) False. Diagonalize a nonsym-
metric matrix and A is symmetric. (c) False: [ 0 ®] and [0 ] are similar.
(d) True: All eigenvalues of A + I are increased by 1, thus different from the eigenvalues of A.
43. Diagonals 6 by 6 and 4 by 4; AB has all the same eigenvalues as BA plus 6 – 4 zeros.
Problem Set 6.1, page 316
1. ac – b2 = 2 – 4 = -2 < 0; x2 + 4xy + 2y2 = (x + 2y)2 - 2y2 (difference of squares).
3. det(A − λI) = λ^2 − (a + c)λ + ac − b^2 = 0 gives λ1 = ((a + c) + √((a − c)^2 + 4b^2))/2 and λ2 = ((a + c) − √((a − c)^2 + 4b^2))/2; λ1 > 0 is a sum of positive numbers; λ2 > 0 because (a + c)^2 > (a − c)^2 + 4b^2 reduces to ac > b^2. Better way: product λ1λ2 = ac − b^2.
5. (a) Positive definite when -3 < b < 3.
3c2]' Jk=
kck 1]' J0-I' J 1- [c 01 c-1
]
1] [0 0]
1] 10
-
11
[b [0 9 0 b2] [0
(c) The minimum is
2(9 1 b2)
(b) [b 9]
when [b
y -* oo, x = -3y, then x - y approaches -oo.
b] ®]
[y] = [o], which is 1YI = 1
[-b].
(d) No minimum, let
1]'
9 b2
1 -1-1 and A2 = -1 2 -2 -1-211
(b) fl = (x1 - x2 - x3)2 = 0 when xl - x2 - x3 = 0.
7. (a) Al = -1
1 -1-1
11 -11 1
(c) f2 = (xl - x2 - x3)2 + (x2 - 3x3)2 + x3 ;L=
1 0]
100 -1 1 0 -1 -3 1
9. A -
]; the coefficients of the squares are the
0 4] [0 1
pivots in D, whereas the coefficients inside the squares are columns of L.
11. (a) Pivots are a and c − |b|^2/a and det A = ac − |b|^2. (b) Multiply |x2|^2 by (c − |b|^2/a). (c) Now x^H Ax is a sum of squares. (d) det = −1 (indefinite) and det = +1 (positive definite).
13. a> 1 and (a – 1)(c – 1) > b2. This means that A – I is positive definite.
15. f(x, y) =x2+4xy+9y2 = (x+2y)2+5y2; f(x, y) = x2+6xy+9y2 = (x + 3y)2.
17. xTATAx = (Ax)T (Ax) = length squared = 0 only if Ax = 0. Since A has indepen- dent columns, this only happens when x = 0.
19. A = [4 −4 8; −4 4 −8; 8 −8 16] has only one pivot = 4, rank = 1, eigenvalues 24, 0, 0, det A = 0.
21. axe + 2bxy + cy2 has a saddle point at (0, 0) if ac < b2. The matrix is indefinite O,
[6
16] [2
11
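The positive-definiteness tests used throughout this problem set (eigenvalues, leading determinants, a Cholesky factorization) are easy to confirm numerically. A minimal sketch, with an assumed example matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])          # assumed positive definite example

eigs = np.linalg.eigvalsh(A)               # all eigenvalues should be positive
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]   # leading determinants
chol_ok = True
try:
    np.linalg.cholesky(A)                  # succeeds only for positive definite A
except np.linalg.LinAlgError:
    chol_ok = False

print(eigs > 0, minors, chol_ok)           # all True, [2.0, 3.0, 4.0], True
```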
Problem Set 6.2, page 326
1. A is positive definite for a > 2. B is never positive definite: notice
114
7
I.
3. det A = -2b3 – 3b2 + 1 is negative at (and near) b = 3
5. If x^T Ax > 0 and x^T Bx > 0 for any x ≠ 0, then x^T(A + B)x > 0: condition (I).
7. Positive X’s because R is symmetric and N/A > 0. R = [1
3]’ R =
3].
9. |x^T Ay|^2 = |x^T R^T Ry|^2 = |(Rx)^T Ry|^2 ≤ (by the ordinary Schwarz inequality) ‖Rx‖^2 ‖Ry‖^2 = (x^T R^T Rx)(y^T R^T Ry) = (x^T Ax)(y^T Ay).
11. A = [- 2] has A = 1 and 4, axes 1 [,]and 2 [-1 along eigenvectors.
13. Negative definite matrices: (I) x^T Ax < 0 for all nonzero vectors x. (II) All the eigenvalues of A satisfy λ < 0. (III) det A1 < 0, det A2 > 0, det A3 < 0. (IV) All the pivots (without row exchanges) satisfy d < 0. (V) There is a matrix R with independent columns such that A = −R^T R.
15. False (Q must contain eigenvectors of A); True (same eigenvalues as A); True (QTAQ = Q-'AQ is similar to A); True (eigenvalues of e-A are 0).
17. Start from a_jj = (row j of R^T)(column j of R) = length squared of column j of R. Then det A = (det R)^2 = (volume of the R parallelepiped)^2 ≤ product of the lengths squared of all the columns of R. This product is a_11 a_22 ⋯ a_nn.
41. det
6-4,%/18 [-3-X/18
54 6-4?/18] = 5
19. A = [2 −1 0; −1 2 −1; 0 −1 2] has pivots 2, 3/2, 4/3; A = [2 −1 −1; −1 2 −1; −1 −1 2] is singular, with A[1; 1; 1] = [0; 0; 0].
21. xTAx is not positive when (xl, x2, x3) = (0, 1, 0) because of the zero on the diagonal. ,
23. (a) Positive definite requires positive determinant (also: all X > 0). (b) All pro- jection matrices except I are singular. (c) The diagonal entries of D are its eigen-
values. (d) The negative definite matrix -I has det = +1 when n is even.
25. λ1 = 1/a^2 and λ2 = 1/b^2, so a = 1/√λ1 and b = 1/√λ2. The ellipse 9x^2 + 16y^2 = 1 has axes with half-lengths a = 1/3 and b = 1/4.
27. A = […] has CC^T = …; C = […].
29. ax^2 + 2bxy + cy^2 = a(x + (b/a)y)^2 + ((ac − b^2)/a)y^2; 2x^2 + 8xy + 10y^2 = 2(x + 2y)^2 + 2y^2.
1×3)2 3 TBx 31. xTAx = 2(X1 – 12×2 – 2 + 2 (xa – x3)2, . x
pivot.
33. A and CT AC have X, > 0, ).2 = 0. C() t 1
= (xl + x2 + x3)2 B has one
1
R = [2 0]; C has one positive and one negative eigenvalue, but I has two positive
eigenvalues.
35. The pivots of A − 2I are 2.5, 5.9, −0.81, so one eigenvalue of A − 2I is negative. Then A has an eigenvalue smaller than 2.
i
37. rank(C^T AC) ≤ rank A, but also rank(C^T AC) ≥ rank((C^T)^{-1}C^T ACC^{-1}) = rank A.
39. No. If C is not square, CT AC is not the same size matrix as A.
Eigenvectors
-3 -)x/18
=tQ -+ – (1-t)QR,Q=
10 0 -1′
= 0 gives X1 = 54, X2 =
43. Groups: orthogonal matrices; e” for all t; all matrices with det = 1. If A is positive definite, the group of all powers Ak contains only positive definite matrices.
148 8 251′
1-t
Problem Set 6.3, page 337
1. A^T A = [5 20; 20 80] has only σ1^2 = 85, with v1 = [1/√17; 4/√17]; so v2 = [4/√17; −1/√17].
3.
ATA = [1
3 +
has eigenvalues (TI = and v2 =
15. A+
4 B 0 1 1 0 0 0 1 0 B+_ ® O C+_
20 20
Since A = AT, the eigenvectors of ATA are the same as for A. Since 2 =
is negative, a1 = )1 but o”2 = The unit eigenvectors are the same as in
2 Section 6.2 for A, except for the effect of this minus sign (because we need Av2
0-2 U2):
5. AAT =
ATA =
[2 1] 12
has a1 = 3 with u1 =
and nullvector v3 =
u1=V1=
and u2 = -V2 =
Then [0 = [u1 U2] 1 Ol]
0] [v1 v2 [ 0 10 0
17. ATA = [16 to obtain S =
16] = 2 [1
[1 -1 [20
1]
16] [-1
1]’ take square roots of 4 and 16
3] and Q = AS-1 =
-i- 4
1//
has 61 = 3 with v1 = 2// 1/6
1/ /
6z = 1 with v2 =
]T.
V3
7. A = 12 uvT has one singular value of = 12.
9. Multiply UΣV^T using columns (of U) times rows (of ΣV^T).
11. To make A singular, the smallest change sets its smallest singular value σ2 to zero.
13. The singular values of A + I are not σ_j + 1. They come from the eigenvalues of (A + I)^T(A + I).
4′ [1 0] [® 1 0] 0 0 1 1
0 ®
4-
A+ is the right-inverse of A; B+ is the left-inverse of B.
[40
[_31
3].
10
2
4] [-1 1]
=
19. (a) With independent columns, the row space is all of R^n; check (A^T A)A^+ b = A^T b. (b) A^T(AA^T)^{-1}b is in the row space because A^T times any vector is in that space; now (A^T A)A^+ b = A^T AA^T(AA^T)^{-1}b = A^T b. Both cases give A^T Ax^+ = A^T b.
3-,/-5 22
1;12/V/-1
1/lE z
and 62 = 1 with u2 =
[31
,
-1/
17
(1- -.,15-)
acv acv
\\\
.-4
Aim
NIA-`
WW1
`”‘
11
21.TakeA=[1
i
z
0j’-
[0 01 __
.- [i 11-
..
we have A+ =
23. A = Q1ΣQ2^T gives A^+ = Q2Σ^+Q1^T. Then AA^+ = Q1ΣΣ^+Q1^T. Squaring gives (AA^+)^2 = Q1ΣΣ^+ΣΣ^+Q1^T = Q1ΣΣ^+Q1^T. So we have projections: (AA^+)^2 = AA^+ = (AA^+)^T and similarly for A^+A. AA^+ and A^+A project onto the column space and row space of A.
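These projection identities are easy to verify with the SVD-based pseudoinverse. A short sketch (the random example matrix and the names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a 4 by 5 matrix of rank 3
Aplus = np.linalg.pinv(A)                                        # pseudoinverse built from the SVD

P_col = A @ Aplus        # should project onto the column space of A
P_row = Aplus @ A        # should project onto the row space of A
print(np.allclose(P_col @ P_col, P_col), np.allclose(P_col, P_col.T))
print(np.allclose(P_row @ P_row, P_row), np.allclose(P_row, P_row.T))
```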
Problem Set 6.4, page 344
1. P(x) = ½x^T Ax − x^T b = x1^2 − x1x2 + x2^2 − x2x3 + x3^2 − 4x1 − 4x3 has ∂P/∂x1 = 2x1 − x2 − 4, ∂P/∂x2 = −x1 + 2x2 − x3, and ∂P/∂x3 = −x2 + 2x3 − 4.
3. aPi/ax=x+y=0andaPi/ay=x+2y-3=0give x=-3andy r=3.P2
has no minimum (let y oo). It is associated with the semidefinite matrix I 1
0]’
5. Put x = (1, …, 1) in Rayleigh's quotient (the denominator becomes n). Since R(x) is always between λ1 and λn, we get nλ1 ≤ x^T Ax = sum of all a_ij ≤ nλn.
7. Since xT Bx > 0 for all nonzero vectors x, xT (A + B)x will be larger than xT Ax. So the Rayleigh quotient is larger for A + B (in fact all n eigenvalues are increased).
9. Since xT Bx > 0, the Rayleigh quotient for A + B is larger than the quotient for A. 11. The smallest eigenvalues in Ax = X x and Ax = X Mx are 1 and (3 – )/4.
13. (a) λ = min_{S_j} [max_{x in S_j} R(x)] > 0 means that every S_j contains a vector x with R(x) > 0. (b) y = C^{-1}x gives the quotient R(y) = y^T C^T ACy/y^T y = x^T Ax/x^T x = R(x) > 0.
15. The extreme subspace S2 is spanned by the eigenvectors x1 and X2-
17. If Cx = C(A-1b) equals d then CA-lb – d is zero in the correction term in equation (5).
Problem Set 6.5, page 350
1. Ay = b is 4[2 −1 0; −1 2 −1; 0 −1 2][y1; y2; y3] = b = [1/2; 1/2; 1/2], so y = (3/16, 4/16, 3/16). The linear finite element U = (3/16)V1 + (4/16)V2 + (3/16)V3 equals the exact u = 3/16, 4/16, 3/16 at the nodes x = 1/4, 1/2, 3/4.
2-10 2 1
3. A33 = 3, b3 = 3. Then A = 3 -1 2 -l , b = – 2 , Ay = b gives
0 -1
3 11
5. Integrate by parts: f – Vi”Vj dx f V’V dx – [VI Vj]x=o = f o V’V dx =
same A11.
oo
7. A = 4, M = 3. Their ratio 12 (Rayleigh quotient on the subspace of multiples of V (x)) is larger than the true eigenvalue A = r2.
9. The mass matrix M is h/6 times the 1, 4, 1 tridiagonal matrix.
Problem Set 7.2, page 357
1. If Q is orthogonal, its norm is ‖Q‖ = max ‖Qx‖/‖x‖ = 1 because Q preserves length: ‖Qx‖ = ‖x‖ for every x. Also Q^{-1} is orthogonal and has norm one, so c(Q) = 1.
3. ‖ABx‖ ≤ ‖A‖ ‖Bx‖ by the definition of the norm of A, and then ‖Bx‖ ≤ ‖B‖ ‖x‖. Dividing by ‖x‖ and maximizing, ‖AB‖ ≤ ‖A‖ ‖B‖. The same is true for the inverse, ‖B^{-1}A^{-1}‖ ≤ ‖B^{-1}‖ ‖A^{-1}‖; c(AB) ≤ c(A)c(B) by multiplying these inequalities.
5. In the definition ‖A‖ = max ‖Ax‖/‖x‖, choose x to be the particular eigenvector in question; ‖Ax‖ = |λ| ‖x‖, so the ratio is |λ| and the maximum ratio is at least |λ|.
7. A^T A and AA^T have the same eigenvalues, since A^T Ax = λx gives AA^T(Ax) = A(A^T Ax) = λ(Ax). Equality of the largest eigenvalues means ‖A‖ = ‖A^T‖.
9. A = [ 0 1], B = [0 0], Amax (A + B) > Amax (A) + Amax (B) (since 1 > 0 + 0), and Am (AB) > Amax(A)Amax(B). So Amax(A) is not a norm.
11. (a) Yes, c(A) = ‖A‖ ‖A^{-1}‖ = c(A^{-1}), since (A^{-1})^{-1} is A again. (b) A^{-1}b = x leads to ‖δb‖ ≤ ‖A‖ ‖δx‖ and ‖x‖ ≤ ‖A^{-1}‖ ‖b‖. This is ‖δx‖/‖x‖ ≥ (1/c) ‖δb‖/‖b‖.
13. ‖A‖ = 2 and c = 1; ‖A‖ = … and c is infinite (singular!); ‖A‖ = √2 and c = 1.
15. If Amax = Amin = 1, then all Ai = 1 and A = SIS-1 = I. The only matrices with IIAII = IIA-1 II = 1 are orthogonal matrices, because ATA has to be I.
17. The residual b - Ay = (10-1, 0) is much smaller than b - Az = (.0013,.0016). But z is much closer to the solution than y.
19. xi + + xn is not smaller than max(x?) =
and not larger than + xn < n max(x?), so x y = llx ll l By
+ Ixnl)2, which is (Ilxlll)2 Certainly xi + llx ll < IIx 1j.. Choose y = (sign xl, sign x2, ... , sign
(1x11 +
Schwarz, this is at most II x II Il y ll = ll x ll . Choose x = (1, 1, ... , 1) for maximum ratios In-.
21. The exact inverse of the 3 by 3 Hilbert matrix is A^{-1} = [9 −36 30; −36 192 −180; 30 −180 180].
23. The largest ‖x‖ = ‖A^{-1}b‖ is 1/λ_min; the largest error is 10^{-16}/λ_min.
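Problems 21-23 are about how ill-conditioning magnifies errors; a quick numerical sketch with the 3 by 3 Hilbert matrix and the inverse quoted in Problem 21:

```python
import numpy as np

H = np.array([[1, 1/2, 1/3],
              [1/2, 1/3, 1/4],
              [1/3, 1/4, 1/5]])
Hinv = np.array([[9, -36, 30],
                 [-36, 192, -180],
                 [30, -180, 180]])

print(np.allclose(H @ Hinv, np.eye(3)))   # confirms the exact inverse above
print(np.linalg.cond(H))                  # about 524: already ill-conditioned at n = 3
```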
[21 2 2 1 25. Exchange 12 -11 = U with P = [01
22] to 0] [0
] and
L= [5
1020
r
2 2 0 __, 2 2 0 2 2 0
0].A- LL
101 0-1
020 In20 0-1
220 010100
0 2 0 U. Then PA = LU with P = 0 0 1
and L=
0 1 0 .5 -.5 1
normalized to unit vector.
1 0 0
1. uo = 01' u 1 = -1 ,u2 = 4 ' u3 = [-13]; u°° 11 2
0 0 1
7. U = [, cos D
9'
ThenRQ=
(QT
... Qo A Qo
= U-1 and then U-1AU =
-5 0
9 12 25 25 12 16 25 25
sin D_ [sin D 0
_
-sin D cos 0 ]
cos D sin D -sin2 D '
11. Assume that (Qo
. Ro) is the QR factorization of Ak
H]
cos D QR - [sin 0
[c(i+s2)-23]. s
-s -s c
Qk_1)(Rk_I . .
(certainly true if k = 1). By construction, Ak+1 = Rk Qk, so Rk = Ak+I Qk =
1 12
-3 14 11 and PAP-' =
2
1 12 0
0
4 °11
L° 2 °J
84
-11 and
PAP-1 =
4] or
1 10 [3
2].
11
15. P1 A uses 4n multiplications (2 for each entry in rows i and j). By factoring out
cos D, the entries 1 and f tan 0 need only 2n multiplications, which leads to 3 n3 for PR.
Problem Set 7.4, page 372
1. D-1(-L - U) = 2 0 , ° 2 °2l
eigenvalues it = 0, ±1//; (D + L)-'(-U) _
2
I , eigenvalues 0, 0, 1/2; coops = 4-
reducing X,,,,
to 3 -2A/2- 0.2.
Qk) Q. Postmultiplying by (Rk_I .. Re), the assumption gives
Rkk...Ro=QT...QTAk+1.AftermovingtheQ'stotheleft-handside,thisisthe
ko required result for Ak+1
13. A has eigenvalues 4 and 2. Put one unit eigenvector in row 1 of P: it is either
1
Problem Set 7.3, page 365
1 5 14 1
1 1
3. u_k ≈ λ1^k c1 x1 if all ratios |λ_j/λ1| < 1. The largest ratio controls, when k is large. A = […] has |λ2| = |λ1| and no convergence.
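Here is a minimal power-iteration sketch illustrating that convergence rate (the example matrix is my own, not the one in the exercise):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # assumed example: eigenvalues 3 and 1
u = np.array([1.0, 0.0])
for _ in range(50):
    u = A @ u
    u = u / np.linalg.norm(u)               # normalize to a unit vector each step

print(u, u @ A @ u)     # approaches (1,1)/sqrt(2) and the Rayleigh quotient approaches lambda_1 = 3
```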
5. Hx = x − (x − y)·2(x − y)^T x/((x − y)^T(x − y)) = x − (x − y) = y. Then H(Hx) = Hy gives x = Hy.
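The identity Hx = y (and H^2 = I) can be checked directly once ‖x‖ = ‖y‖, as the solution requires. A short sketch with vectors of my own choosing:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([5.0, 0.0])                       # same length as x, so Hx = y is possible
u = x - y
H = np.eye(2) - 2 * np.outer(u, u) / (u @ u)   # Householder reflection across u's hyperplane

print(np.allclose(H @ x, y), np.allclose(H @ H, np.eye(2)))   # True, True
```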
1 -5 0
3. Ax_k = (2 − 2 cos kπh)x_k; Jx_k = ½(sin 2kπh, sin 3kπh + sin kπh, …) = (cos kπh)x_k. For h = 1/4, A has eigenvalues 2 − 2 cos(π/4) = 2 − √2, 2 − 2 cos(2π/4) = 2, and 2 − 2 cos(3π/4) = 2 + √2.
5. J = D^{-1}(L + U) = …; the three circles have radii r1 = …, r2 = …, r3 = …. Their centers are at zero, so all |λ_i| ≤ 4/5 < 1.
7. −D^{-1}(L + U) = [0 −b/a; −c/d 0] has μ = ±√(bc/ad); −(D + L)^{-1}U = [0 −b/a; 0 bc/ad] has λ = 0, bc/ad; λ_max does equal μ_max^2.
9. If Ax = λx, then (I − A)x = (1 − λ)x. Real eigenvalues of B = I − A have |1 − λ| < 1, provided that λ is between 0 and 2.
11. Always ‖AB‖ ≤ ‖A‖ ‖B‖. Choose A = B to find ‖B^2‖ ≤ ‖B‖^2. Then choose A = B^2 to find ‖B^3‖ ≤ ‖B^2‖ ‖B‖ ≤ ‖B‖^3. Continue (or use induction). Since ‖B‖ ≥ max |λ(B)|, it is no surprise that ‖B‖ < 1 gives convergence.
13. Jacobi has S^{-1}T = (1/3)[0 1; 1 0] with |λ|_max = 1/3; Gauss–Seidel has S^{-1}T = [0 1/3; 0 1/9] with |λ|_max = 1/9 = (|λ|_max for Jacobi)^2.
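That "Gauss-Seidel radius = square of the Jacobi radius" behavior can be seen by forming both iteration matrices for a small symmetric system. A sketch with an assumed 2 by 2 example of that type:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])       # assumed example with Jacobi radius 1/3
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

B_jacobi = np.linalg.inv(D) @ -(L + U)       # Jacobi iteration matrix
B_gs = np.linalg.inv(D + L) @ -U             # Gauss-Seidel iteration matrix
rho = lambda B: max(abs(np.linalg.eigvals(B)))
print(rho(B_jacobi), rho(B_gs))              # 1/3 and 1/9 = (1/3)**2
```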
15. Successive overrelaxation (SOR) in MATLAB.
17. The maximum row sums give all Al I< .9 and IAI < 4. The circles around diagonal entries give tighter bounds. First A: The circle IX -.21 < .7 contains the other circles IA - .31 < .5 and IA - .11 < .6 and all three eigenvalues. Second A: The circle
IA - 21 < 2 contains the circle IA - 21 < 1 and all three eigenvalues 2 + 2, and
19. r1 = b - a1Ab = b - (bTb/bTAb)Ab is orthogonal to ro = b: the residuals r = b - Ax are orthogonal at each step. To show that p1 is orthogonal to Apo = Ab, simplify pl to cPj: P1 = IlAb112b - (bTAb)Ab and c = bTb/(bTAb)2. Certainly (Ab)TP1 = 0, because AT = A. (That simplification put al into pl = b - a,Ab + (bTb - 2aibTAb + ai IIAb112)b/bTb. For a good discussion see Numerical Linear Algebra by Trefethen and Bau.)
Problem Set 8.1, page 381
1. The corners are at (0, 6), (2, 2), (6, 0); see Figure 8.3.
3. The constraints give 3(2x + 5y) + 2(-3x + 8y) < 9 - 10, or 31y < -1. Can't
have y > 0.
5. x > 0, y > 0, with added constraint that x + y <- 0 admits only the point (0, 0). 7. x (5% bonds) = z (9% bonds) = 20,000 and y (6% bonds) = 60,000.
9. The cost to be minimized is 1000x + 2000y + 3000z + 1500u + 3000v + 3700w. The amounts x, y, z to Chicago and u, v, w to New England satisfy x + u =
1,000,000; y + v = 1,000,000; z + w = 1,000,000; x + y + z = 800,000; u + v + w = 2,200,000.
Problem Set 8.2, page 391
1. At present x4 = 4 and x5 = 2 are in the basis, and the cost is zero. The entering variable should be x3, to reduce the cost. The leaving variable should be x5, since 2/1 is less than 4/1. With x3 and x4 in the basis, the constraints give x3 = 2,X4= 2, and the cost is now xl + x2 - x3 = -2.
3. The "reduced costs" are r = [I I], so change is not good and the corner is optimal.
5. At P, r = [-5 3]; then at Q, r = [3 -1]; R is optimal because r > 0.
7. For a maximum problem the stopping test becomes r < 0. If this fails, and the i th component is the largest, then that column of N enters the basis; the rule 8C for the vector leaving the basis is the same.
9. BE=B[... v ...] = [... u ...], since By = u. So E is the correct matrix. 11. If Ax = 0, then Px = x - AT(AAT)-'Ax = x.
Problem Set 8.3, page 399
1. Maximize 4y1 + 11y2, with y1 ≥ 0, y2 ≥ 0, 2y1 + y2 ≤ 1, 3y2 ≤ 1; the primal has x1* = 2, x2* = 3, the dual has y1* = 1/3, y2* = 1/3, cost = 5.
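The primal side of this pair can be handed to a standard LP solver as a check. A sketch using scipy.optimize.linprog, with the primal written as a minimization (my reading of the exercise, so the data below is an assumption):

```python
from scipy.optimize import linprog

# Assumed primal: minimize x1 + x2 subject to 2*x1 >= 4, x1 + 3*x2 >= 11, x >= 0.
c = [1, 1]
A_ub = [[-2, 0], [-1, -3]]      # -A x <= -b encodes A x >= b
b_ub = [-4, -11]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)           # about [2, 3] with cost 5, matching x* and the dual value
```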
3. The dual maximizes yb, with y > c. Therefore x = b and y = c are feasible, and give the same value cb for the cost in the primal and dual; by 8F they must be optimal. If b1 < 0, then the optimal x* is changed to (0, b2, ..., b,) and y* _ (0, c2, ... C,)-
5. b = [0 1]T and c = [-1 0].
7. Since cx = 3 = yb, x and y are optimal by 8F.
9. x* = [1 0]^T, y* = [1 0], with y*b = 1 = cx*. The second inequalities in both Ax* ≥ b and y*A ≤ c are strict, so the second components of y* and x* are zero.
11. (a) x* = 0, x2 = 1, x3 = 0, cTx = 3. (b) It is the first quadrant with the tetrahedron in the corner cut off. (c) Maximize yl, subject to i > 0, i < 5, y1 :5 3,yi <4;y*=3.
13. Here c = [ 1 1 1 ] with A = [ 2 1 01. No constraint x > 0 so the dual will
have equality yA = c (or ATy = CT). That gives 2y1 = 1 and yi = 1 and y2 = 2 and no feasible solution. So the primal must have oc as maximum: x1 = -N and
x2 = 2N and x3 = 0 give cost = x1 + x2 + x3 = N (arbitrarily large).
15. The columns of [1 0 0 −1 0 0; 0 1 0 0 −1 0; 0 0 1 0 0 −1] or of [1 0 0 −1; 0 1 0 −1; 0 0 1 −1].
17. Takey=[1 -1];then yA>0,yb <0.
Problem Set 8.4, page 406
1. The maximal flow is 13, with the minimal cut separating node 6 from the other nodes.
3. Increasing the capacity of pipes from node 4 to node 6 or node 4 to node 5 will produce the largest increase in the maximal flow. The maximal flow increases from
8to9.
5. Assign capacities = 1 to all edges. The maximum number of disjoint paths from s to t then equals the maximum flow. The minimum number of edges whose removal
disconnects s from t is the minimum cut. Then max flow = min cut.
7. Rows 1, 4, and 5 violate Hall's condition; the 3 by 3 submatrix coming from rows 1, 4, 5, and columns 1, 2, 5 has 3 + 3 > 5.
9. (a) The matrix has 2n 1s, which cannot be covered by fewer than n lines because each line covers exactly two 1s. It takes n lines; there must be a complete matching.
(b) [1 1 1 1 1; 1 0 0 0 1; 1 0 0 0 1; 1 0 0 0 1; 1 1 1 1 1]. The 1s can be covered with four lines; five marriages are not possible.
11. If each m + 1 marries the only acceptable man m, there is no one for #1 to marry (even though all are acceptable to #1).
13. Algorithm 1 gives 1-3, 3-2, 2-5, 2-4, 4-6, and algorithm 2 gives 2-5, 4-6, 2-4, 3-2, 1-3. These are equal-length shortest spanning trees.
15. (a) Rows 1, 3, 5 only have 1 s in columns 2 and 4. (b) Columns 1, 3, 5 (in rows 2, 4). (c) Zero submatrix from rows 1, 3, 5 and columns 1, 3, 5. (d) Rows 2, 4 and columns 2, 4 cover all Is.
Problem Set 8.5, page 413
1. −10x1 + 70(1 − x1) = 10x1 − 10(1 − x1), or x1 = 4/5, x2 = 1/5; −10y1 + 10(1 − y1) = 70y1 − 10(1 − y1), or y1 = 1/5, y2 = 4/5; average payoff yAx = 6.
3. If X chooses column j, Y will choose its smallest entry ail (in row i). X will not move, since this is the largest entry in that row. In Problem 2, a12 = 2 was an equilibrium of this kind. If we exchange the 2 and 4 below it, no entry has this property, and mixed strategies are required.
5. The best strategy for X combines the two lines to produce a horizontal line, guaran- teeing this height of 7/3. The combination is 3 (3y + 2(1 – y)) + (y +3(1 – y)) _
7/3, so X chooses the columns with frequencies
, 0, 3 .
3
7. For columns, we want x1 a + (1 − x1)b = x1 c + (1 − x1)d = u, so x1(a − b − c + d) = d − b. For rows, y1 a + (1 − y1)c = y1 b + (1 − y1)d = v exchanges b and c. Compare u with v: u = x1(a − b) + b = (a − b)(d − b)/(a − b − c + d) + b = (ad − bc)/(a − b − c + d) = same after b ↔ c = v.
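Those closed-form mixed strategies and the common value (ad − bc)/(a − b − c + d) can be checked for any 2 by 2 game without a saddle point. A sketch with an assumed payoff matrix:

```python
def mixed_value(a, b, c, d):
    """Optimal mixed frequencies and value for the 2 by 2 game [[a, b], [c, d]] (no saddle point)."""
    denom = a - b - c + d
    x1 = (d - b) / denom          # X's frequency on column 1 (columns: x1, 1 - x1)
    y1 = (d - c) / denom          # Y's frequency on row 1 (rows: y1, 1 - y1)
    value = (a * d - b * c) / denom
    return x1, y1, value

print(mixed_value(1.0, 4.0, 3.0, 2.0))   # (0.5, 0.25, 2.5) for this assumed example
```

With those frequencies both rows pay 2.5 against X's mix and both columns pay 2.5 against Y's mix, which is exactly the u = v argument above.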
000
9. The inner maximum is the larger of yl and y2; x concentrates on that one. Subject to yt + y2 = 1, the minimum of the larger y is 2. Notice A = I.
11. Ax* = L2 11T and yAx* = Zyi + Zy2 = 2 for all strategies of Y; y*A = [2
-1 -11 and y*Ax = 2×1 + 2×2 – x3 – x4, which cannot exceed 2; in 2
between is y*Ax* = 2.
13. Value 0 (fair game). X chooses 2 or 3, y chooses odd or even: x* = y*
The column spaces have dimension 2. Their sum and intersection give 3+ 1 = 2 + 2.
Problem Set B, page 427
(2 1
010
1111 F2 F2 1 -1 1 -1
11. F2 ®F2 =
F2 -F2
13. A3D = (A1D ⊗ I ⊗ I) + (I ⊗ A1D ⊗ I) + (I ⊗ I ⊗ A1D).
1. J = I 0 I (A is diagonalizable); J = 0 0 0
(eigenvectors (1, 0, 0) and
I JJ
(2, -1, 0))
1 t 2t
3. eB` = 0 1 0 000
100
5. J = 0 4 0 006
000
-1 -1 1 -1 -1 1
11
= I + Bt since B2 = 0. Also e” = I + P.
(distinct eigenvalues); J = 0
1
B]x = 3 0 0 1
I (B has ,l = 0, 0 but rank 1). I )))
(2 2
Problem Set A, page 420
1. (a) Largest dim(S ∩ T) = 7 when S ⊂ T. (b) Smallest dim(S ∩ T) = 2. (c) Smallest dim(S + T) = 8 when S ⊂ T. (d) Largest dim(S + T) = 13 (all of R^13).
3. V + W and V fl W contain
all a12 a13 a14
a21 a22 a23 a24 and
0a32a33a34 0 0 a43 a44
all a12 0 a22
0 0 a23 0
dim (V + W) = 13 and dim (V fl W) = 7; add to get 20 = dim V + dim W. 5. The lines through (1, 1, 1) and (1, 1, 2) have V fl W = {0}.
7. One basis for V + W is v1, v2, w1; dim (V fl w) = 1 with basis (0, 1, -1, 0). 9. The intersection of column spaces is the line through y = (6, 3, 6):
Y= 1 5 r11 3 0 301 01
2 4 LLL JJJJJJ 0 2
1 5 3 0
2
[3 matches [A
-2 – 0. 2 4 0 2 -3
0 0a23a34 0 0 0 a44
Matrix factorizations
1. A = LU = (lower triangular L, 1s on the diagonal)(upper triangular U, pivots on the diagonal).
Requirements: No row exchanges, as Gaussian elimination reduces A to U.
2. A = LDU = (lower triangular L, 1s on the diagonal)(pivot matrix D, diagonal)(upper triangular U, 1s on the diagonal).
Requirements: No row exchanges. The pivots in D are divided out to leave is in U. If A is symmetric, then U is LT and A = LDLT.
3. PA = LU (permutation matrix P to avoid zeros in the pivot positions). Requirements: A is invertible. Then P, L, U are invertible. P does the row ex- changes in advance. Alternative: A = L 1 P1 U1.
4. EA = R (m by m invertible E)(any A) = rref(A).
Requirements: None ! The reduced row echelon form R has r pivot rows and pivot columns. The only nonzero in a pivot column is the unit pivot. The last m – r rows of E are a basis for the left nullspace of A, and the first r columns of E-1 are a basis for the column space of A.
5. A = CC^T = (lower triangular matrix C)(transpose is upper triangular). Requirements: A is symmetric and positive definite (all n pivots in D are positive). This Cholesky factorization has C = L√D.
6. A = QR = (orthonormal columns in Q)(upper triangular R).
Requirements: A has independent columns. Those are orthogonalized in Q by the Gram-Schmidt process. If A is square, then Q-1 = QT.
7. A = SΛS^{-1} = (eigenvectors in S)(eigenvalues in Λ)(left eigenvectors in S^{-1}). Requirements: A must have n linearly independent eigenvectors.
8. A = QΛQ^T = (orthogonal matrix Q)(real eigenvalue matrix Λ)(Q^T is Q^{-1}). Requirements: A is symmetric. This is the Spectral Theorem.
9. A = MJM-1 = (generalized eigenvectors in M)(Jordan blocks in J)(M-1). Requirements: A is any square matrix. Jordan form J has a block for each inde- pendent eigenvector of A. Each block has one eigenvalue.
10. A = UΣV^T = (orthogonal U is m × m)(m × n matrix Σ with σ1, …, σr on its diagonal)(orthogonal V is n × n).
Requirements: None. This singular value decomposition (SVD) has the eigenvectors of AA^T in U and of A^T A in V; σ_i^2 = λ_i(A^T A) = λ_i(AA^T).
11. A^+ = VΣ^+U^T = (orthogonal n × n)(diagonal n × m with 1/σ1, …, 1/σr)(orthogonal m × m).
Requirements: None. The pseudoinverse has A+A = projection onto row space of A and AA+ = projection onto column space. The shortest least-squares solution to Ax = b is x = A+b. This solves ATAx = ATb.
12. A = QH = (orthogonal matrix Q)(symmetric positive definite matrix H). Requirements: A is invertible. This polar decomposition has H2 = ATA. The factor H is semidefinite if A is singular. The reverse polar decomposition A = KQ has K2 =AA T . Both have Q = UVT from the SVD.
T 13. A = UAU-1 = (unitary U)(eigenvalue matrix A)(U-1 = UH = U ).
Requirements: A is normal: AHA = AA H. Its orthonormal (and possibly complex) eigenvectors are the columns of U. Complex ),’s unless A = AH.
14. A = UT U-1 ._ (unitary U)(triangular T with A’s on diagonal)(U-1 = UH). Requirements: Schur triangularization of any square A. There is a matrix U with orthonormal columns that makes U-‘AU triangular.
15. F_n = [I D; I −D][F_{n/2} 0; 0 F_{n/2}][even–odd permutation] = one step of the FFT.
Requirements: F_n = Fourier matrix with entries w^{jk} where w^n = 1, w = e^{2πi/n}. Then F_n conj(F_n) = nI. D has 1, w, w^2, … on its diagonal. For n = 2^ℓ the Fast Fourier Transform has ½nℓ multiplications from ℓ stages of D's.
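Most of the factorizations in this list are one library call away; a small sketch (random example matrix, names mine) touching items 3, 6, 8 and 10:

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

P, L, U = lu(A)                        # item 3 (scipy returns A = P @ L @ U)
Q, R = np.linalg.qr(A)                 # item 6: A = QR
S = A + A.T                            # a symmetric matrix for item 8
lam, Qs = np.linalg.eigh(S)            # S = Qs diag(lam) Qs^T
Usvd, sig, Vt = np.linalg.svd(A)       # item 10: A = U Sigma V^T

print(np.allclose(P @ L @ U, A), np.allclose(Q @ R, A),
      np.allclose(Qs @ np.diag(lam) @ Qs.T, S),
      np.allclose(Usvd @ np.diag(sig) @ Vt, A))
```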
Cayley–Hamilton Theorem p(λ) = det(A − λI) has p(A) = zero matrix.
Complex conjugate z = a – i b for any complex num- berz=a+ib.Thenzz= 1z12.
Condition number cond(A) = κ(A) = ‖A‖ ‖A^{-1}‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
Conjugate Gradient Method A sequence of steps to solve positive definite Ax = b by minimizing ½x^T Ax − x^T b over growing Krylov subspaces.
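A minimal textbook-style conjugate-gradient loop matching this description (my own small positive definite example; a sketch, not a production solver):

```python
import numpy as np

def conjugate_gradient(A, b, steps=25):
    """Minimize 1/2 x^T A x - x^T b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual, also the negative gradient
    p = r.copy()
    for _ in range(steps):
        alpha = (r @ r) / (p @ A @ p)
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        if np.linalg.norm(r_new) < 1e-12:
            return x
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new search direction, A-conjugate to the old ones
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))   # both about [0.0909, 0.6364]
```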
Covariance matrix E When random variables xi have mean = average value = 0, their covariances Eij are the averages of xi x1. With means Ti, the matrix E = mean of (x – x) (x – x)T is positive (semi)definite; it is diagonal if the xi are independent.
Cramer’s Rule for Ax = b B j has b replacing column j of A,andxj = 1B,I/IAI
Glossary: A Dictionary for Linear Algebra
Adjacency matrix of a graph Square matrix with ail = I when there is an edge from node i to node j ; otherwise ail = 0. A = AT for an undirected graph.
Affine transformation T(v) = A v + vp = linear trans- formation plus shift.
Circulant matrix C Constant diagonals wrap around as in cyclic shift S. Every circulant is col + ci S +
+ cn 1 Sn-1 Cx = convolution c * x. Eigenvectors in F.
Cofactor Ci j Remove row i and column j; multiply the determinant by (-1)i+j
Column picture of Ax = b The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).
Associative Law (AB)C = A(BC) be removed to leave ABC.
Parentheses can
Augmented matrix [A b] Ax = b is solvable when b is in the column space of A; then [A b ] has the same rank as A. Elimination on [ A b ] keeps equations correct.
Back substitution Upper triangular systems are solved in reverse order xn to x1.
Basis for V Independent vectors v1, … , vd whose linear combinations give every v in V. A vector space has many bases!
Big formula for n by n determinants det (A) is a sum of n! terms, one term for each permutation P of the columns. That term is the product aia a, down the diagonal of the reordered matrix, times det (P) = ±1.
Block matrix A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit (the columns of A and rows of B must be in matching blocks).
Column space C(A) columns of A.
Space of all combinations of the
Change of basis matrix M The old basis vectors v1 are combinations 71 mil wi of the new basis vectors. The coordinates
are related by d = Mc. (For n = 2, set v1 = m11wl + m21w2, V2 = m12w1
Characteristic equation det (A – II) = 0 roots are the eigenvalues of A.
Cholesky factorization A =CCT = l for positive definite A.
The n
l
Commuting matrices AB = BA If diagonalizable, they share n eigenvectors.
Companion matrix Put c l, … , c,2 in row n and put n – 1 1 s along diagonal 1. Then det (A – Al) _ ±(ci + c2X + C3 ),2 + …).
Complete solution x = xp + x,, to Ax = b , lar xp) + (xn in nullspace).
(Particu-
:~’
cue
°=’
;.+
Cross product u × v in R^3 Vector perpendicular to u and v, length ‖u‖ ‖v‖ |sin θ| = parallelogram area, computed as the "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
Cyclic shift S Permutation with s21 = 1, s32 = 1, …, finally sln = 1. Its eigenvalues are nth roots e2nikln of 1; eigenvectors are columns of the Fourier matrix F.
Determinant I A I = det (A) Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then JAI = O when A is singular. Also IABI = IAIIBI, IA-HI = 1/JAI, and IATI = AITl.he big formula for det (A) has a sum of n! terms, the cofactor formula uses
determinants of size n – 1, volume of box = I det (A) I.
Diagonal matrix D d_{ij} = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_{ii}.
Diagonalizable matrix A Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1}AS = Λ = eigenvalue matrix.
Diagonalization Λ = S^{-1}AS Λ = eigenvalue matrix and S = eigenvector matrix. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^kS^{-1}.
Dimension of vector space dim (V) = number of vectors in any basis for .,
Distributive Law AA(I + C) = AB + AC Add then multiply, or multiply then add.
Dot productxTy = xlyl + +xnyn Complex dot product is TTy. Perpendicular vectors have zero dot prod- uct. (AB)i j = (row i of A) (column j of B).
Echelon matrix U The first nonzero entry (the pivot) in each row comes after the pivot in the previous row. All zero rows come last.
Eigenvalue λ and eigenvector x Ax = λx with x ≠ 0, so det(A − λI) = 0.
Eigshow Graphical 2 by 2 eigenvalues and singular values (MATLAB or Java).
Elimination A sequence of row operations that re- duces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers f1 j in L, or
PA = LU with row exchanges in P, or EA = R with an invertible E.
Elimination matrix = Elementary matrix Ei j The identity matrix with an extra -fi j in the i, j entry (i 0 j). Then Ei j A subtracts i j times row j of A from row i.
Ellipse (or ellipsoid) xTAx = 1 A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths I /VT. (For I I x II = 1 the vectors y = Ax lie
on the ellipse JJA-1yj12 = yT(AAT)-ly = 1 displayed by eigshow; axis lengths a1.)
Exponential eAt = I+At+(At)2/2!+ hasderiva- tive AeAt; eAtu(0) solves u’ = An.
Factorization A = L U If elimination takes A to U without row exchanges, then the lower triangular L with
multipliers £i j (and fii = 1) brings U back to A.
Fast Fourier Transform (FFT) A factorization of the Fourier matrix F_n into ℓ = log2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^{-1}c can be computed with nℓ/2 multiplications. Revolutionary.
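That n log n count is what numpy.fft exploits. A quick sketch checking that the FFT agrees with multiplication by the Fourier matrix, using the e^{-2πijk/n} sign convention that NumPy follows (the text's convention may differ by a conjugate):

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n)      # DFT matrix in NumPy's sign convention

c = np.random.default_rng(2).standard_normal(n)
print(np.allclose(F @ c, np.fft.fft(c)))   # True: the FFT computes F c in O(n log n)
```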
Fibonacci numbers 0, 1, 1, 2, 3, 5, … satisfy F_n = F_{n−1} + F_{n−2} = (λ1^n − λ2^n)/(λ1 − λ2). The growth rate λ1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [1 1; 1 0].
Four fundamental subspaces of A C(A), N(A), C(A^T), N(A^T).
Fourier matrix F Entries Fjk = e2Jrijkln give orthog-
onal columns FTF = nI. Then Y = Fc is the (inverse) Discrete Fourier Transform y; = Y_ cke2nijkln
Free columns of A Columns without pivots; combina- tions of earlier columns.
Free variable xi Column i has no pivot in elimination. We can give the n – r free variables any values, then
Ax = b determines the r pivot variables (if solvable!).
Full column rank r = n Independent columns, N(A) = {0}, no free variables.
Full row rank r = m Independent rows, at least one solution to Ax = b, column space is all of R’n. Full rank means full column rank or full row rank.
Fundamental Theorem The nullspace N(A) and row space C(AT) are orthogonal complements (perpendicu- lar subspaces of Rn with dimensions r and n – r) from Ax = 0. Applied to AT, the column space C(A) is the orthogonal complement of N(AT).
Gauss-Jordan method Invert A by row operations on [A I] to reach [I A-1].
Gram-Schmidt orthogonalization A = QR Inde- pendent columns in A, orthonormal columns in Q. Each column q j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention:
diag(R) > 0.
Graph G Set of n nodes connected pairwise by m edges. A complete graph has all n(n – 1)/2 edges between nodes. A tree has only n – 1 edges and no closed loops.
Matrix multiplication AB The i, j entry of AB is (row i of A) (column j of B) = 7, aikbkj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
Minimal polynomial of A The lowest-degree polyno- mial with m (A) = zero matrix. The roots of m are eigen- values, and m(X) divides det(A – Al).
Multiplication Ax = x1(column 1) + +xn(column n) = combination of columns.
Multiplicities AM and GM The algebraic multiplic- ity AM of an eigenvalue A is the number of times A appears as a root of det (A -),I) = 0. The geometric mul- tiplicity GM is the number of independent eigenvectors (= dimension of the eigenspace for A).
Multiplier Iir The pivot row j is multiplied by ti j and subtracted from row i to eliminate the i, j entry: £ij = (entry to eliminate)/(jth pivot).
Network A directed graph that has constants cl, … , cm associated with the edges.
yon
CD.
.+0
Mac
+
-I-
F”” Wig
fir:
.U.
s0.
70i
Nilpotent matrix N Some power of N is the zero matrix, Nk = 0. The only eigenvalue is A = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
Norm ‖A‖ of a matrix The "ℓ^2 norm" is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖, ‖AB‖ ≤ ‖A‖ ‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. The Frobenius norm has ‖A‖_F^2 = Σ a_{ij}^2; the ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_{ij}|.
row reduction; not combinations of earlier columns. The pivot columns are a basis for the column space.
Pivot d The first nonzero entry when a row is used in elimination.
Plane (or hyperplane) in R" Solutions to aTx = 0 give the plane (dimension n - 1) perpendicular to a 0.
Polar decomposition A = QH Orthogonal Q, posi- tive (semi)definite H.
Positive definite matrix A Symmetric matrix with positive eigenvalues and positive pivots. Definition: XT Ax > 0 unless x = 0.
Projection matrix P onto subspace S Projection p = Pb is the closest point to b in S, error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S, then P = A(A^T A)^{-1}A^T.
Normal equation AT Ax = ATb Gives the least- squares solution to Ax = b if A has full rank n. The equation says that (columns of A) (b - Az) = 0.
Normal matrix N NNT = NTN, leads to orthonormal (complex) eigenvectors.
Nullspace matrix N The columns of N are the n - r special solutions to As = 0.
Nullspace N(A) Solutions to Ax = 0. Dimension n - r = (# columns) - rank.
Orthogonal matrix Q Square matrix with orthonormal columns, so QTQ = I implies QT = Q-1. Preserves length and angles, II Qx II IIx II and (Qx)T (Qy) = xTy. All Al = 1, with orthogonal eigenvectors. Examples:
Rotation, reflection, permutation.
Orthogonal subspaces every w in W.
Every v in V is orthogonal to
Projection p = a (aT b/aT a) onto the line through a P = aaT /aT a has rank 1.
Pseudoinverse A+ (Moore-Penrose inverse) The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(AT). A+A and AA+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
Random matrix rand(n) or randn(n) MATLAB cre- ates a matrix with random entries, uniformly distributed on [0 1] for rand, and standard normal distribution for randn.
Rank 1 matrix A = uvT o 0 Column and row spaces = lines cu and cv.
Rank r(A) Equals number of pivots = dimension of column space = dimension of row space.
Rayleigh quotient q(x) = xTAx/xTx For A = AT, Amin q(x) < Amax. Those extremes are reached at the eigenvectors, x for Amin (A) and Amax (A)
Reduced row echelon form R = rref(A) Pivots = 1; zeros above and below pivots; r nonzero rows of R give a basis for the row space of A.
Reflection matrix Q = I - 2uuT The unit vector u is reflected to Qu = -u. All vectors x in the plane uTx = 0 are unchanged because Qx = x. The "Householder
matrix" has QT = Q-1 = Q.
Right inverse A+ If A has full row rank m, then A+ = AT(AAT)-l has AA+ = I,".
Orthonormal vectors q1, ... , q" Dot products are qT q j = 0, if i 0 j and qT qi = 1. The matrix Q with these orthonormal columns has QT Q = I. If m = n, then QT = Q-1 and q1, ..., qn is an orthonormal basis for R": every v = E(vTgj)gj.
Outer product u vT column times row = rank-1 matrix.
Partial pivoting In elimination, the jth pivot is cho- sen as the largest available entry (in absolute value) in column j. Then all multipliers have tit I < 1. Roundoff error is controlled (depending on the condition number
of A).
Particular solution xp Any solution to Ax = b; often x p has free variables = 0.
I2).
Permutation matrix P There are n! orders oft, ..., n; then! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is a product of row ex- changes P1; P is even or odd (det P = 1 or -1) based on the number of exchanges.
Pivot columns of A Columns that contain pivots after
Pascal matrix PS = pascal(n)
The symmetric matrix PS = PL PU all contain Pascal's triangle with det = 1 (see index for more prop-
erties).
with binomial entries (` i
Rotation matrix R = [cos θ  −sin θ; sin θ  cos θ] rotates the plane by θ, and R^{-1} = R^T rotates back by −θ. Orthogonal matrix, eigenvalues e^{iθ} and e^{−iθ}, eigenvectors (1, ∓i).
Row picture of Ax = b Each equation gives a plane in R'; planes intersect at x.
Row space C(AT) All combinations of rows of A. Column vectors by convention.
Saddle point of f (xl, ... , xn) A point where the first derivatives off are zero and the second derivative matrix (a2 f/axi ax j = Hessian matrix) is indefinite.
Schur complement S = D − CA^{-1}B Appears in block elimination on [A B; C D].
Subspace S of V Any vector space inside V, including V and Z = {zero vector).
Sum V + W of subspaces Space of all (v in V) + (w in W). Direct sum: dim(V + W) = dim V + dim W, when V and W share only the zero vector.
Symmetric factorizations A = LDLT and A = QA QT The number of positive pivots in D and pos-
itive eigenvalues in A is the same.
Symmetric matrix A The transpose is AT = A, and ai j = a ji . A-1 is also symmetric. All matrices of the form RTR and LDLT and QAQT are symmetric. Sym- metric matrices have real eigenvalues in A and orthonor- mal eigenvectors in Q.
Toeplitz matrix T Constant-diagonal matrix, so ti j depends only on j - i. Toeplitz matrices represent linear time-invariant filters in signal processing.
Trace of A Sum of diagonal entries = sum of eigenval- uesofA.TrAB=TrBA.
Transpose matrix AT Entries A = Aji. AT is n by m, ATA is square, symmetric, positive semidefinite. The transposes of AB and A-1 are BTAT and (AT)-1.
Triangle inequality I lu + v II < Iju ll + (I v j j For
matrix norms, II A + B II < II A II + II B II
Tridiagonal matrix T ti j= 0 if Ii - j I> 1. T-1 has rank 1 above and below diagonal.
Unitary matrix UH = UT = U-1 Orthonormal columns (complex analog of Q).
Vandermonde matrix V V c = b gives the polynomial p(x) = co+ +cn_ix7z-1 with p(xi) = bi atnpoints. Vij = (xi)j-anddetV =product of(xk-xi)fork > i.
Vector addition v + w = (vi + wi, … , vn + wn) _ diagonal of parallelogram.
Vector space V Set of vectors such that all combina- tions cv + dw remain in V. Eight required rules are given
in Section 2.1 for cv + dw.
Vector v in RI Sequence of n real numbers v = ( V i ,—, vn) = point in Rn.
Volume of box The rows (or columns) of A generate a box with volume I det (A) I.
Wavelets w jk(t) or vectors w jk Rescale and shift the time axis to create wjk(t) = woo(2Jt – k). Vectors from woo = (1, 1, -1, -1) would be (1, -1, 0, 0) and (0, 0, 1, -1).
Schwarz inequality Iv – WI < I l v I I I I w II < (vTAv)(wTAw) if A = CTC.
Then I vTA w I2
Semidefinite matrix A (Positive) semidefinite means symmetric with xTAx > 0 for all vectors x. Then all eigenvalues . > 0; no negative pivots.
Similar matrices A and B tB M-1 AM has the same eigenvalues as ,1.
Simplex method for linear programming The min- imum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x > 0 are satisfied). Minimum cost at a comer!
Singular matrix A A square matrix that has no inverse: det (A) = 0.
Singular Value Decomposition (SVD) A = UE VT = (orthogonal U) times (diagonal E) times (orthogonal VT) First r columns of U and V are orthonormal bases of C(A) and C(AT), with Avi = oiui and singular value of > 0. Last columns of U and V are orthonormal bases
of the nullspaces of AT and A.
Skew-symmetric matrix K The transpose is -K, since Ki j = -K1. Eigenvalues are pure imaginary, eigenvec- tors are orthogonal, eK, is an orthogonal matrix.
Solvable system Ax = b The right side b is in the col- umn space of A.
Spanning set vl, … , v,n for V Every vector in V is a combination of vj,… , V.
Special solutions to As = 0 One free variable is si = 1, other free variables = 0.
Spectral theorem A = QA QT Real symmetric A has real Xj and orthonormal qj, with Aqi = In mechan- ics, the qi give the principal axes.
Spectrum of A The set of eigenvalues {XI, Xn }.
Spectral radius = I Amax 1
Standard basis for Rn Columns of n by n identity matrix (written i, j, k in R3).
Stiffness matrix K When x gives the movements of the nodes in a discrete structure, Kx gives the internal forces. Often K = A^T CA, where C contains spring constants from Hooke's Law and Ax = stretching (strains) from the movements x.
MATLAB Teaching Codes
cofactor cramer deter eigen2 eigshow eigval eigvec elim findpiv fourbase grams house inverse leftnull linefit
lsq normal nulbasis orthcomp partic plot2d
plu poly2str project projmat randperm rowbasis samespan signperm slu
sly
splu
sply symmeig tridiag
Compute the n by n matrix of cofactors.
Solve the system Ax = b by Cramer’s Rule.
Matrix determinant computed from the pivots in PA = LU. Eigenvalues, eigenvectors, and det (A – Al) for 2 by 2 matrices. Graphical demonstration of eigenvalues and singular values. Eigenvalues and their multiplicity as roots of det (A – XI) = 0. Compute as many linearly independent eigenvectors as possible. Reduction of A to row echelon form R by an invertible E.
Find a pivot for Gaussian elimination (used by plu).
Construct bases for all four fundamental subspaces. Gram-Schmidt orthogonalization of the columns of A.
2 by 12 matrix giving corner coordinates of a house.
Matrix inverse (if it exists) by Gauss-Jordan elimination. Compute a basis for the left nullspace.
Plot the least squares fit to in given points by a line.
Least-squares solution to Ax = b from ATAX = ATb.
Eigenvalues and orthonormal eigenvectors when ATA = AAT. Matrix of special solutions to Ax = 0 (basis for nullspace).
Find a basis for the orthogonal complement of a subspace. Particular solution of Ax = b, with all free variables zero. Two-dimensional plot for the house figures.
Rectangular PA = LU factorization with row exchanges.
Express a polynomial as a string.
Project a vector b onto the column space of A.
Construct the projection matrix onto the column space of A. Construct a random permutation.
Compute a basis for the row space from the pivot rows of R.
Test whether two matrices have the same column space. Determinant of the permutation matrix with rows ordered by p. LU factorization of a square matrix using no row exchanges. Apply slu to solve the system Ax = b allowing no row exchanges. Square PA = LU factorization with row exchanges.
The solution to a square, invertible system Ax = b.
Compute the eigenvalues and eigenvectors of a symmetric matrix. Construct a tridiagonal matrix with constant diagonals a, b, c.
These Teaching Codes are directly available from the Linear Algebra Home Page: http://web.mit.edu/18.06/www.
They were written in MATLAB, and translated into Maple and Mathematica.
A = LDLT, 51, 60, 319-320, 325, 474, 480
A = LDU, 36, 51, 224, 369, 474 A = LU, 34-35
A = MJM-I, 300, 474
A = QAQT, 285,288,297-298,
320-323,474,480
A = QR, 174,179,181-182,351,
363, 474, 477
A=QS,333
A = UEVT, 306,331-333,336,
474, 480
A = SAS-‘, 245, 250, 255, 257,
267, 300, 474
AAT 46,108,162,222-223,306,
331-336,357,475
ATA, 45, 108-109, 114, 161-168,
179,182,184,306,331-335, 341,356-357,363,475, 481, 488
ATCA, 120-124, 480
C’,248,273,280,282,288,292
eA, 266-279
PA = LU, 38-39
QAQT, 320-323, 327
RRT and RTR, 51-52
R-,69,72-73,288
S-IAS, 132, 245-248, 285, 293,
299, 301, 324, 477
A
A = LU, 34, 35
Abel, Niels Henrik, 239 Addition of vectors, 6 Applications of determinants,
201, 220-229
Applied Mathematics and Scientific Computing, 122,
320-321, 349
Arbitrary constants, 59, 115
Area, 137, 223-229, 349,
454-455,477
Arithmetical operations, 14, 15 Arnoldi, 374
Associative law, 23, 29, 34, 46,
134, 445, 476
B
Back-substitution, 12, 36
Band matrix, 59, 61, 401 Bandwidth, 61, 371-372
Basis, 95, 141
Bellman equation, 406 Bidiagonal matrix, 61, 363, 364 Bit-reversed order, 196
Block elimination, 120, 219, 480 Block multiplication, 224 Boolean algebra, 204
Boundary condition, 59, 64,
347, 350
Breadth-first search, 406 Breakdown, 7, 13, 16, 18, 49 Bringing Down the House,
377, 412
Buniakowsky, 155
C
C’,248,280,282,288,292 Calculation of A-‘, 46-47
California, 257-258, 381 Capacity, 119, 401-406, 472
Capacity of edge, 119 Cartesian product, 417 Cauchy-Buniakowsky-
Schwarz, 155 Cayley-Hamilton, 253, 304, 427,
456-457, 476
CD = -DC, 27, 206, 231, 302 Change of basis, 132, 136,
294-295,302,476
Change of variables, 293, 390,426 Characteristic polynomials, 235 Checkerboard, 139, 216, 242 Chemist, 156, 203, 273
Cholesky decomposition, 320
Circulant matrix, 189, 197, 291-292,476
Cofactor matrix, 218, 221, 226, 454
Cofactors, 213
Column at a time, 21, 26, 46, 129,
331, 423
Column picture, 7, 8
Column rule, 21
Column space, 71, 72, 104, 107 Column vectors, 6-7, 20 Columns times rows, 30, 285,
333, 478
Combination of columns, 6-7,
71-72,92,478
Combination of rows, 429 Commutative, 23, 25, 69 Companion matrix, 242, 456, 476 Complement, 145-152 Complementary slackness,
394, 409
Complete matching, 403 Complete pivoting, 63 Complete the square, 313,
316-317,345
Complex conjugates, 281 Complex matrices, 280-292 Complex numbers, 189 Composition, 131 Compound interest, 254, 259 Condition numbers, 332 Conductance, 119 Cone,399-400
Congruence, 324, 326 Conjugate gradient method,
372, 390
Conjugate transpose, 283 Connectivity matrix, 115 Constraint, 82, 85, 340-344, 346,
378-387,390,392,394-402, 406,470-471,480
Consumption matrix, 260 Continuous compounding, 254
Arithmetic mean, 154, 447
Continuous Markov process, 273 Convergence, 368
Convolution rule, 189
Cooley, 194
Coordinate, 6, 69-70, 201, 229, 282
Corner of feasible set, 381-391, 395-397
Cosine, 102,152-159,182-184, 188-191,198,272,274
Cost function, 378
Cost of elimination, 14, 15 Cost vector, 382
Covariance matrix, 169-172,
449, 476
Cramer’s Rule, 202, 221-222 Cross-product matrix, 177 Crout algorithm, 36 . Current law, 106, 116-117,
120-122,401-402,478 Curse of dimensionality, 371
D
Dantzig, George Bernard, 382 Decomposition, 32, 148, 298,
331-338,357,363,475,
479-480
Defective, 268, 293, 299 Degeneracy, 385, 395 Dependent, 9-11, 80-82, 92-111,
116-117,259,282,333-335 Defective matrix, 238, 246, 268,
293, 299 Depth-first search, 406 Determinants
formulas, 201, 210-219 properties, 203-209 “ratio of determinants,” 1,
202, 224
Diagonal matrix, 36, 46, 204-206,
238,245,267,322,327-335, 390, 415, 422
Diagonalizable, 238, 246, 249-253,270,290,296-303, 306-308,427,457,473, 476-477
Diagonalization of matrices, 245 Jordan form, 422-427 similarity transformations, 301 simultaneous, 326
Diagonally dominant, 373 Diet problem, 380
Difference equation, 59, 64, 193, 238,250,254-270,273-275, 293, 348, 359, 367, 419
Differential equations
change to matrix equations, 59 diffusion, 268
and e”, 266-279
Fourier analysis, 122 instability, 270, 271, 273 Laplace’s partial differential
equation, 418
second-order equations, 274 similarity transformations, 293 stability, 270, 273 superposition, 237
Differentiation matrix, 128-129 Diffusion, 268
Dimension, 69-73, 81-96,
104-106,147,181-183, 314-315, 416
of column space, 98 of subspace, 81
of vector space, 96
Directed graph, 104, 114 Discrete Fourier Series, 192 Discrete Fourier Transform, 188,
189, 287
Distance, 152, 155-157, 161,
165-166, 173
Distinct eigenvalues, 245-247,
297-299,303,308,
427, 473
Distributive law, 445, 477
Dot product. See Inner product Double eigenvalues, 246, 422 Dual problem, 382-391 Dynamic programming, 406
E
eAt, 266-279
Echelon form U, 77-78 Economics, 58, 153, 260-263,
265, 379, 396, 399 Eigenfunction, 270, 346, 349 Eigenvalue matrix, 245, 247, 251, 292, 325, 331,
474-475, 477 Edge-node incidence
matrices, 104 Eigenvalues and eigenvectors,
233-309
characteristic polynomial, 235
complex matrices, 280-292 computation, 359-366 diagonalization, 245-254 double eigenvalues, 246, 422 eigenvalue test, 320 instability, 234, 259,
270-273
Jordan Form, 300, 422 Ak,255-265
positive definite matrix, 311 similarity, 293-306
Eigenvector matrix, 245, 247, 249,251,253,291-293,296,
331-332,419,477-478 eigshow, 240
Einstein, Albert, 21 Elementary matrix, 22, 32, 49 Elimination, 1, 9
Elimination matrix, 2, 22 Ellipses and ellipsoids
eigshow, 240
Hilbert space, 182, 183 Khachian’s method, 389 positive definite matrices, 322 principal axis theorem, 285
Energy, 272-275, 287, 334, 339-340,347-350
Entering variable, 387 Equality constraints, 383 Equilibrium, 120, 122, 261,
344, 472
Error vector, 161-162, 165-167,
170-172 Euclidean space, 183
Euler’s formula, 117, 191
Even permutation, 44, 217, 230,
436,453
Existence, 61, 69, 107-109, 410 Experiment, 19, 67, 153,
165-167
Existence and uniqueness, 69 Exponentials, 266-279
F
Factorization, 36, 213 Fourier matrix, 474
Gram-Schmidt, 363 Land U, 3
overrelaxation factor, 369 polar, 333
symmetric, 51 triangular, 32-44
Fast Fourier Transform (FFT),
188-197,287,372,419, 475, 477
fundamental identity, 287
orthogonality, 188-197 Feasible set, 378, 382 FFT. See Fast Fourier
Transform (FFT) Fibonacci numbers, 238, 255,
256, 259
Filippov, A. F., 423
Filtering, 189
Finite difference, 61, 64, 270, 346,
348, 354, 370, 418
Finite element method, 321, 346 First pivot, 12
Five-point matrix, 372 Five-point scheme, 371
Fix, George, 349
Football, 118-119, 124, 322
Formulas
determinants, 201, 210-219 Euler’s, 117, 191
product of pivots, 47, 202 Pythagorean, 142
Forward elimination, 32, 36 Fourier matrix, 188, 190-192,
195-197,287,291,309,419,
421,461,475
Fourier series, 182
Fredholm, 149
Free variable, 80-85, 89-91, 94,
104-109,124,384-387,396,
437-441,477-481
Freund, Robert, 398
Frobenius, 261-262, 479
Front wall, 144, 382
Full rank, 103, 109
Function spaces, 183
Fundamental subspaces, 102-113
Fundamental Theorem of Linear Algebra, 106, 116-117, 141, 146-147, 335, 398
G
Galois, Evariste, 239
Game theory, 377-414
Gauss-Jordan, 47-49
Gauss-Seidel method, 368-371
Gaussian elimination, 1-68
A = LU and PA = LU, 34-35, 38-39
geometry of linear equations, 3-10
matrix notation, 19-31
orthogonality, 160, 184
singular cases, 7-11, 13
Generalized eigenvectors, 268 Geometry of planes, 2 Gershgorin, 373-374
Givens, 302
Global minimum, 312
Golub, Gene Howard, 372 Gram-Schmidt process, 174-187 Graphs and networks, 114-124,
401-407
Greedy algorithm, 405
Group, 58, 66-67, 80, 213, 330, 351, 402, 436, 465
H
Half bandwidth, 61 Halfspace, 377
Hall’s condition, 404 Heat equation, 270 Heisenberg uncertainty
principle, 250 Hermitian matrix
eigenvalues and eigenvectors,
280,283-286,288,
297, 298
positive definiteness, 334 Hessenberg matrix, 361, 365 Hilbert matrix, 184
Hilbert space, 182-183 Homogeneous, 20, 92, 149, 237,
439, 447
Homogeneous equation, 73 Householder matrix, 361-365
I
IBM, 15
Identity matrix, 22
Identity transformation, 294 Ill-conditioned, 62-64, 184,
352-353, 436
Incidence matrices, 104, 118, 401 Incomplete LU, 372
Inconsistent, 8
Incurable systems, 13
Indefinite, 312-314, 322-323,
327-330,464,478,480
Independence, 92-105, 143, 164, 330, 425
Inequalities, 377-381
Inertia, law of, 324 Infinite-dimensional, 69, 347 Infinity of solutions, 3, 8, 9 Initial-value problem, 233 Inner product, 20, 143, 169 Inner product of functions, 183 Input-output matrix, 260 Instability
difference equations, 270,
271,273
eigenvalues and eigenvectors,
234,259,270-273
Fibonacci equation, 259
roundoff errors, 63 Integration, 127, 183
Integration matrix, 129
Interior point method, 377, 390 Intersection of spaces, 415-421 Intersection point, 4, 5 Intertwining, 343-344 Introduction to Applied
Mathematics, 122,
320, 349
Invariant, 324, 480 Inverse, 45-48
formula for A-1, 52, 221 of product, 34
of transpose, 38, 45-58
Inverse matrix, 45-48
Inverse power method, 360 Invertible = nonsingular, 48, 49 Iterative method, 367-372
J
Jacobian determinant, 201 Jacobi’s method, 361, 368,
369, 371
Jordan Form, 300, 422-427
K
Karmarkar’s method, 390
Kernel, 104, 135, 445
Khachian’s method, 389 Kirchhoff’s Current Law, 106,
116, 117, 120, 402
Kirchhoff’s Voltage Law, 115,
120, 146
Kronecker product, 418
Krylov sequence, 365
Kuhn-Tucker conditions, 394, 397
L
Lagrange multipliers, 340, 396 Lanczos, 374-375
LAPACK (Linear Algebra
PACKage), 351
Laplace equation, 418
Las Vegas, 377, 412
Law of cosines, 152-159
Law of inertia, 324
LDL^T factorization, 51-53, 60 LDU factorization, 36-37, 41-43,
51-53,60-63 Least-squares, 119, 153, 160-173, 177
Leaving variable, 387 Left-inverse, 45, 177
Left nullspace, 107
Legendre polynomial, 182, 185 Length, 119, 404
Leontief, 260
Linear combination, 6-7 Linear independence, 82-102 Linear programming, 377-414
constraints, 378-380
dual problem, 382-391 game theory, 408-413 linear inequalities, 377-381 network models, 401-407 simplex method, 382-391 tableau, 386-388
Linear transformation, 125-137 Linearly dependent, 92
Local minimum, 312
Loop, 114-124, 146, 374, 405,
444, 477-478
Lower triangular matrix, 33, 71 LU factorization, 33-44 Lyapunov, Aleksandr, 272
M
Markov matrix, 244, 257-258, 261, 273, 360, 478
Markov process, 238, 257-259 Marriage problem, 403-405 Mass matrix, 321-325, 350, 406 Matching, 403-407, 472, 476 Mathematics for the Millions, 222 MATLAB, 211, 239, 285
Matrix
adjacency, 124, 476
band, 59, 61, 401 checkerboard, 139, 216, 242
circulant, 189 coefficient, 3, 5, 19-22,
59-60
cofactor, 213-222 companion, 476 consumption, 260 covariance, 169-172 cross-product, 177 defective, 238, 246, 268,
293, 299
diagonal, 36
diagonalizable, 246, 249 difference, 59, 115-119, 221 echelon, 77
elementary, 22, 32, 49 elimination, 22, 32 exponential, 234-237, 256,
266-274,301,306,477 finite difference, 61, 64, 270,
346-348,370,418 Fourier, 176, 182-184,
188-195,287,419,477 Hermitian, 280 Hessenberg, 361, 365 Hilbert, 184
identity, 22
ill-conditioned, 62-64, 184,
352-353, 436
incidence, 104, 118, 401 indefinite, 312-314, 327-330 inverse, 45-48
invertible, 48-49
Jordan, 300, 422
lower triangular, 33, 71 Markov, 258, 261, 273, 360 multiplication, 19-31 nilpotent, 309, 479 nondiagonalizable, 238, 246,
268, 293, 299 nonnegative, 257-262,
378-382,398-399 nonsingular, 9, 13
norm and condition number,
352-358 normal, 162-170 notation, 2-3, 9, 19
orthogonal, 175
payoff, 408-413 permutation, 203, 224, 403 positive, 60, 261
positive definite, 311-330 projection, 25, 164, 238
rank one, 109-110, 156, 306, 479
rectangular, 20, 109, 114, 129, 177
reflection, 125
rotation, 125, 131, 247, 365 semidefinite, 314, 321-322,
333, 480 similar, 293
singular, 38, 204 skew-Hermitian, 288, 298 skew-symmetric, 410 square root, 142, 181,
189-193, 223 symmetric, 50-58
transition, 258 transpose, 3, 45-51 triangular, 35-36 tridiagonal, 60 unitary, 286, 298, 331
Max flow-min cut theorem, 402 Maximal independent set, 97 Maximin principle, 344, 409, 411 Maximizing minimum, 410 Mean, 178, 179
Minima and maxima, 311-317 Minimal cut, 402
Minimal spanning set, 97 Minimax theorem, 344, 393,
409,411
Minimum point, 311 Minimum principles, 339-345 Minors, 213
MIT, 118-119
Mixed strategy, 408 Multiplication of matrices,
20-21, 131
Multiplicity, 246, 301, 478, 481 Mutually orthogonal, 143
N
Natural boundary condition, 59,
64, 347, 350 Negative definite, 314 Network, 114-119, 124,
401-407,478 Neutrally stable, 259 New York Times, 119
Newton’s Law, 273 Nilpotent matrix, 309, 479 No solution, 2, 3, 7, 8 Nodes, 104, 114-117
Nondiagonalizable matrices, 238,
246, 268, 293, 299
Nonlinear least squares, 168 Nonnegativity, 378, 383, 398 Nonsingular matrix, 9, 13 Nonzero eigenvalues, 49 Nonzero pivots, 48
Norm of a matrix, 352
Normal equations, 162
Normal matrix, 298, 303, 357, 479 Normal mode, 238, 275, 330 Nullity, 104-106, 127, 416 Nullspace, 71, 73, 107, 144 Number of basis vectors, 96 Number of elimination steps, 3,
4, 239
O
Odd permutation, 226-227 Ohm’s Law, 118-122 Optimality, 378, 386, 394 Orthogonal, 141-200
basis, 141
complement, 145-146 eigenvalues, 272 matrix, 175 projection, 152-159 SVD, 148
unit vectors, 141
vectors and subspaces, 141-151 See also Gram-Schmidt process
Orthogonal subspace, 143-149,
151-152,415,479
Orthogonalization, 174, 182, 187,
331, 375, 477, 481
Orthonormal, 141-143, 148,
174 -188
Oscillation, 234, 270, 274-275 Overdetermined, 153, 166
Overdetermined system, 153, 166 Overrelaxation (SOR), 368-371
P
PA = LU, 38, 39
Pancake, 152
Parallel planes, 7, 8 Parallelogram, 4
Parentheses, 6, 21-24, 34, 45-49,
134, 213, 332, 434, 445, 476
Partial differential equation,
371, 418
Partial pivoting, 63, 352
Particular solutions, 82, 83 Payoff matrix, 408-409, 413 Peanut butter, 380, 392-393 Permutation, 37-45, 202-203,
211-218
Permutation matrix, 203, 224, 403 Perpendicular. See Orthogonal Perron-Frobenius, 261, 262 Perturbation, 62, 353, 357 Piecewise polynomial, 347-348 Pivots, 311
pivot formulas, 202 positive, 318
test, 47-49
variables, 80-81, 384
Plane rotation, 361, 365, 367 Planes, 4-5
Poker, 377, 412-414
Polar coordinates, 282, 289, 333 Polar factorization, 333 Polynomial, 389, 478, 480, 481 Positive definite matrix, 311-350
minima, 311-317
minimum principles, 339-345 semidefinite, 314, 321
tests for positive definiteness,
318-330
Positive matrix, 60, 261
Positive semidefinite, 314, 321
Potential, 339, 349, 478
Potentials at nodes, 115
Powers of matrices, 255 Preconditioner, 368
Primal problem, 392
Principal axes, 334
Principal submatrix, 87
Prisoner’s dilemma,412
Product. See Matrix multiplication Projection, 322, 328, 338,
390-392,416,447-448, 450, 461, 465, 467, 475, 479, 481
Projection matrix, 125
Projection onto line, 152-159 Pseudoinverse, 108, 148, 161, 335 Pure exponential, 426
Pythagoras Law, 141, 154,
177, 335 Q
QΛQ^T, 320-323, 327
QR algorithm, 351, 359, 364-365
QR factorization. See Gram-Schmidt process
Quantum mechanics, 249
R
R^n, 69, 72-73, 288
Range as column space, 92
Rank of matrix, 83, 98, 104
Rank one, 87, 107-114, 138, 140,
329,333,337,417-418,
438-439,464,473,479,480
Ratios of determinants, 1,
222, 224
Rayleigh’s principle, 342 Reduced costs, 386, 396 Reflection matrix, 125 Regression analysis, 153 Repeated eigenvalues, 246 Rescaling, 390
Revised simplex method, 389 Right-handed axes, 175, 223 Right-inverse, 338, 466 Roots of unity, 189-190 Rotation matrix, 125, 131,
247, 365
Roundoff errors, 61-63, 333, 352,
355-356,359,479
Row at a time, 372
Row exchanges, 32-44
Row picture, 428, 480
Row rank = column rank, 105 Row Reduced Form R, 77-78 Row space, 102-110, 116-117,
144-148,331
Row times column, 20 RR^T and R^T R, 51, 52
S
S^{-1}AS, 132, 245-248, 285, 293, 299, 301, 324, 477
Saddle point, 311-317, 408
Scalar, 6, 19-71, 73-75, 126, 143, 234, 278, 282, 339, 415, 478
Schur complement, 31, 219, 431,
475, 480
Schur’s lemma, 296
Schwarz inequality, 154-155,
183, 250
Semidefinite, 314, 321, 327, 329,
333-334, 467, 475, 480, 488
Sensitivity analysis, 396 Separating hyperplane, 398-399
Shadow price, 393, 396 Shearing, 132-133
Shifted inverse power method, 360 Shortest path problem, 404 Shortest spanning tree, 405
Sigma notation, 21
Signs of eigenvalues, 271, 308,
311,314,318,324-326,329,
346,478
Similar matrix, 294, 296, 306, 324, 361, 422, 480
Similarity transformation,
293-306
Simplex method, 377, 379,
382-391
Simultaneous diagonalization, 326 Singular cases, 3, 7-11, 13 Singular matrix, 38, 204
Singular Value Decomposition
(SVD), 331-337
Skew-Hermitian matrices,
288, 298
Skew-symmetric matrices, 410 Slack variable, 379, 383
SOR (successive overrelaxation),
368,369
Space. See Vector space
Spanning a space, 94 Spanning tree, 117
Sparse matrix, 59, 348 Special solutions, 80, 81, 104 Spectral radius, 351
Spectral theorem, 285
Square root matrix, 193, 320, 332,
334, 336
Stable matrix, 290, 332
Stability, 270, 273
Staircase patterns, 78
Standard basis, 174
Statistics, 122, 153, 162, 172, 325 Steady state, 257-259, 261,
263-264,273,275,306,309,
360, 478
Steepest descent, 390 Stiffness matrix, 119, 348
Stopping test, 386, 388, 391, 471 Stretching matrix, 125
String of eigenvectors, 423, 427 Submatrix, 44, 78, 148, 196,
213,223,224,296,318-319,
346, 404, 407, 433, 438,
450, 472 Subspace, 70, 98
fundamental, 102-115, 123,
137-139,187,392,477,481
orthogonal, 114, 141-200, 399, 446, 477, 479, 481
Successive overrelaxation (SOR), 368, 369
Sum, 7-8, 21, 70-73, 82, 115,
126-127,176-178,181-184
Sum of spaces, 415-421
Sum of squares, 142, 160, 166,
177,182,199,318-323, 327, 464
Summation, 142, 160, 168, 177,
182,199,318-320,327
Sunflower, 255 Superposition, 237 SVD. See singular value
decomposition Sylvester law of inertia, 324 Symmetric matrix, 50-58
eigenvalues and eigenvectors,
280, 286, 298
QΛQ^T, 320-323, 327 symmetric LDL^T, 51
T
Tableau, 386-388
Taylor series, 315
Tensor, 2, 418
Tests for positive definiteness,
318-330
Tetrahedron, 158, 471 Thorp, 412
Topology matrix, 115 Trace, 239-241, 243-244,
250-253
Transformation, 126-129,
131-136
Transition matrix, 258-259,
263-264, 458
Transportation problem, 381, 406 Transpose matrix, 3, 45-51
Tree, 117, 123-124, 255, 405, 407
Triangle inequality, 157, 262,
358, 480
Tukey, John, 194
Two-person game, 408
Two-point boundary-value problem, 59
U
Uncertainty principle, 250
Underdetermined, 161
Uniqueness, 69
Unit circle, 190, 282, 298
Unit vector, 143, 174
Unitary matrix, 286, 298, 331
Upper triangular matrix, 32, 181
V
Value of game, 409
Vandermonde matrix, 109 Variances and weighted least
squares, 169 Vector, 2-9, 19-24
Vector spaces, 69-140 fundamental subspaces,
102-113
linear transformation, 125-137 orthogonality, 141
product, sum, and intersection,
415-421 subspaces, 102-113
Vertical distances, 166 Voltage law. See Kirchhoff Volume, 201
von Neumann’s model, 262 von Neumann’s theory, 412
W
Wave equation, 275 Weak duality, 393 Weighted average, 169 Wilkinson, 355 Wronskian, 268
Z
Zero determinant, 204
Zero eigenvalue, 109, 236, 238,
241-243,246,488
Zero in pivot position, 13, 28, 33,
37-38,42,48-49,78-84,89, 105, 202, 474
Zero matrix, 300 Zero-sum game, 409 Zero vector, 69
Linear Algebra in a Nutshell
A is n by n
Nonsingular
A is invertible.
The columns are independent.
The rows are independent.
The determinant is not zero.
Ax = 0 has one solution x = 0.
Ax = b has one solution x = A-1b.
A has n (nonzero) pivots.
A has full rank r = n.
The reduced row echelon form is R = I.
The column space is all of R^n.
The row space is all of R^n.
All eigenvalues are nonzero.
A^T A is symmetric positive definite.
A has n (positive) singular values.
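
To make the checklist concrete, here is a minimal sketch (not from the text) that checks several of the nonsingular conditions numerically with NumPy; the 3 by 3 matrix A and the right-hand side b below are arbitrary examples chosen only for illustration.

    # Sketch: numerical checks of the "nonsingular" conditions (example matrix only).
    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])
    n = A.shape[0]

    print("rank r =", np.linalg.matrix_rank(A), "and n =", n)       # full rank r = n
    print("determinant =", np.linalg.det(A))                        # not zero
    print("eigenvalues =", np.linalg.eigvals(A))                    # all nonzero
    print("singular values =", np.linalg.svd(A, compute_uv=False))  # n positive values
    print("eigenvalues of A^T A =", np.linalg.eigvalsh(A.T @ A))    # all positive

    b = np.array([4.0, 4.0, 4.0])
    print("unique solution of Ax = b:", np.linalg.solve(A, b))      # x = A^{-1} b
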
Singular
A is not invertible.
The columns are dependent.
The rows are dependent.
The determinant is zero.
Ax = 0 has infinitely many solutions.
Ax = b has no solution or infinitely many.
A has r < n pivots.
A has rank r < n.
R has at least one zero row.
The column space has dimension r < n.
The row space has dimension r < n.
Zero is an eigenvalue of A.
A^T A is only semidefinite.
A has r < n singular values.
Each line of the singular column can be made quantitative using r.
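
The same checks applied to a singular example show the other column of the table; this sketch (again not from the text) uses an arbitrary matrix B whose third column is the sum of its first two, so B has rank r = 2.

    # Sketch: numerical checks of the "singular" conditions (example matrix only).
    import numpy as np

    B = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])
    n = B.shape[0]

    print("rank r =", np.linalg.matrix_rank(B), "< n =", n)
    print("determinant =", np.linalg.det(B))                        # zero (up to roundoff)
    print("eigenvalues =", np.linalg.eigvals(B))                    # zero is an eigenvalue
    print("singular values =", np.linalg.svd(B, compute_uv=False))  # only r = 2 are positive

    # Bx = 0 has infinitely many solutions; (1, 1, -1) is one nonzero vector in the nullspace.
    x = np.array([1.0, 1.0, -1.0])
    print("B @ x =", B @ x)
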