
The Matrix Cookbook
[ http://matrixcookbook.com ]
Kaare Brandt Petersen Michael Syskind Pedersen
Version: November 15, 2012

Introduction
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.
Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books – see the references for a full list.
Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.
It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing, and the version will be apparent from the date in the header.
Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at cookbook@2302.dk.
Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.
Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Markus Froeb, Michael Hubatka, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.

Contents

1 Basics                                                  6
  1.1 Trace                                               6
  1.2 Determinant                                         6
  1.3 The Special Case 2x2                                7

2 Derivatives                                             8
  2.1 Derivatives of a Determinant                        8
  2.2 Derivatives of an Inverse                           9
  2.3 Derivatives of Eigenvalues                          10
  2.4 Derivatives of Matrices, Vectors and Scalar Forms   10
  2.5 Derivatives of Traces                               12
  2.6 Derivatives of vector norms                         14
  2.7 Derivatives of matrix norms                         14
  2.8 Derivatives of Structured Matrices                  14

3 Inverses                                                17
  3.1 Basic                                               17
  3.2 Exact Relations                                     18
  3.3 Implication on Inverses                             20
  3.4 Approximations                                      20
  3.5 Generalized Inverse                                 21
  3.6 Pseudo Inverse                                      21

4 Complex Matrices                                        24
  4.1 Complex Derivatives                                 24
  4.2 Higher order and non-linear derivatives             26
  4.3 Inverse of complex sum                              27

5 Solutions and Decompositions                            28
  5.1 Solutions to linear equations                       28
  5.2 Eigenvalues and Eigenvectors                        30
  5.3 Singular Value Decomposition                        31
  5.4 Triangular Decomposition                            32
  5.5 LU decomposition                                    32
  5.6 LDM decomposition                                   33
  5.7 LDL decompositions                                  33

6 Statistics and Probability                              34
  6.1 Definition of Moments                               34
  6.2 Expectation of Linear Combinations                  35
  6.3 Weighted Scalar Variable                            36

7 Multivariate Distributions                              37
  7.1 Cauchy                                              37
  7.2 Dirichlet                                           37
  7.3 Normal                                              37
  7.4 Normal-Inverse Gamma                                37
  7.5 Gaussian                                            37
  7.6 Multinomial                                         37
  7.7 Student's t                                         37
  7.8 Wishart                                             38
  7.9 Wishart, Inverse                                    39

8 Gaussians                                               40
  8.1 Basics                                              40
  8.2 Moments                                             42
  8.3 Miscellaneous                                       44
  8.4 Mixture of Gaussians                                44

9 Special Matrices                                        46
  9.1 Block matrices                                      46
  9.2 Discrete Fourier Transform Matrix, The              47
  9.3 Hermitian Matrices and skew-Hermitian               48
  9.4 Idempotent Matrices                                 49
  9.5 Orthogonal matrices                                 49
  9.6 Positive Definite and Semi-definite Matrices        50
  9.7 Singleentry Matrix, The                             52
  9.8 Symmetric, Skew-symmetric/Antisymmetric             54
  9.9 Toeplitz Matrices                                   54
  9.10 Transition matrices                                55
  9.11 Units, Permutation and Shift                       56
  9.12 Vandermonde Matrices                               57

10 Functions and Operators                                58
  10.1 Functions and Series                               58
  10.2 Kronecker and Vec Operator                         59
  10.3 Vector Norms                                       61
  10.4 Matrix Norms                                       61
  10.5 Rank                                               62
  10.6 Integral Involving Dirac Delta Functions           62
  10.7 Miscellaneous                                      63

A One-dimensional Results                                 64
  A.1 Gaussian                                            64
  A.2 One Dimensional Mixture of Gaussians                65

B Proofs and Details                                      66
  B.1 Misc Proofs                                         66

Notation and Nomenclature

A           A matrix
Aij         Matrix indexed for some purpose
Ai          Matrix indexed for some purpose
Aij         Matrix indexed for some purpose
An          Matrix indexed for some purpose or the n.th power of a square matrix
A−1         The inverse matrix of the matrix A
A+          The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A1/2        The square root of a matrix (if unique), not elementwise
(A)ij       The (i, j).th entry of the matrix A
Aij         The (i, j).th entry of the matrix A
[A]ij       The ij-submatrix, i.e. A with i.th row and j.th column deleted
a           Vector (column-vector)
ai          Vector indexed for some purpose
ai          The i.th element of the vector a
a           Scalar
Rz          Real part of a scalar
Rz          Real part of a vector
RZ          Real part of a matrix
Iz          Imaginary part of a scalar
Iz          Imaginary part of a vector
IZ          Imaginary part of a matrix
det(A)      Determinant of A
Tr(A)       Trace of the matrix A
diag(A)     Diagonal matrix of the matrix A, i.e. (diag(A))ij = δij Aij
eig(A)      Eigenvalues of the matrix A
vec(A)      The vector-version of the matrix A (see Sec. 10.2.2)
sup         Supremum of a set
||A||       Matrix norm (subscript if any denotes what norm)
AT          Transposed matrix
A−T         The inverse of the transposed and vice versa, A−T = (A−1)T = (AT)−1
A∗          Complex conjugated matrix
AH          Transposed and complex conjugated matrix (Hermitian)
A ◦ B       Hadamard (elementwise) product
A ⊗ B       Kronecker product
0           The null matrix. Zero in all entries.
I           The identity matrix
Jij         The single-entry matrix, 1 at (i, j) and zero elsewhere
Σ           A positive definite matrix
Λ           A diagonal matrix

1 Basics

(AB)−1 = B−1A−1                                              (1)
(ABC...)−1 = ...C−1B−1A−1                                    (2)
(AT)−1 = (A−1)T                                              (3)
(A + B)T = AT + BT                                           (4)
(AB)T = BT AT                                                (5)
(ABC...)T = ...CT BT AT                                      (6)
(AH)−1 = (A−1)H                                              (7)
(A + B)H = AH + BH                                           (8)
(AB)H = BH AH                                                (9)
(ABC...)H = ...CH BH AH                                      (10)

1.1 Trace

Tr(A) = Σi Aii                                               (11)
Tr(A) = Σi λi,    λi = eig(A)                                (12)
Tr(A) = Tr(AT)                                               (13)
Tr(AB) = Tr(BA)                                              (14)
Tr(A + B) = Tr(A) + Tr(B)                                    (15)
Tr(ABC) = Tr(BCA) = Tr(CAB)                                  (16)
aT a = Tr(a aT)                                              (17)

1.2 Determinant

Let A be an n×n matrix.

det(A) = Πi λi,    λi = eig(A)                               (18)
det(cA) = c^n det(A),    if A ∈ Rn×n                         (19)
det(AT) = det(A)                                             (20)
det(AB) = det(A) det(B)                                      (21)
det(A−1) = 1/det(A)                                          (22)
det(A^n) = det(A)^n                                          (23)
det(I + uvT) = 1 + uT v                                      (24)

For n = 2:
det(I + A) = 1 + det(A) + Tr(A)                              (25)

For n = 3:
det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 − (1/2)Tr(A^2)    (26)
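A few of the identities above are easy to sanity-check numerically. The following is an illustrative NumPy sketch (not part of the original cookbook); the random test matrices and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = rng.standard_normal((3, n, n))
u, v = rng.standard_normal((2, n))

# (16): the trace is invariant under cyclic permutations
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

# (21): the determinant is multiplicative
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# (24): determinant of a rank-one update of the identity
assert np.isclose(np.linalg.det(np.eye(n) + np.outer(u, v)), 1.0 + u @ v)
```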

For n = 4:
det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 − (1/2)Tr(A^2)
             + (1/6)Tr(A)^3 − (1/2)Tr(A)Tr(A^2) + (1/3)Tr(A^3)             (27)

For small ε, the following approximation holds
det(I + εA) ≅ 1 + det(A) + εTr(A) + (1/2)ε^2 Tr(A)^2 − (1/2)ε^2 Tr(A^2)    (28)

1.3 The Special Case 2x2

Consider the matrix A

A = [ A11  A12 ]
    [ A21  A22 ]

Determinant and trace

det(A) = A11 A22 − A12 A21                                   (29)
Tr(A) = A11 + A22                                            (30)

Eigenvalues

λ^2 − λ·Tr(A) + det(A) = 0

λ1 = ( Tr(A) + sqrt(Tr(A)^2 − 4 det(A)) ) / 2        λ2 = ( Tr(A) − sqrt(Tr(A)^2 − 4 det(A)) ) / 2

λ1 + λ2 = Tr(A)        λ1 λ2 = det(A)

Eigenvectors

v1 ∝ [ A12 ; λ1 − A11 ]        v2 ∝ [ A12 ; λ2 − A11 ]

Inverse

A−1 = (1/det(A)) [  A22  −A12 ]
                 [ −A21   A11 ]                              (31)
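The closed forms (29)-(31) for the 2x2 case can be checked directly; a minimal sketch with an arbitrary test matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Eigenvalues from the characteristic polynomial lambda^2 - Tr(A)*lambda + det(A) = 0
disc = np.sqrt(tr ** 2 - 4.0 * det)
lam = np.array([(tr + disc) / 2.0, (tr - disc) / 2.0])
assert np.allclose(np.sort(lam), np.sort(np.linalg.eigvals(A).real))

# Closed-form inverse (31)
A_inv = np.array([[A[1, 1], -A[0, 1]],
                  [-A[1, 0], A[0, 0]]]) / det
assert np.allclose(A_inv, np.linalg.inv(A))
```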

2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See Section 2.8 for differentiation of structured matrices. The basic assumption can be written as a formula:
∂Xkl / ∂Xij = δik δlj                                        (32)

that is, for vector forms,

[∂x/∂y]i = ∂xi/∂y        [∂x/∂y]i = ∂x/∂yi        [∂x/∂y]ij = ∂xi/∂yj

The following rules are general and very useful when deriving the differential of an expression ([19]):

∂A = 0                            (A is a constant)          (33)
∂(αX) = α ∂X                                                 (34)
∂(X + Y) = ∂X + ∂Y                                           (35)
∂(Tr(X)) = Tr(∂X)                                            (36)
∂(XY) = (∂X)Y + X(∂Y)                                        (37)
∂(X ◦ Y) = (∂X) ◦ Y + X ◦ (∂Y)                               (38)
∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)                               (39)
∂(X−1) = −X−1(∂X)X−1                                         (40)
∂(det(X)) = Tr(adj(X) ∂X)                                    (41)
∂(det(X)) = det(X) Tr(X−1 ∂X)                                (42)
∂(ln(det(X))) = Tr(X−1 ∂X)                                   (43)
∂XT = (∂X)T                                                  (44)
∂XH = (∂X)H                                                  (45)

2.1 Derivatives of a Determinant

2.1.1 General form

∂det(Y)/∂x = det(Y) Tr[ Y−1 ∂Y/∂x ]                          (46)

Σk (∂det(X)/∂Xik) Xjk = δij det(X)                           (47)

∂^2 det(Y)/∂x^2 = det(Y) [ Tr[ Y−1 ∂(∂Y/∂x)/∂x ]
                           + Tr[ Y−1 ∂Y/∂x ] Tr[ Y−1 ∂Y/∂x ]
                           − Tr[ (Y−1 ∂Y/∂x)(Y−1 ∂Y/∂x) ] ]  (48)

2.1.2 Linear forms

∂det(X)/∂X = det(X)(X−1)T                                    (49)

Σk (∂det(X)/∂Xik) Xjk = δij det(X)                           (50)

∂det(AXB)/∂X = det(AXB)(X−1)T = det(AXB)(XT)−1               (51)

2.1.3 Square forms

If X is square and invertible, then

∂det(XT AX)/∂X = 2 det(XT AX) X−T                            (52)

If X is not square but A is symmetric, then

∂det(XT AX)/∂X = 2 det(XT AX) AX(XT AX)−1                    (53)

If X is not square and A is not symmetric, then

∂det(XT AX)/∂X = det(XT AX)( AX(XT AX)−1 + AT X(XT AT X)−1 ) (54)

2.1.4 Other nonlinear forms

Some special cases are (see [9, 7])

∂ln |det(XT X)| / ∂X = 2(X+)T                                (55)
∂ln |det(XT X)| / ∂X+ = −2XT                                 (56)
∂ln |det(X)| / ∂X = (X−1)T = (XT)−1                          (57)
∂det(X^k)/∂X = k det(X^k) X−T                                (58)

2.2 Derivatives of an Inverse

From [27] we have the basic identity

∂Y−1/∂x = −Y−1 (∂Y/∂x) Y−1                                   (59)
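The determinant and inverse derivatives above lend themselves to a finite-difference check. The sketch below (not from the original text; test matrix, step size and tolerance are arbitrary) verifies (49) and (59):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 1e-6
X = rng.standard_normal((n, n)) + n * np.eye(n)   # keep X well conditioned

# (49): d det(X)/dX = det(X) (X^{-1})^T, checked entry-wise by central differences
grad = np.linalg.det(X) * np.linalg.inv(X).T
num = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        num[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * eps)
assert np.allclose(num, grad, atol=1e-4)

# (59): the directional derivative of X^{-1} along dX is -X^{-1} dX X^{-1}
dX = rng.standard_normal((n, n))
lhs = (np.linalg.inv(X + eps * dX) - np.linalg.inv(X - eps * dX)) / (2 * eps)
assert np.allclose(lhs, -np.linalg.inv(X) @ dX @ np.linalg.inv(X), atol=1e-4)
```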

from which it follows

∂(X−1)kl / ∂Xij = −(X−1)ki (X−1)jl                           (60)
∂(aT X−1 b) / ∂X = −X−T a bT X−T                             (61)
∂det(X−1) / ∂X = −det(X−1)(X−1)T                             (62)
∂Tr(A X−1 B) / ∂X = −(X−1 B A X−1)T                          (63)
∂Tr((X + A)−1) / ∂X = −((X + A)−1 (X + A)−1)T                (64)

From [32] we have the following result: Let A be an n × n invertible square matrix, W be the inverse of A, and J(A) an n × n-variate and differentiable function with respect to A. Then the partial differentials of J with respect to A and W satisfy

∂J/∂A = −A−T (∂J/∂W) A−T

2.3 Derivatives of Eigenvalues

∂/∂X Σ eig(X) = ∂/∂X Tr(X) = I                               (65)
∂/∂X Π eig(X) = ∂/∂X det(X) = det(X) X−T                     (66)

If A is real and symmetric, λi and vi are distinct eigenvalues and eigenvectors of A (see (276)) with viT vi = 1, then [33]

∂λi = viT ∂(A) vi                                            (67)
∂vi = (λi I − A)+ ∂(A) vi                                    (68)

2.4 Derivatives of Matrices, Vectors and Scalar Forms

2.4.1 First Order

∂(xT a)/∂x = ∂(aT x)/∂x = a                                  (69)
∂(aT X b)/∂X = a bT                                          (70)
∂(aT XT b)/∂X = b aT                                         (71)
∂(aT X a)/∂X = ∂(aT XT a)/∂X = a aT                          (72)
∂X/∂Xij = Jij                                                (73)
∂(XA)ij/∂Xmn = δim (A)nj = (Jmn A)ij                         (74)
∂(XT A)ij/∂Xmn = δin (A)mj = (Jnm A)ij                       (75)
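A quick numerical illustration of the first-order results (70) and (74); this is a hedged NumPy sketch with arbitrary shapes and seed, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 4
a, b = rng.standard_normal(n), rng.standard_normal(m)
X = rng.standard_normal((n, m))
eps = 1e-6

# (70): d(a^T X b)/dX = a b^T, checked by central differences
num = np.zeros_like(X)
for i in range(n):
    for j in range(m):
        E = np.zeros((n, m)); E[i, j] = eps
        num[i, j] = (a @ (X + E) @ b - a @ (X - E) @ b) / (2 * eps)
assert np.allclose(num, np.outer(a, b), atol=1e-6)

# (74): perturbing X by the single-entry matrix J^{mn} changes XA by J^{mn} A
A = rng.standard_normal((m, m))
Jmn = np.zeros((n, m)); Jmn[1, 2] = 1.0
assert np.allclose((X + Jmn) @ A - X @ A, Jmn @ A)
```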

2.4.2 Second Order

∂/∂Xij Σklmn Xkl Xmn = 2 Σkl Xkl                             (76)
∂(bT XT X c)/∂X = X(b cT + c bT)                             (77)
∂/∂x (Bx + b)T C (Dx + d) = BT C (Dx + d) + DT CT (Bx + b)   (78)
∂(XT B X)kl / ∂Xij = δlj (XT B)ki + δkj (B X)il              (79)
∂(XT B X) / ∂Xij = XT B Jij + Jji B X,    (Jij)kl = δik δjl  (80)

See Sec 9.7 for useful properties of the single-entry matrix Jij.

∂(xT B x)/∂x = (B + BT) x                                    (81)
∂(bT XT D X c)/∂X = DT X b cT + D X c bT                     (82)
∂/∂X (Xb + c)T D (Xb + c) = (D + DT)(Xb + c) bT              (83)

Assume W is symmetric, then

∂/∂s (x − As)T W (x − As) = −2 AT W (x − As)                 (84)
∂/∂x (x − s)T W (x − s) = 2 W (x − s)                        (85)
∂/∂s (x − s)T W (x − s) = −2 W (x − s)                       (86)
∂/∂x (x − As)T W (x − As) = 2 W (x − As)                     (87)
∂/∂A (x − As)T W (x − As) = −2 W (x − As) sT                 (88)

As a case with complex values the following holds

∂(a − xH b)^2 / ∂x = −2 b (a − xH b)*                        (89)

This formula is also known from the LMS algorithm [14].

2.4.3 Higher-order and non-linear

∂(X^n)kl / ∂Xij = Σ(r=0..n−1) (X^r Jij X^(n−1−r))kl          (90)

For proof of the above, see B.1.3.

∂/∂X aT X^n b = Σ(r=0..n−1) (X^r)T a bT (X^(n−1−r))T         (91)

∂/∂X aT (X^n)T X^n b = Σ(r=0..n−1) [ X^(n−1−r) a bT (X^n)T X^r
                                     + (X^r)T X^n a bT (X^(n−1−r))T ]      (92)

See B.1.3 for a proof.

Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant. Then

∂/∂x sT A r = (∂s/∂x)T A r + (∂r/∂x)T AT s                                 (93)

∂/∂x (Ax)T(Ax) / ((Bx)T(Bx)) = ∂/∂x (xT AT A x) / (xT BT B x)              (94)
    = 2 AT A x / (xT BT B x) − 2 (xT AT A x) BT B x / (xT BT B x)^2        (95)

2.4.4 Gradient and Hessian

Using the above we have for the gradient and the Hessian

f = xT A x + bT x                                                          (96)
∇x f = ∂f/∂x = (A + AT) x + b                                              (97)
∂^2 f / ∂x∂xT = A + AT                                                     (98)

2.5 Derivatives of Traces

Assume F(X) to be a differentiable function of each of the elements of X. It then holds that

∂Tr(F(X))/∂X = f(X)T

where f(·) is the scalar derivative of F(·).

2.5.1 First Order

∂Tr(X)/∂X = I                                                              (99)
∂Tr(XA)/∂X = AT                                                            (100)
∂Tr(AXB)/∂X = AT BT                                                        (101)
∂Tr(AXT B)/∂X = BA                                                         (102)
∂Tr(XT A)/∂X = A                                                           (103)
∂Tr(AXT)/∂X = A                                                            (104)
∂Tr(A ⊗ X)/∂X = Tr(A) I                                                    (105)
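As a sanity check of the trace derivatives, the sketch below verifies (101) by central differences; an illustrative example with arbitrary test matrices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A, B, X = rng.standard_normal((3, n, n))
eps = 1e-6

# (101): d Tr(AXB)/dX = A^T B^T
num = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        num[i, j] = (np.trace(A @ (X + E) @ B)
                     - np.trace(A @ (X - E) @ B)) / (2 * eps)
assert np.allclose(num, A.T @ B.T, atol=1e-6)
```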

2.5.2 Second Order

∂Tr(X^2)/∂X = 2XT                                                          (106)
∂Tr(X^2 B)/∂X = (XB + BX)T                                                 (107)
∂Tr(XT B X)/∂X = BX + BT X                                                 (108)
∂Tr(B X XT)/∂X = BX + BT X                                                 (109)
∂Tr(X XT B)/∂X = BX + BT X                                                 (110)
∂Tr(X B XT)/∂X = X BT + X B                                                (111)
∂Tr(B XT X)/∂X = X BT + X B                                                (112)
∂Tr(XT X B)/∂X = X BT + X B                                                (113)
∂Tr(A X B X)/∂X = AT XT BT + BT XT AT                                      (114)
∂Tr(XT X)/∂X = ∂Tr(X XT)/∂X = 2X                                           (115)
∂Tr(BT XT C X B)/∂X = CT X B BT + C X B BT                                 (116)
∂Tr(XT B X C)/∂X = B X C + BT X CT                                         (117)
∂Tr(A X B XT C)/∂X = AT CT X BT + C A X B                                  (118)
∂Tr[(A X B + C)(A X B + C)T]/∂X = 2 AT (A X B + C) BT                      (119)

See [7].

2.5.3 Higher Order

∂Tr(X ⊗ X)/∂X = ∂(Tr(X) Tr(X))/∂X = 2 Tr(X) I                              (120)
∂Tr(X^k)/∂X = k (X^(k−1))T                                                 (121)
∂Tr(A X^k)/∂X = Σ(r=0..k−1) (X^r A X^(k−r−1))T                             (122)
∂Tr[BT XT C X XT C X B]/∂X = C X XT C X B BT + CT X B BT XT CT X
                             + C X B BT XT C X + CT X XT CT X B BT         (123)

2.5.4 Other

∂Tr(A X−1 B)/∂X = −(X−1 B A X−1)T = −X−T AT BT X−T                         (124)

Assume B and C to be symmetric, then

∂Tr[(XT C X)−1 A]/∂X = −(C X (XT C X)−1)(A + AT)(XT C X)−1                 (125)

∂Tr[(XT C X)−1 (XT B X)]/∂X = −2 C X (XT C X)−1 XT B X (XT C X)−1
                              + 2 B X (XT C X)−1                           (126)

∂Tr[(A + XT C X)−1 (XT B X)]/∂X = −2 C X (A + XT C X)−1 XT B X (A + XT C X)−1
                                  + 2 B X (A + XT C X)−1                   (127)

See [7].

∂Tr(sin(X))/∂X = cos(X)T                                                   (128)

2.6 Derivatives of vector norms

2.6.1 Two-norm

∂/∂x ||x − a||2 = (x − a) / ||x − a||2                                     (129)

∂/∂x (x − a)/||x − a||2 = I/||x − a||2 − (x − a)(x − a)T / ||x − a||2^3    (130)

∂||x||2^2/∂x = ∂||xT x||2/∂x = 2x                                          (131)

2.7 Derivatives of matrix norms

For more on matrix norms, see Sec. 10.4.

2.7.1 Frobenius norm

∂/∂X ||X||F^2 = ∂/∂X Tr(X XH) = 2X                                         (132)

See (248). Note that this is also a special case of the result in equation (119).

2.8 Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A):

df/dAij = Σkl (∂f/∂Akl)(∂Akl/∂Aij) = Tr[ (∂f/∂A)T ∂A/∂Aij ]                (133)
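The "Other" results above are also easy to verify numerically. Below is a sketch (arbitrary test matrices, not part of the original text) checking (124) by central differences:

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps = 4, 1e-6
A, B = rng.standard_normal((2, n, n))
X = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned

# (124): d Tr(A X^{-1} B)/dX = -(X^{-1} B A X^{-1})^T
grad = -(np.linalg.inv(X) @ B @ A @ np.linalg.inv(X)).T
num = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        num[i, j] = (np.trace(A @ np.linalg.inv(X + E) @ B)
                     - np.trace(A @ np.linalg.inv(X - E) @ B)) / (2 * eps)
assert np.allclose(num, grad, atol=1e-4)
```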

The matrix differentiated with respect to itself is in this document referred to as the structure matrix of A and is defined simply by

∂A/∂Aij = Sij                                                (134)

If A has no special structure we have simply Sij = Jij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices, see Sec. 9.7.6 for more examples of structure matrices.

2.8.1 The Chain Rule

Sometimes the objective is to find the derivative of a matrix which is a function of another matrix. Let U = f(X); the goal is then to find the derivative of the function g(U) with respect to X:

∂g(U)/∂X = ∂g(f(X))/∂X                                       (135)

The chain rule can then be written the following way:

∂g(U)/∂X = [∂g(U)/∂xij] = Σ(k=1..M) Σ(l=1..N) (∂g(U)/∂ukl)(∂ukl/∂xij)     (136)

Using matrix notation, this can be written as:

∂g(U)/∂Xij = Tr[ (∂g(U)/∂U)T ∂U/∂Xij ].                      (137)

2.8.2 Symmetric

If A is symmetric, then Sij = Jij + Jji − Jij Jij and therefore

df/dA = [∂f/∂A] + [∂f/∂A]T − diag[∂f/∂A]                     (138)

That is, e.g., ([5]):

∂Tr(AX)/∂X = A + AT − (A ◦ I),   see (142)                   (139)
∂det(X)/∂X = det(X)(2X−1 − (X−1 ◦ I))                        (140)
∂ln det(X)/∂X = 2X−1 − (X−1 ◦ I)                             (141)

2.8.3 Diagonal

If X is diagonal, then ([19]):

∂Tr(AX)/∂X = A ◦ I                                           (142)
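The structured rules (139) and (142) can be checked by differentiating only with respect to the free parameters of the structured matrix. The sketch below (not from the original text; the parameterization and tolerances are choices made here for illustration) does this for Tr(AX) with symmetric and with diagonal X:

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps = 3, 1e-6
A = rng.standard_normal((n, n))
f = lambda Y: np.trace(A @ Y)

# Symmetric X: perturb (i, j) and (j, i) together, one free parameter per pair
X = rng.standard_normal((n, n)); X = X + X.T
G = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n)); E[i, j] = eps; E[j, i] = eps
        G[i, j] = G[j, i] = (f(X + E) - f(X - E)) / (2 * eps)
# (139): for symmetric X, d Tr(AX)/dX = A + A^T - (A o I)
assert np.allclose(G, A + A.T - np.diag(np.diag(A)), atol=1e-6)

# Diagonal X: only the diagonal entries are free
D = np.diag(rng.standard_normal(n))
Gd = np.zeros((n, n))
for i in range(n):
    E = np.zeros((n, n)); E[i, i] = eps
    Gd[i, i] = (f(D + E) - f(D - E)) / (2 * eps)
# (142): for diagonal X, d Tr(AX)/dX = A o I
assert np.allclose(Gd, np.diag(np.diag(A)), atol=1e-6)
```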

2.8.4 Toeplitz
Like symmetric and diagonal matrices, Toeplitz matrices also have a special structure which should be taken into account when taking the derivative with respect to a matrix with Toeplitz structure.
=
∂Tr(AT) ∂T
∂Tr(TA) ∂T
 Tr(A)
 Tr([AT ]  1n
…. . .. ..
(143)
))
=  Tr([[AT]1n]2,n−1)
. .
≡ α(A)
Tr([AT ]n1) Tr(A)
Tr([[AT ]1n]n−1,2) …
… .. ..
.. ..
 Tr([[AT]1n]n−1,2) 
A1n · · · Tr([[AT ]1n]2,n−1) Tr([AT ]1n))
As can be seen, the derivative α(A) also has a Toeplitz structure. Each value on the diagonal is the sum of all the diagonal values in A; the values in the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in AT. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields

∂Tr(AT)/∂T = ∂Tr(TA)/∂T = α(A) + α(A)T − α(A) ◦ I            (144)
An1  . . . 
· · ·
Tr([AT ]n1) Tr(A)


3 Inverses

3.1 Basic

3.1.1 Definition

The inverse A−1 of a matrix A ∈ Cn×n is defined such that

AA−1 = A−1A = I,                                             (145)

where I is the n × n identity matrix. If A−1 exists, A is said to be nonsingular. Otherwise, A is said to be singular (see e.g. [12]).

3.1.2 Cofactors and Adjoint

The submatrix of a matrix A, denoted by [A]ij, is a (n − 1) × (n − 1) matrix obtained by deleting the ith row and the jth column of A. The (i, j) cofactor of a matrix is defined as

cof(A, i, j) = (−1)^(i+j) det([A]ij),                        (146)

The matrix of cofactors can be created from the cofactors

cof(A) = [ cof(A, 1, 1)  · · ·  cof(A, 1, n) ]
         [      .        cof(A, i, j)   .    ]
         [ cof(A, n, 1)  · · ·  cof(A, n, n) ]               (147)

The adjoint matrix is the transpose of the cofactor matrix

adj(A) = (cof(A))T,                                          (148)

3.1.3 Determinant

The determinant of a matrix A ∈ Cn×n is defined as (see [12])

det(A) = Σ(j=1..n) (−1)^(j+1) A1j det([A]1j)                 (149)
       = Σ(j=1..n) A1j cof(A, 1, j).                         (150)

3.1.4 Construction

The inverse matrix can be constructed, using the adjoint matrix, by

A−1 = (1/det(A)) · adj(A)                                    (151)

For the case of 2 × 2 matrices, see section 1.3.

3.1.5 Condition number
The condition number of a matrix c(A) is the ratio between the largest and the
smallest singular value of a matrix (see Section 5.3 on singular values),
c(A) = d+ / d−                                               (152)
The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here
c(A) = ∥A∥ · ∥A−1∥, (153)
where ∥ · ∥ is a norm such as e.g the 1-norm, the 2-norm, the ∞-norm or the Frobenius norm (see Sec 10.4 for more on matrix norms).
The 2-norm of A equals 􏰀(max(eig(AHA))) [12, p.57]. For a symmetric matrix, this reduces to ||A||2 = max(|eig(A)|) [12, p.394]. If the matrix is symmetric and positive definite, ||A||2 = max(eig(A)). The condition number based on the 2-norm thus reduces to
∥A∥2 ∥A−1∥2 = max(eig(A)) max(eig(A−1)) = max(eig(A)) / min(eig(A))        (154)

3.2 Exact Relations

3.2.1 Basic

(AB)−1 = B−1A−1                                              (155)

3.2.2 The Woodbury identity

The Woodbury identity comes in many variants. The latter of the two can be found in [12]

(A + CBCT)−1 = A−1 − A−1C(B−1 + CT A−1C)−1CT A−1             (156)
(A + UBV)−1 = A−1 − A−1U(B−1 + VA−1U)−1VA−1                  (157)

If P, R are positive definite, then (see [30])

(P−1 + BT R−1B)−1BT R−1 = PBT(BPBT + R)−1                    (158)

3.2.3 The Kailath Variant

(A + BC)−1 = A−1 − A−1B(I + CA−1B)−1CA−1                     (159)

See [4, page 153].

3.2.4 Sherman-Morrison

(A + bcT)−1 = A−1 − (A−1 b cT A−1) / (1 + cT A−1 b)          (160)
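The Woodbury identity (157) and its rank-one special case (160) are easy to confirm numerically; an illustrative NumPy sketch with arbitrary, well-conditioned test matrices (not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
U = rng.standard_normal((n, k))
B = rng.standard_normal((k, k)) + k * np.eye(k)
V = rng.standard_normal((k, n))
b, c = rng.standard_normal((2, n))
inv = np.linalg.inv

# (157): Woodbury identity
lhs = inv(A + U @ B @ V)
rhs = inv(A) - inv(A) @ U @ inv(inv(B) + V @ inv(A) @ U) @ V @ inv(A)
assert np.allclose(lhs, rhs)

# (160): Sherman-Morrison
lhs = inv(A + np.outer(b, c))
rhs = inv(A) - (inv(A) @ np.outer(b, c) @ inv(A)) / (1.0 + c @ inv(A) @ b)
assert np.allclose(lhs, rhs)
```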

3.2 Exact Relations
3.2.5 The Searle Set of Identities
The following set of identities, can be found in [25, page 151],
3 INVERSES
(I + A−1)−1 (A + BBT )−1B
(A−1 + B−1)−1 A−A(A+B)−1A
A−1 + B−1 (I + AB)−1
= A(A + I)−1
= A−1 B(I + BT A−1 B)−1
= A(A + B)−1B = B(A + B)−1A = B−B(A+B)−1B
= A−1 (A + B)B−1
= I−A(I+BA)−1B
= A(I + BA)−1
Denote A = (XT X)−1 and that X is extended to include a new column vector in the end X ̃ = [X v]. Then [34]
−AXTv 􏰏 vTv−vTXAXTv
1
The following is a rank-1 update for the Moore-Penrose pseudo-inverse of real valued matrices and proof can be found in [18]. The matrix G is defined below:
(I + AB)−1A
3.2.6 Rank-1 update of inverse of inner product
(161) (162) (163) (164) (165) (166) (167)
􏰎A+ AXTvvTXAT ̃T ̃ −1 vTv−vTXAXTv
(X X) = −vTXAT
vT v−vT XAXT v
vT v−vT XAXT v 3.2.7 Rank-1 update of Moore-Penrose Inverse
Using the notation
(A+cdT)+ =A+ +G
(168)
(169) (170) (171) (172) (173)
β = v = n =
w = m =
1+dTA+c A+c
(A+)Td
(I − AA+)c (I−A+A)Td
the solution is given as six different cases, depending on the entities ||w||, ||m||, and β. Please note that for any (column) vector v it holds that v+ = vT(vTv)−1 = vT/||v||2. The solution is:
Case1of6: If||w||≠ 0and||m||≠ 0. Then
G = −vw+ − (m+)T nT + β(m+)T w+
= − 1 vwT − 1 mnT + β mwT ||w||2 ||m||2 ||m||2||w||2
Case2of6: If||w||=0and||m||≠ 0andβ=0. Then G = −vv+A+ − (m+)T nT
= − 1 vvTA+− 1 mnT ||v||2 ||m||2
(174) (175)
(176) (177)

3.3 Implication on Inverses 3 INVERSES
Case3of6: If||w||=0andβ̸=0. Then
1 β 􏰈||v||2 􏰉􏰈||m||2 􏰉T
G=βmvTA+−||v||2||m||2+|β|2 β m+v
Case4of6: If||w||≠ 0and||m||=0andβ=0. Then G = −A+nn+ − vw+
=−1A+nnT− 1vwT ||n||2 ||w||2
Case5of6: If||m||=0andβ̸=0. Then
1 β 􏰈||w||2 􏰉􏰈||n||2 􏰉T
G= βA+nwT −||n||2||w||2 +|β|2 β A+n+v β w+n Case6of6: If||w||=0and||m||=0andβ=0. Then
(179) (180)
(181)
(182) (183)
(184)
(185)
holds
(186)
(187)
3.3
||v||2 ||n||2 ||v||2||n||2 Implication on Inverses
If See [25].
G = −vv+A+ − A+nn+ + v+A+nvn+
= − 1 vvTA+− 1 A+nnT+ vTA+nvnT
β (A+)Tv+n (178)
(A+B)−1 =A−1 +B−1 then AB−1A=BA−1B 3.3.1 A PosDef identity
Assume P, R to be positive definite and invertible, then
(P−1 + BT R−1B)−1BT R−1 = PBT (BPBT + R)−1
See [30].
3.4 Approximations
The following identity is known as the Neuman series of a matrix, which when |λi| < 1 for all eigenvalues λi which is equivalent to ∞ (I−A)−1 =􏰔An n=0 ∞ (I + A)−1 = 􏰔(−1)nAn n=0 When |λi| < 1 for all eigenvalues λi, it holds that A → 0 for n → ∞, and the following approximations holds (I−A)−1 ∼= I+A+A2 (188) (I+A)−1 ∼= I−A+A2 (189) Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 20 3.5 Generalized Inverse 3 INVERSES The following approximation is from [22] and holds when A large and symmetric A − A ( I + A ) − 1 A ∼= I − A − 1 If σ2 is small compared to Q and M then (190) (191) (192) (193) (194) (195) (196) (197) Proof: (Q + σ2M)−1 ∼= Q−1 − σ2Q−1MQ−1 (Q + σ2M)−1 (QQ−1Q + σ2MQ−1Q)−1 ((I + σ2MQ−1)Q)−1 Q−1(I + σ2MQ−1)−1 = = = Q−1 − σ2Q−1MQ−1 This can be rewritten using the Taylor expansion: Q−1(I + σ2MQ−1)−1 = Q−1(I − σ2MQ−1 + (σ2MQ−1)2 − ...) ∼= 3.5 Generalized Inverse 3.5.1 Definition A generalized inverse matrix of the matrix A is any matrix A− such that (see [26]) The matrix A− is not unique. 3.6 Pseudo Inverse AA−A = A (198) 3.6.1 Definition The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A+ that fulfils I AA+A=A II A+AA+ = A+ III AA+ symmetric IV A+ A symmetric The matrix A+ is unique and does always exist. Note that in case of com- plex matrices, the symmetric condition is substituted by a condition of being Hermitian. Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 21 3.6 Pseudo Inverse 3 INVERSES 3.6.2 Properties Assume A+ to be the pseudo-inverse of A, then (See [3] for some of them) (A+ )+ (AT )+ (AH )+ (A∗ )+ (A+ A)AH (A+ A)AT (cA)+ A+ A+ (AT A)+ (AAT )+ A+ A+ (AH A)+ (AAH )+ (AB)+ f(AHA) − f(0)I f(AAH) − f(0)I = = = = = ̸= = = = = = = = = = = = = A (A+ )T (A+ )H (A+ )∗ AH AT (1/c)A+ (AT A)+ AT AT (AAT )+ A+ (AT )+ (AT )+ A+ (AH A)+ AH AH (AAH )+ A+ (AH )+ (AH )+ A+ (A+ AB)+ (ABB+ )+ A+[f(AAH)−f(0)I]A A[f (AH A) − f (0)I]A+ (199) (200) (201) (202) (203) (204) (205) (206) (207) (208) (209) (210) (211) (212) (213) (214) (215) (216) (217) (218) (219) (220) (221) (222) where A ∈ Cn×m. Assume A to have full rank, then (AA+ )(AA+ ) = (A+ A)(A+ A) = Tr(AA+ ) = Tr(A+ A) = For two matrices it hold that (AB)+ = (A⊗B)+ = 3.6.3 Construction Assume that A has full rank, then AA+ A+A rank(AA+ ) rank(A+ A) (See (See [26]) [26]) An×n Square An×m Broad An×m Tall rank(A)=n ⇒ rank(A)=n ⇒ rank(A)=m ⇒ A+ =A−1 A+ =AT(AAT)−1 A+ =(ATA)−1AT (A+ AB)+ (ABB+ )+ A+⊗B+ The so-called ”broad version” is also known as right inverse and the ”tall ver- sion” as the left inverse. Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 22 3.6 Pseudo Inverse 3 INVERSES Assume A does not have full rank, i.e. A is n×m and rank(A) = r < min(n, m). The pseudo inverse A+ can be constructed from the singular value decomposition A = UDVT , by A+ = V D−1UT (223) rrr where Ur,Dr, and Vr are the matrices with the degenerated rows and columns deleted. A different way is this: There do always exist two matrices C n × r and D r × m of rank r, such that A = CD. Using these matrices it holds that A+ = DT (DDT )−1(CT C)−1CT (224) See [3]. 
Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 23 4 Complex Matrices The complex scalar product r = pq can be written as 􏰊Rr􏰋 􏰊Rp −Ip􏰋􏰊Rq􏰋 Ir = Ip Rp Iq (225) 4.1 Complex Derivatives In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]): df(z) = ∂R(f(z)) + i∂I(f(z)) dz ∂Rz ∂Rz (226) (227) (228) 4 COMPLEX MATRICES and or in a more compact form: df(z) = −i∂R(f(z)) + ∂I(f(z)) dz ∂Iz ∂Iz • Generalized Complex Derivative: df(z) = 1􏰆∂f(z) − i∂f(z)􏰇. dz 2 ∂Rz ∂Iz • Conjugate Complex Derivative df(z) = 1􏰆∂f(z) + i∂f(z)􏰇. dz∗ 2 ∂Rz ∂Iz (229) (230) ∂f(z) = i∂f(z). ∂Iz ∂Rz A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said yo be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([24], [6]): The Generalized Complex Derivative equals the normal derivative, when f is an analytic function. For a non-analytic function such as f(z) = z∗, the derivative equals zero. The Conjugate Complex Derivative equals zero, when f is an analytic function. The Conjugate Complex Derivative has e.g been used by [21] when deriving a complex gradient. Notice: df(z) ̸= ∂f(z) + i∂f(z). (231) dz ∂Rz ∂Iz • Complex Gradient Vector: If f is a real function of a complex vector z, then the complex gradient vector is given by ([14, p. 798]) ∇f (z) = 2 df (z) (232) dz∗ = ∂f(z)+i∂f(z). ∂Rz ∂Iz Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 24 4.1 Complex Derivatives 4 COMPLEX MATRICES • Complex Gradient Matrix: If f is a real function of a complex matrix Z, then the complex gradient matrix is given by ([2]) 4.1.1 ∇f (Z) = 2 df (Z) dZ∗ = ∂f(Z)+i∂f(Z). ∂RZ ∂IZ These expressions can be used for gradient descent algorithms. The Chain Rule for complex numbers (233) The chain rule is a little more complicated when the function of a complex u = f(x) is non-analytic. For a non-analytic function, the following chain rule can be applied ([7]) ∂g(u) ∂g ∂u ∂g ∂u∗ ∂x = ∂u∂x+∂u∗ ∂x (234) = ∂g∂u+􏰆∂g∗􏰇∗∂u∗ ∂u ∂x ∂u ∂x Notice, if the function is analytic, the second term reduces to zero, and the func- tion is reduced to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way: ∂g(U) T ∂g(U) T ∗ ∂g(U)=Tr(( ∂U ) ∂U)+Tr((∂U∗ ) ∂U). (235) ∂X ∂X ∂X 4.1.2 Complex Derivatives of Traces If the derivatives involve complex numbers, the conjugate transpose is often in- volved. The most useful way to show complex derivative is to show the derivative with respect to the real and the imaginary part separately. An easy example is: ∂Tr(X∗) = ∂Tr(XH) = I (236) ∂RX ∂RX i∂Tr(X∗)=i∂Tr(XH) = I (237) ∂IX ∂IX Since the two results have the same sign, the conjugate complex derivative (230) should be used. ∂Tr(X) = ∂Tr(XT ) = I (238) ∂RX ∂RX i∂Tr(X)=i∂Tr(XT) = −I (239) ∂IX ∂IX Here, the two results have different signs, and the generalized complex derivative (229) should be used. Hereby, it can be seen that (100) holds even if X is a complex number. 
(240) (241) ∂IX Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 25 ∂Tr(AXH) ∂RX = A i∂Tr(AXH) = A 4.2 Higher order and non-linear derivatives 4 COMPLEX MATRICES (242) (243) (244) (245) (246) (247) ∂ Tr(AX∗ ) ∂RX = AT i∂Tr(AX∗) = AT ∂IX ∂Tr(XXH) = ∂Tr(XHX) = 2RX ∂RX ∂RX i∂Tr(XXH) = i∂Tr(XHX) = i2IX ∂IX ∂IX By inserting (244) and (245) in (229) and (230), it can be seen that ∂Tr(XXH) = X∗ ∂X ∂Tr(XXH) = X ∂X∗ Since the function Tr(XXH) is a real function of the complex matrix X, the complex gradient matrix (233) is given by ∇Tr(XXH) = 2∂Tr(XXH) = 2X (248) ∂X∗ 4.1.3 Complex Derivative Involving Determinants Here, a calculation example is provided. The objective is to find the derivative of det(XHAX) with respect to X ∈ Cm×n. The derivative is found with respect to the real part and the imaginary part of X, by use of (42) and (37), det(XH AX) can be calculated as (see App. B.1.4 for details) ∂ det(XHAX) = 1􏰆∂ det(XHAX) − i∂ det(XHAX)􏰇 ∂X 2 ∂RX ∂IX = det(XH AX)􏰁(XH AX)−1 XH A􏰂T and the complex conjugate derivative yields (249) (250) (251) (252) ∂ det(XHAX) = ∂X∗ 1􏰆∂ det(XHAX) + i∂ det(XHAX)􏰇 2 ∂RX ∂IX det(XH AX)AX(XH AX)−1 4.2 Higher order and non-linear derivatives ∂ (Ax)H(Ax) = ∂ xHAHAx ∂x (Bx)H (Bx) ∂x xH BH Bx = 2 AHAx −2xHAHAxBHBx xH BBx (xH BH Bx)2 = Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 26 4.3 Inverse of complex sum 4 COMPLEX MATRICES 4.3 Inverse of complex sum Given real matrices A, B find the inverse of the complex sum A + iB. Form the auxiliary matrices E = A+tB (253) F = B−tA, (254) and find a value of t such that E−1 exists. Then (A + iB)−1 = (1 − it)(E + iF)−1 (255) = (1 − it)((E + FE−1 F)−1 − i(E + FE−1 F)−1 FE−1 )(256) = (1 − it)(E + FE−1 F)−1 (I − iFE−1 ) = (E + FE−1F)−1((I − tFE−1) − i(tI + FE−1)) = (E + FE−1 F)−1 (I − tFE−1 ) −i(E + FE−1F)−1(tI + FE−1) (257) (258) (259) Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 27 5 5.1 5.1.1 5 SOLUTIONS AND DECOMPOSITIONS Solutions and Decompositions Solutions to linear equations Simple Linear Regression Assume we have data (xn , yn ) for n = 1, ..., N and are seeking the parameters a, b ∈ R such that yi ∼= axi + b. With a least squares error function, the optimal values for a, b can be expressed using the notation x = (x1,...,xN)T y = (y1,...,yN)T 1 = (1,...,1)T ∈ RN×1 and as Rxx =xTx Rx1 =xT1 R11 =1T1 Ryx =yTx Ry1 =yT1 􏰊a􏰋 􏰊Rxx Rx1 􏰋−1􏰊Rx,y 􏰋 b=RRR (260) x111 y1 Existence in Linear Systems 5.1.2 Assume A is n × m and consider the linear system Ax = b Construct the augmented matrix B = [A b] then (261) Condition rank(A) = rank(B) = m rank(A) = rank(B) < m rank(A) < rank(B) 5.1.3 Standard Square Assume A is square and invertible, then Solution Unique solution x Many solutions x No solutions x x = A−1b AssumeAisn×nbutofrankr m (tall) and rank(A) = m, then
xi = det(B) / det(A)                                         (264)

Ax = b   ⇒   x = (AT A)−1 AT b = A+ b                        (265)

that is, if there exists a solution x at all! If there is no solution, the following can be useful:

Ax = b   ⇒   xmin = A+ b                                     (266)

Now xmin is the vector x which minimizes ||Ax − b||2, i.e. the vector which is "least wrong". The matrix A+ is the pseudo-inverse of A. See [3].
5.1.7 Under-determined Rectangular
Assume A is n × m and n < m ("broad") and rank(A) = n.

Ax = b   ⇒   xmin = AT(AAT)−1 b                              (267)

The equation has many solutions x. But xmin is the solution which minimizes ||Ax − b||2 and is also the solution with the smallest norm ||x||2. The same holds for a matrix version: Assume A is n×m, X is m×n and B is n×n, then

AX = B   ⇒   Xmin = A+ B                                     (268)

The equation has many solutions X. But Xmin is the solution which minimizes ||AX − B||2 and is also the solution with the smallest norm ||X||2. See [3].
Similar but different: Assume A is square n × n and the matrices B0, B1 are n×N, where N > n. Then, if B0 has maximal rank,
AB0 = B1 ⇒ Amin = B1BT0 (B0BT0 )−1 (269)
where Amin denotes the matrix which is optimal in a least squares sense. An interpretation is that A is the linear approximation which maps the column vectors of B0 into the column vectors of B1.
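The over- and under-determined cases above map directly to the pseudo-inverse. Below is an illustrative NumPy sketch (not from the original text; shapes and seed are arbitrary) for (265)-(267):

```python
import numpy as np

rng = np.random.default_rng(7)

# Over-determined ("tall"): x = A^+ b minimizes ||Ax - b||_2, cf. (265)-(266)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x = np.linalg.pinv(A) @ b
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
assert np.allclose(x, np.linalg.inv(A.T @ A) @ A.T @ b)   # full column rank

# Under-determined ("broad"): x = A^T (A A^T)^{-1} b is the minimum-norm solution, cf. (267)
A = rng.standard_normal((3, 8))
b = rng.standard_normal(3)
x = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
```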
5.1.8 Linear form and zeros

Ax = 0,  ∀x   ⇒   A = 0                                      (270)

5.1.9 Square form and zeros

If A is symmetric, then

xT A x = 0,  ∀x   ⇒   A = 0                                  (271)

5.1.10 The Lyapunov Equation
AX+XB = C (272)
vec(X) = (I ⊗ A + BT ⊗ I)−1 vec(C)                           (273)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
5.1.11 Encapsulating Sum
Σn An X Bn = C                                               (274)

vec(X) = ( Σn BnT ⊗ An )−1 vec(C)                            (275)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
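A small sketch of (272)-(273), assuming the column-stacking vec of Sec. 10.2 (hence the Fortran-order reshapes below); not part of the original text, with arbitrary test matrices:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
A, B, C = rng.standard_normal((3, n, n))

vec = lambda M: M.reshape(-1, order="F")       # stack columns
unvec = lambda v: v.reshape(n, n, order="F")

# Solve the Lyapunov-type equation AX + XB = C via (273)
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
X = unvec(np.linalg.solve(K, vec(C)))
assert np.allclose(A @ X + X @ B, C)
```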
5.2 Eigenvalues and Eigenvectors
5.2.1 Definition
The eigenvectors vi and eigenvalues λi are the ones satisfying
Avi = λivi
(276)
5.2.2 Decompositions
For matrices A with as many distinct eigenvalues as dimensions, the following
holds, where the columns of V are the eigenvectors and (D)ij = δijλi,
AV = VD (277)
For defective matrices A, which are matrices that have fewer distinct eigenvalues than dimensions, the following decomposition, called the Jordan canonical form, holds

AV = VJ                                                      (278)

where J is a block diagonal matrix with the blocks Ji = λiI + N. The matrices Ji have dimensionality equal to the number of identical eigenvalues λi, and N is a square matrix of the same size with 1 on the super diagonal and zero elsewhere.
It also holds that for all matrices A there exists matrices V and R such that
AV = VR
where R is upper triangular with the eigenvalues λi on its diagonal.
5.2.3 General Properties
Assume that A ∈ Rn×m and B ∈ Rm×n,
eig(AB) = eig(BA)
rank(A) = r ⇒ At most r non-zero λi
(279)
(280) (281)

5.2.4 Symmetric

Assume A is symmetric, then

VVT = I                 (i.e. V is orthogonal)               (282)
λi ∈ R                  (i.e. λi is real)                    (283)
Tr(A^p) = Σi λi^p                                            (284)
eig(I + cA) = 1 + cλi                                        (285)
eig(A − cI) = λi − c                                         (286)
eig(A−1) = λi^(−1)                                           (287)

For a symmetric, positive matrix A,

eig(AT A) = eig(AAT) = eig(A) ◦ eig(A)                       (288)

5.2.5 Characteristic polynomial

The characteristic polynomial for the matrix A is

0 = det(A − λI)                                              (289)
  = λ^n − g1 λ^(n−1) + g2 λ^(n−2) − ... + (−1)^n gn          (290)
Note that the coefficients gj for j = 1, …, n are the n invariants under rotation of A. Thus, gj is the sum of the determinants of all the sub-matrices of A taken j rows and columns at a time. That is, g1 is the trace of A, and g2 is the sum of the determinants of the n(n − 1)/2 sub-matrices that can be formed from A by deleting all but two rows and columns, and so on – see [17].
5.3 Singular Value Decomposition
Any n × m matrix A can be written as
A = UDVT ,
(291)
(292)
􏰃A􏰄=􏰃V􏰄􏰃D􏰄􏰃VT 􏰄,
where D is diagonal with the eigenvalues of A, and V is orthogonal and the
eigenvectors of A.
5.3.2 Square decomposed into squares
Assume A ∈ Rn×n. Then
􏰃A􏰄=􏰃V􏰄􏰃D􏰄􏰃UT 􏰄, (294)
where D is diagonal with the square root of the eigenvalues of AAT , V is the eigenvectors of AAT and UT is the eigenvectors of AT A.
where
eigenvectors of AAT 􏰀diag(eig(AAT )) eigenvectors of ATA
n × n n × m m×m
U = D = V =
5.3.1 Symmetric Square decomposed into squares Assume A to be n × n and symmetric. Then
(293)

5.3.3 Square decomposed into rectangular
Assume V∗D∗UT∗ = 0 then we can expand the SVD of A into
􏰃 􏰄􏰃 􏰄􏰊D0􏰋􏰊UT􏰋 A=VV∗0DUT, (295)
∗∗
where the SVD of A is A = VDUT .
5.3.4 Rectangular decomposition I AssumeAisn×m,Visn×n,Disn×n,UT isn×m
􏰃 A 􏰄=􏰃 V 􏰄􏰃 D 􏰄􏰃 UT 􏰄, (296) where D is diagonal with the square root of the eigenvalues of AAT , V is the
eigenvectors of AAT and UT is the eigenvectors of AT A. 5.3.5 Rectangular decomposition II
AssumeAisn×m,Visn×m,Dism×m,UT ism×m 
􏰃 A 􏰄=􏰃 V 􏰄 D  UT  5.3.6 Rectangular decomposition III
AssumeAisn×m,Visn×n,Disn×m,UT ism×m 
(297)
􏰃 A 􏰄=􏰃 V 􏰄􏰃 D 􏰄 UT ,
where D is diagonal with the square root of the eigenvalues of AAT , V is the
eigenvectors of AAT and UT is the eigenvectors of AT A.
5.4 Triangular Decomposition
5.5 LU decomposition
Assume A is a square matrix with non-zero leading principal minors, then
A = LU (299) where L is a unique unit lower triangular matrix and U is a unique upper
triangular matrix.
5.5.1 Cholesky-decomposition
Assume A is a symmetric positive definite square matrix, then
A = UT U = LLT , (300) where U is a unique upper triangular matrix and L is a lower triangular matrix.
(298)

5.6 LDM decomposition
Assume A is a square matrix with non-zero leading principal minors1, then
A = LDMT (301)
where L, M are unique unit lower triangular matrices and D is a unique diagonal matrix.
5.7 LDL decompositions
The LDL decomposition are special cases of the LDM decomposition. Assume A is a non-singular symmetric definite square matrix, then
A=LDLT =LTDL (302) where L is a unit lower triangular matrix and D is a diagonal matrix. If A is
also positive definite, then D has strictly positive diagonal entries.
1If the matrix that corresponds to a principal minor is a quadratic upper-left part of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n times n square matrix, there are n leading principal minors. [31]

or alternatively as
(M)ij = ⟨(xi − ⟨xi⟩)(xj − ⟨xj⟩)⟩ M = ⟨(x − m)(x − m)T ⟩
6.1.3 Third moments
The matrix of third centralized moments – in some contexts referred coskewness – is defined using the notation
as
m(3) = ⟨(xi − ⟨xi⟩)(xj − ⟨xj⟩)(xk − ⟨xk⟩)⟩ ijk
􏰗 (3) (3) (3)􏰘 M3 = m::1 m::2 …m::n
6 Statistics and Probability 6.1 Definition of Moments
Assume x ∈ Rn×1 is a random variable 6.1.1 Mean
The vector of means, m, is defined by
(m)i = ⟨xi⟩
6.1.2 Covariance
The matrix of covariance M is defined by
(303)
(304) (305)
to as
(306)
(307)
where ’:’ denotes all elements within the given index. M3 can alternatively be expressed as
M3 =⟨(x−m)(x−m)T ⊗(x−m)T⟩ 6.1.4 Fourth moments
The matrix of fourth centralized moments – in some contexts referred cokurtosis – is defined using the notation
(308)
to as (309)
(310)
(311)
m(4) = ⟨(xi − ⟨xi⟩)(xj − ⟨xj⟩)(xk − ⟨xk⟩)(xl − ⟨xl⟩)⟩ ijkl
M = 􏰗m(4) m(4) …m(4) |m(4) m(4) …m(4) |…|m(4) m(4) …m(4) 􏰘 4 ::11 ::21 ::n1 ::12 ::22 ::n2 ::1n ::2n ::nn
or alternatively as
M4 =⟨(x−m)(x−m)T ⊗(x−m)T ⊗(x−m)T⟩
as

6.2 Expectation of Linear Combinations
6.2.1 Linear Forms
Assume X and x to be a matrix and a vector of random variables. Then (see
See [26])
E[Ax + b] = E[Ax] = E[x+b] =
6.2.2 Quadratic Forms
Assume A is symmetric, c = E[x] and coordinates xi are independent, have the and denote a = diag(A). Then (See [26])
Am+b Am m+b
E[AXB + C] = AE[X]B + C Var[Ax] = AVar[x]AT
(312) (313) (314)
(315) (316) (317)
Cov[Ax, By] = ACov[x, y]BT Assume x to be a stochastic vector with mean m, then (see [7])
E[xT Ax] = Tr(AΣ) + cT Ac
Var[xT Ax] = 2μ2Tr(A2) + 4μ2cT A2c + 4μ3cT Aa + (μ4 − 3μ2)aT a
Also, assume x to be a stochastic vector with mean m, and covariance M. (see [7])
(318) (319)
Then
(320) (321) (322) (323) (324) (325)
(326) (327) (328) (329) (330)
E[(Ax + a)(Bx + b)T ] E[xxT ] E [xaT x] E [xT axT ] E [(Ax)(Ax)T ] E[(x+a)(x+a)T]
E[(Ax+a)T(Bx+b)] = E[xT x] = E[xT Ax] = E[(Ax)T (Ax)] = E[(x+a)T(x+a)] =
See [7].
= = = = = =
AMBT + (Am + a)(Bm + b)T M+mmT
(M+mmT)a
aT(M+mmT)
A(M + mmT )AT M+(m+a)(m+a)T
Tr(AMBT)+(Am+a)T(Bm+b) Tr(M)+mTm
Tr(AM) + mT Am
Tr(AMAT ) + (Am)T (Am) Tr(M)+(m+a)T(m+a)
Σ = Var[x]. Assume also that all same central moments μ1,μ2,μ3,μ4

6.2.3 Cubic Forms
Assume x to be a stochastic vector with independent coordinates, mean m, covariance M and central moments v3 = E[(x − m)3]. Then (see [7])
E[(Ax+a)(Bx+b)T(Cx+c)] =
E[xxT x] = E[(Ax+a)(Ax+a)T(Ax+a)] =
E[(Ax+a)bT(Cx+c)(Dx+d)T] =
6.3 Weighted Scalar Variable
Adiag(BT C)v3
+Tr(BMCT )(Am + a)
+AMCT (Bm + b)
+(AMBT +(Am+a)(Bm+b)T)(Cm+c)
v3 +2Mm+(Tr(M)+mTm)m
Adiag(AT A)v3
+[2AMAT +(Ax+a)(Ax+a)T](Am+a) +Tr(AMAT )(Am + a)
(Ax+a)bT(CMDT +(Cm+c)(Dm+d)T) +(AMCT +(Am+a)(Cm+c)T)b(Dm+d)T +bT(Cm+c)(AMDT −(Am+a)(Dm+d)T)
Assume x ∈ Rn×1 is a random variable, w ∈ Rn×1 is a vector of constants and y is the linear combination y = wT x. Assume further that m, M2, M3, M4 denotes the mean, covariance, and central third and fourth moment matrix of the variable x. Then it holds that
⟨y⟩ = wT m ⟨(y − ⟨y⟩)2⟩ = wT M2w
⟨(y−⟨y⟩)3⟩ = wTM3w⊗w ⟨(y−⟨y⟩)4⟩ = wTM4w⊗w⊗w
(331) (332) (333) (334)

7 Multivariate Distributions 7.1 Cauchy
The density function for a Cauchy distributed vector t ∈ RP×1, is given by
Γ(1+P ) det(Σ)−1/2 p(t|μ, Σ) = π−P/2 2
(335)
Γ(1/2) 􏰃1+(t−μ)TΣ−1(t−μ)􏰄(1+P)/2
where μ is the location, Σ is positive definite, and Γ denotes the gamma func-
tion. The Cauchy distribution is a special case of the Student-t distribution.
7.2 Dirichlet
The Dirichlet distribution is a kind of “inverse” distribution compared to the multinomial distribution on the bounded continuous variate x = [x1,…,xP] [16, p. 44]
7.3 Normal
The normal distribution is also known as a Gaussian distribution. See sec. 8.
7.4 Normal-Inverse Gamma 7.5 Gaussian
See sec. 8.
7.6 Multinomial
If the vector n contains counts, i.e. (n)i ∈ 0, 1, 2, ..., then the discrete multinomial distribution for n is given by
(336)
(337)
where μ is the location, the scale matrix Σ is symmetric, positive definite, ν is the degrees of freedom, and Γ denotes the gamma function. For ν = 1, the Student-t distribution becomes the Cauchy distribution (see sec 7.1).
􏰆􏰒P 􏰇P
Γ pαp 􏰕αp−1
p(x|α) = 􏰓P Γ(αp) xp pp
n!d d P(n|a,n)= 􏰕ani, 􏰔ni =n
n1!…nd! i ii
where ai are probabilities, i.e. 0 ≤ ai ≤ 1 and 􏰒i ai = 1. 7.7 Student’s t
The density of a Student-t distributed vector t ∈ RP×1, is given by
Γ(ν+P ) det(Σ)−1/2 p(t|μ, Σ, ν) = (πν)−P/2 2
Γ(ν/2) 􏰃1 + ν−1(t − μ)TΣ−1(t − μ)􏰄(ν+P )/2

7.8 Wishart
7.7.1 Mean
7.7.2 Variance
7.7.3 Mode
7
E(t) = μ,
cov(t) = ν ν−2
MULTIVARIATE DISTRIBUTIONS
The notion mode means the position of the most probable value,
mode(t) = μ
7.7.4 Full Matrix Version
If instead of a vector t ∈ RP×1 one has a matrix T ∈ RP×N, then the Student-t
distribution for T is
p(T|M,Ω,Σ,ν) = π−NP/2􏰕P Γ[(ν+P−p+1)/2]×
p(M|Σ,m) =
7.8.1 Mean
1
2mP/2πP(P−1)/4 􏰓Pp Γ[12(m+1−p)]
×
p=1
Σ,
ν > 1
ν > 2
(338)
(339)
(340)
Γ [(ν − p + 1)/2] ν det(Ω)−ν/2 det(Σ)−N/2 ×
det 􏰃Ω−1 + (T − M)Σ−1(T − M)T􏰄−(ν+P )/2(341) where M is the location, Ω is the rescaling matrix, Σ is positive definite, ν is
the degrees of freedom, and Γ denotes the gamma function.
7.8 Wishart
The central Wishart distribution for M ∈ RP ×P , M is positive definite, where m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8, section 2.5],[11]
det(Σ)−m/2 det(M)(m−P −1)/2 × 􏰊1 −1 􏰋
exp − 2 Tr(Σ M) E(M) = mΣ
(342)
(343)

7.9 Wishart, Inverse
The (normal) Inverse Wishart distribution for M ∈ RP ×P , M is positive defi- nite, where m can be regarded as a degree of freedom parameter [11]
p(M|Σ,m) =
7.9.1 Mean
1
2mP/2πP(P−1)/4 􏰓Pp Γ[12(m+1−p)]
×
det(Σ)m/2 det(M)−(m−P −1)/2 × 􏰊1 −1􏰋
exp − 2 Tr(ΣM ) E(M) = Σ 1
(344)
(345)
m−P−1

p(x) = (det(2πΣ))^(−1/2) exp[ −(1/2)(x − m)T Σ−1 (x − m) ]   (346)
Note that if x is d-dimensional, then det(2πΣ) = (2π)d det(Σ). Integration and normalization
􏰖􏰊1 T−1 􏰋 􏰀
exp −2(x−m) Σ (x−m) dx = det(2πΣ)
􏰖 􏰊1T−1 T−1􏰋 􏰀 􏰊1T−1􏰋 exp−2xΣ x+mΣ xdx = det(2πΣ)exp2mΣ m
􏰖􏰊1TT􏰋􏰀 􏰊1T−T􏰋 exp −2x Ax+c x dx = det(2πA−1)exp 2c A c
8 GAUSSIANS
8 Gaussians
8.1 Basics
8.1.1 Density and normalization The density of x ∼ N(m,Σ) is
If X = [x1x2…xn] and C = [c1c2…cn], then
􏰖􏰊1T T􏰋􏰀 −1n􏰊1T−1􏰋
exp −2Tr(X AX) + Tr(C X) dX = det(2πA ) exp 2Tr(C The derivatives of the density are
A C)
(347) (348)
(349)
(350) (351)
x= x
then
∂ p(x) ∂x
= −p(x)Σ−1(x − m)
∂2p = p(x)􏰆Σ−1(x − m)(x − m)T Σ−1 − Σ−1􏰇
∂x∂xT
8.1.2 Marginal Distribution
Assume x ∼ Nx(μ, Σ) where
􏰊 xa 􏰋 􏰊 μa 􏰋 􏰊 Σa Σc 􏰋
x= x
μ= μ Σ= ΣT Σ bbcb
p(xa) = Nxa (μa, Σa)
p(xb) = Nxb (μb, Σb)
8.1.3 Conditional Distribution Assume x ∼ Nx(μ, Σ) where
􏰊 xa 􏰋 􏰊 μa 􏰋 􏰊 Σa
Σc 􏰋 μ= μ Σ= ΣT Σ
(352)
bbcb

8.1 Basics
8 GAUSSIANS
􏰙μˆ =μ+ΣΣ−1(x−μ)
a a cb b b(353)
Σˆ a = Σ a − Σ c Σ − 1 Σ Tc b
(355)
(356)
(357)
then
p(x|x)=N(μˆ,Σˆ) a b x a a a
trix, see 9.1.5 for details.
8.1.4 Linear combination
Assume x ∼ N(mx,Σx) and y ∼ N(my,Σy) then
Ax+By+c∼N(Amx +Bmy +c,AΣxAT +BΣyBT) 8.1.5 Rearranging Means
􏰙 μˆ = μ + Σ T Σ − 1 ( x − μ )
b b c a a a (354)
p(xb|xa) = Nxb (μˆb, Σˆ b)
Note, that the covariance matrices are the Schur complement of the block ma-
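The conditional formulas of Sec. 8.1.3 ((353)-(354)) are among the most frequently used results here. Below is an illustrative NumPy/SciPy sketch (not from the original text; the dimensions, seed and use of scipy.stats for the cross-check are choices made here) that forms the conditional moments of xa given xb and checks them against p(xa, xb)/p(xb):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)
da, db = 2, 3
L = rng.standard_normal((da + db, da + db))
S = L @ L.T + np.eye(da + db)                    # joint covariance, positive definite
mu = rng.standard_normal(da + db)
Sa, Sb, Sc = S[:da, :da], S[da:, da:], S[:da, da:]
mua, mub = mu[:da], mu[da:]
xb = rng.standard_normal(db)

# (353): conditional mean and covariance of x_a given x_b
mu_cond = mua + Sc @ np.linalg.solve(Sb, xb - mub)
S_cond = Sa - Sc @ np.linalg.solve(Sb, Sc.T)

# Cross-check: p(xa | xb) = p(xa, xb) / p(xb) at a test point
xa = rng.standard_normal(da)
lhs = multivariate_normal(mu_cond, S_cond).logpdf(xa)
rhs = (multivariate_normal(mu, S).logpdf(np.concatenate([xa, xb]))
       - multivariate_normal(mub, Sb).logpdf(xb))
assert np.isclose(lhs, rhs)
```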
􏰀det(2π(AT Σ−1A)−1) −1 NAx[m, Σ] = 􏰀det(2πΣ) Nx[A
m, (A
T
Σ
−1 −1 A) ]
Σˆ = Σ − Σ T Σ − 1 Σ bbcac
If A is square and invertible, it simplifies to
NAx[m, Σ] = 1 Nx[A−1m, (AT Σ−1A)−1]
| det(A)| 8.1.6 Rearranging into squared form
If A is symmetric, then
−12xTAx+bTx = −21(x−A−1b)TA(x−A−1b)+ 21bTA−1b
−12Tr(XT AX) + Tr(BT X) = −21Tr[(X − A−1B)T A(X − A−1B)] + 12Tr(BT A−1B) 8.1.7 Sum of two squared forms
(358) (359) (360)
(361) (362)
In vector formulation (assuming Σ1 , Σ2 are symmetric) −1(x−m )TΣ−1(x−m )
2111 −1(x−m )TΣ−1(x−m )
2222
= −1(x−m )TΣ−1(x−m )+C
2ccc m = (Σ−1 + Σ−1)−1(Σ−1m + Σ−1m )
Σ−1 = Σ−1+Σ−1 c12
c121122
C = 1(mT Σ−1 + mT Σ−1)(Σ−1 + Σ−1)−1(Σ−1m + Σ−1m )(363)
21122121122 −1􏰆mT Σ−1m + mT Σ−1m 􏰇 (364)
2111222

8.2 Moments
8 GAUSSIANS
(365) (366) (367)
(368) (369)
C = 1Tr􏰗(Σ−1M + Σ−1M )T (Σ−1 + Σ−1)−1(Σ−1M + Σ−1M )􏰘 21122121122
In a trace formulation (assuming Σ1,Σ2 are symmetric) −1Tr((X−M )TΣ−1(X−M ))
2111 −1Tr((X−M )TΣ−1(X−M ))
2222
= −1Tr[(X−M )TΣ−1(X−M )]+C
2ccc M = (Σ−1 + Σ−1)−1(Σ−1M + Σ−1M )
Σ−1 = Σ−1+Σ−1 c12
c121122
−1Tr(MT Σ−1M + MT Σ−1M ) 2111222
8.1.8 Product of gaussian densities Let Nx(m,Σ) denote a density of x, then
Nx(m1, Σ1) · Nx(m2, Σ2) = ccNx(mc, Σc) cc = Nm1 (m2, (Σ1 + Σ2))
(370)
(371)
1 􏰊1 T −1 􏰋
= 􏰀det(2π(Σ1 + Σ2)) exp −2(m1 − m2) (Σ1 + Σ2) m = (Σ−1 + Σ−1)−1(Σ−1m + Σ−1m )
(m1 − m2)
c121122 Σ = (Σ−1+Σ−1)−1
c12
but note that the product is not normalized as a density of x.
8.2 Moments
8.2.1 Mean and covariance of linear forms First and second moments. Assume x ∼ N (m, Σ)
E(x) = m
Cov(x, x) = Var(x) = Σ = E(xxT ) − E(x)E(xT ) = E(xxT ) − mmT As for any other distribution is holds for gaussians that
E [Ax] = AE [x] Var[Ax] = AVar[x]AT
Cov[Ax, By] = ACov[x, y]BT
(372)
(373)
(374) (375) (376)

8.2 Moments
8.2.2 Mean and variance of square forms
Mean and variance of square forms: Assume x ∼ N (m, Σ)
8 GAUSSIANS
(377) (378)
(379) (380)
(381)
(382)
E[(x−m′)TA(x−m′)] = If Σ = σ2I and A is symmetric, then
+mT(A+AT)Σ(A+AT)m (m−m′)TA(m−m′)+Tr(AΣ)
E(xxT ) E[xT Ax] Var(xT Ax)
= Σ+mmT
= Tr(AΣ) + mT Am
= Tr[AΣ(A + AT )Σ] + …
Var(xT Ax) = 2σ4Tr(A2) + 4σ2mT A2m Assume x ∼ N(0,σ2I) and A and B to be symmetric, then
Cov(xT Ax, xT Bx) = 2σ4Tr(AB)
8.2.3 Cubic forms
Assume x to be a stochastic vector with independent coordinates, mean m and
covariance M
E[xbTxxT] = mbT(M+mmT)+(M+mmT)bmT
8.2.4
+bT m(M − mmT ) Mean of Quartic Forms
(383)
E[xxTxxT] = E[xxTAxxT] =
2(Σ+mmT)2 +mTm(Σ−mmT)
+Tr(Σ)(Σ + mmT ) (Σ+mmT)(A+AT)(Σ+mmT) +mTAm(Σ−mmT)+Tr[AΣ](Σ+mmT) 2Tr(Σ2) + 4mT Σm + (Tr(Σ) + mT m)2 Tr[AΣ(B+BT)Σ]+mT(A+AT)Σ(B+BT)m +(Tr(AΣ) + mT Am)(Tr(BΣ) + mT Bm)
E[xT xxT x] = E[xTAxxTBx] =
E[aT xbT xcT xdT x]
= (aT(Σ+mmT)b)(cT(Σ+mmT)d)
+(aT (Σ + mmT )c)(bT (Σ + mmT )d) +(aT(Σ+mmT)d)(bT(Σ+mmT)c)−2aTmbTmcTmdTm
E[(Ax+a)(Bx+b)T(Cx+c)(Dx+d)T]
= [AΣBT +(Am+a)(Bm+b)T][CΣDT +(Cm+c)(Dm+d)T]
+[AΣCT +(Am+a)(Cm+c)T][BΣDT +(Bm+b)(Dm+d)T] +(Bm+b)T(Cm+c)[AΣDT −(Am+a)(Dm+d)T] +Tr(BΣCT)[AΣDT +(Am+a)(Dm+d)T]

E[(Ax+a)T(Bx+b)(Cx+c)T(Dx+d)] = Tr[AΣ(CT D + DT C)ΣBT ]
+[(Am+a)TB+(Bm+b)TA]Σ[CT(Dm+d)+DT(Cm+c)] +[Tr(AΣBT ) + (Am + a)T (Bm + b)][Tr(CΣDT ) + (Cm + c)T (Dm + d)]
See [7].
8.2.5 Moments
E[x] = 􏰔ρkmk
(384) (385)
Cov(x) =
8.3 Miscellaneous
k
􏰔 􏰔 ρkρk′ (Σk + mkmTk − mkmTk′ )
k k′
8.3.1 Whitening Assume x ∼ N (m, Σ) then
z = Σ−1/2(x − m) ∼ N (0, I)
Conversely having z ∼ N (0, I) one can generate data x ∼ N (m, Σ) by setting
x = Σ1/2z + m ∼ N (m, Σ) (387) Note that Σ1/2 means the matrix which fulfils Σ1/2Σ1/2 = Σ, and that it exists
and is unique since Σ is positive definite.
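A short sketch of the whitening step (386), using the symmetric square root built from the eigendecomposition; illustrative only (the construction of Σ and the seed are arbitrary), not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(10)
d = 4
L = rng.standard_normal((d, d))
Sigma = L @ L.T + np.eye(d)                      # positive definite

# Symmetric square root: Sigma^{1/2} Sigma^{1/2} = Sigma
w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T
Sigma_half_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
assert np.allclose(Sigma_half @ Sigma_half, Sigma)

# (386): z = Sigma^{-1/2}(x - m) has identity covariance (checked analytically)
assert np.allclose(Sigma_half_inv @ Sigma @ Sigma_half_inv, np.eye(d))
```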
8.3.2 The Chi-Square connection
Assume x ∼ N (m, Σ) and x to be n dimensional, then
z = (x − m)T Σ−1(x − m) ∼ χ2n
where χ2n denotes the Chi square distribution with n degrees of freedom.
8.3.3 Entropy
Entropy of a D-dimensional gaussian

H(x) = − ∫ N(m, Σ) ln N(m, Σ) dx = ln sqrt(det(2πΣ)) + D/2   (389)
8.4 Mixture of Gaussians
(388)
(389)
(386)
8.4.1 Density
The variable x is distributed as a mixture of gaussians if it has the density
􏰔K 1 􏰊1 T−1 􏰋
ρk 􏰀det(2πΣk) exp −2(x − mk) Σk (x − mk) (390) where ρk sum to 1 and the Σk all are positive definite.
p(x) =
k=1

8.4.2 Derivatives
Defining p(s) = 􏰒k ρkNs(μk,Σk) one get
∂ ln p(s) ρjNs(μj,Σj) ∂
∂ρ = 􏰒 ρ N (μ ,Σ )∂ρ ln[ρjNs(μj,Σj)]
j kkskkj
=
∂ ln p(s)
∂μ =
8 GAUSSIANS
(391)
(392)
(393)
(394)
(395)
ρjNs(μj,Σj) 1 􏰒ρN(μ,Σ)ρ
kkskkj
ρjNs(μj,Σj) ∂
􏰒 ρ N (μ ,Σ )∂μ ln[ρjNs(μj,Σj)]
j kkskkj ρjNs(μj,Σj) 􏰃 −1 􏰄
= 􏰒k ρkNs(μk,Σk) Σj (s − μj)
∂lnp(s) ρjNs(μj,Σj) ∂
∂Σ = 􏰒 ρ N (μ ,Σ )∂Σ ln[ρjNs(μj,Σj)]
j kkskkj ρjNs(μj,Σj) 1􏰃 −1 −1
T −1􏰄
= 􏰒k ρkNs(μk,Σk) 2 −Σj + Σj (s − μj)(s − μj) Σj (396)
But ρk and Σk needs to be constrained.

9 Special Matrices 9.1 Block matrices
Let Aij denote the ijth block of A. 9.1.1 Multiplication
Assuming the dimensions of the blocks matches we have
􏰊 A11 A12 􏰋􏰊 B11 B12 􏰋 􏰊 A11B11 +A12B21 A11B12 +A12B22 􏰋 AABB=AB+ABAB+AB
as
9.1.4
C = A −A A−1A
2 22211112
􏰊AA􏰋−1􏰊 C−1 −A−1AC−1􏰋 1112=1 11122
=
􏰊 A−1 + A−1A C−1A 11 11 12 2
−A−1A C−1 22 21 1
A−1 21 11
−C−1A 1
A−1 12 22
􏰋
A−1 12 22
A21 A22 −C−1 A21 A−1 C−1 2 11 2
9 SPECIAL MATRICES
9.1.2
The Determinant
21 22 21 22 21 11 22 21
21 12
22 22
The determinant can be expressed as by the use of C = A −A A−1A
(397) (398)
(399) (400)
as
9.1.3
􏰈􏰊 A11
A12 􏰋􏰉
A = det(A22) · det(C1) = det(A11) · det(C2)
det A The Inverse
1 11122221
C = A −A A−1A
2 22211112
21
22
The inverse can be expressed as by the use of
C = A −A A−1A
1 11122221
A−1 + A−1A
22 22 21 1
C−1A
Block diagonal
For block diagonal matrices we have
􏰊A110􏰋−1 􏰊(A11)−1 0􏰋
(401) (402)
det
􏰈􏰊A11 0􏰋􏰉
0 A
= det(A11) · det(A22)
0A = 0(A)−1 22 22
22

9.1.5 Schur complement
Regard the matrix
􏰊 A11 A21
A12 􏰋−1 A22
􏰊A11 A12􏰋 A21 A22
The Schur complement of block A11 of the matrix above is the matrix (denoted
C2 in the text above)
The Schur complement of block A22 of the matrix above is the matrix (denoted
A −A A−1A 22 211112
C1 in the text above)
Using the Schur complement, one can rewrite the inverse of a block matrix
A −A A−1A 11 122221
0􏰋􏰊(A −AA−1A)−1 0 􏰋􏰊I −AA−1􏰋 11 122221 1222
􏰊 I
−A−1AI 0 A−10I
=
The Schur complement is useful when solving linear systems of the form
22 21 22
􏰊A11 A12 􏰋􏰊x1 􏰋 􏰊b1 􏰋 AAx=b
21 22 2 2 which has the following equation for x1
(A −A A−1A )x =b −A A−1b 11 12 22 21 1 1 12 22 2
When the appropriate inverses exist, this can be solved for x1, which can then be inserted in the equation for x2 to solve for x2.
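The block determinant and block inverse built from the Schur complement (cf. Sec. 9.1.2-9.1.3 and 9.1.5) can be verified numerically; an illustrative sketch with an arbitrary well-conditioned test matrix (not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(11)
n1, n2 = 3, 2
M = rng.standard_normal((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
A11, A12 = M[:n1, :n1], M[:n1, n1:]
A21, A22 = M[n1:, :n1], M[n1:, n1:]
inv = np.linalg.inv

C1 = A11 - A12 @ inv(A22) @ A21      # Schur complement of A22
C2 = A22 - A21 @ inv(A11) @ A12      # Schur complement of A11

# Block inverse expressed via C1
top = np.hstack([inv(C1), -inv(C1) @ A12 @ inv(A22)])
bot = np.hstack([-inv(A22) @ A21 @ inv(C1),
                 inv(A22) + inv(A22) @ A21 @ inv(C1) @ A12 @ inv(A22)])
assert np.allclose(np.vstack([top, bot]), inv(M))

# Determinant factorizations
assert np.isclose(np.linalg.det(M), np.linalg.det(A22) * np.linalg.det(C1))
assert np.isclose(np.linalg.det(M), np.linalg.det(A11) * np.linalg.det(C2))
```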
9.2 Discrete Fourier Transform Matrix, The
The DFT matrix is an N × N symmetric matrix WN , where the k, nth element is given by
Wkn = e−j2πkn
(403)
N
Thus the discrete Fourier transform (DFT) can be expressed as
n=0
1 N−1
x(n) = 􏰔 X(k)W−kn. (405)
The DFT of the vector x = [x(0), x(1), · · · , x(N − 1)]T can be written in matrix form as

X = WN x,                                                    (406)
N
N−1
X(k) = 􏰔 x(n)Wkn.
(404) Likewise the inverse discrete Fourier transform (IDFT) can be expressed as
NN k=0
N

where X = [X(0), X(1), · · · , X(N − 1)]T. The IDFT is similarly given as x = WN−1 X.
(407)
(408)
(409) (410)
(411)
Some properties of WN exist:
−j 2π IfWN =e N
,then[23]
N
W−1 = 1W∗ NNN
WNW∗N = NI W∗N = WHN
Wm+N/2 =−Wm NN
Notice, the DFT matrix is a Vandermonde Matrix.
The following important relation between the circulant matrix and the dis-
(412)
crete Fourier transform (DFT) exists
TC =W−1(I◦(WNt))WN,
N
where t = [t0,t1,··· ,tn−1]T is the first row of TC.
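A small sketch of the DFT matrix and a couple of the properties quoted above ((403), (406), (408)-(409)); illustrative only, with an arbitrary size and test vector, not part of the original text.

```python
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * k * n / N)              # (W_N)_{kn} = e^{-j 2 pi k n / N}, cf. (403)

x = np.random.default_rng(12).standard_normal(N)

# (406): the DFT as a matrix-vector product agrees with the FFT
assert np.allclose(W @ x, np.fft.fft(x))

# (408)-(409): W_N^{-1} = (1/N) W_N^*  and  W_N W_N^* = N I
assert np.allclose(np.linalg.inv(W), W.conj() / N)
assert np.allclose(W @ W.conj(), N * np.eye(N))
```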
9.3 Hermitian Matrices and skew-Hermitian
A matrix A ∈ Cm×n is called Hermitian if AH = A
For real valued matrices, Hermitian and symmetric matrices are equivalent.
Note that
where B, C are hermitian, then
A = B + iC
A is Hermitian ⇔ xHAx ∈ R, ∀x ∈ Cn×1 A is Hermitian ⇔ eig(A) ∈ R
(413) (414)
A+AH A−AH B= 2 , C= 2i
9.3.1 Skew-Hermitian
A matrix A is called skew-hermitian if
A = −AH
For real valued matrices, skew-Hermitian and skew-symmetric matrices are
equivalent.
A Hermitian ⇔ A skew-Hermitian ⇔ A skew-Hermitian ⇒
iA is skew-hermitian xH Ay = −xH AH y, eig(A) = iλ, λ ∈ R
∀x, y
(415) (416) (417)

9.4 Idempotent Matrices
A matrix A is idempotent if
Idempotent matrices A and B, have the following properties
An I−A
AH
I−AH If AB = BA rank(A) A(I−A)
A+ = A
f(sI+tA) = (I−A)f(s)+Af(s+t)
Note that A − I is not necessarily idempotent. 9.4.1 Nilpotent
=0 (I−A)A = 0
A matrix A is nilpotent if
A nilpotent matrix has the following property:
=
A, forn = 1,2,3,… is idempotent
is idempotent
is idempotent
(418) (419) (420) (421) (422) (423) (424) (425) (426) (427)
(428)
(429)

AA = A
AB = Tr(A)
is idempotent
A2 = 0
f(sI + tA) = If(s) + tAf′(s)
9.4.2 Unipotent
A matrix A is unipotent if
AA = I
A unipotent matrix has the following property:
f(sI+tA) = [(I+A)f(s+t)+(I−A)f(s−t)]/2 9.5 Orthogonal matrices
A square matrix Q is orthogonal if and only if
and then Q has the following properties
• Its eigenvalues are placed on the unit circle.
• Its eigenvectors are unitary, i.e. have length one.
• The inverse of an orthogonal matrix is orthogonal too.

Basic properties for the orthogonal matrix Q
Q−1 = QT Q−T = Q
QQT = I
QTQ = I det(Q) = ±1
9 SPECIAL MATRICES
9.5.1 Ortho-Sym
A matrix Q+ which simultaneously is orthogonal and symmetric is called an
ortho-sym matrix [20]. Hereby
QT+Q+ = I Q+ = QT+
(430) (431)
(432) (433)
The powers of an ortho-sym matrix are given by the following rule k 1 + (−1)k 1 + (−1)k+1
22
9.5.2 Ortho-Skew
A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [20]. Hereby
Q+ = 2 I+ 2 Q+ = 1 + cos(kπ)I + 1 − cos(kπ)Q+
QH− Q− = I
Q− = −QH−
The powers of an ortho-skew matrix are given by the following rule
(434) (435)
(436) (437)
(438)
(439) (440)
k
ik + (−i)k ik − (−i)k
2 I−i 2 Q−
Q− = =
cos(k π2 )I + sin(k π2 )Q−
A square matrix A can always be written as a sum of a symmetric A+ and an
9.5.3 Decomposition antisymmetric matrix A−
A=A+ +A−
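The power rules for ortho-sym and ortho-skew matrices can be verified on the simplest 2 × 2 examples, a symmetric permutation and a 90-degree rotation (the choice of examples below is mine):

```python
import numpy as np

# Ortho-sym example: a symmetric permutation matrix
Qp = np.array([[0., 1.], [1., 0.]])
# Ortho-skew example: rotation by 90 degrees (orthogonal and antisymmetric)
Qm = np.array([[0., 1.], [-1., 0.]])

I = np.eye(2)
for k in range(6):
    # power rule for ortho-sym matrices
    pred_p = (1 + (-1) ** k) / 2 * I + (1 + (-1) ** (k + 1)) / 2 * Qp
    assert np.allclose(np.linalg.matrix_power(Qp, k), pred_p)
    # power rule for ortho-skew matrices
    pred_m = np.cos(k * np.pi / 2) * I + np.sin(k * np.pi / 2) * Qm
    assert np.allclose(np.linalg.matrix_power(Qm, k), pred_m)
```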
9.6 Positive Definite and Semi-definite Matrices

9.6.1 Definitions

A matrix A is positive definite if and only if
\[
x^T A x > 0, \quad \forall x \neq 0
\]
A matrix A is positive semi-definite if and only if
\[
x^T A x \geq 0, \quad \forall x
\]
Note that if A is positive definite, then A is also positive semi-definite.
9.6.2 Eigenvalues

The following holds with respect to the eigenvalues:
\[
A \text{ pos. def.} \iff \operatorname{eig}\!\Bigl(\frac{A+A^H}{2}\Bigr) > 0
\]
\[
A \text{ pos. semi-def.} \iff \operatorname{eig}\!\Bigl(\frac{A+A^H}{2}\Bigr) \geq 0
\]

9.6.3 Trace

The following holds with respect to the trace:
\[
A \text{ pos. def.} \Rightarrow \operatorname{Tr}(A) > 0
\]
\[
A \text{ pos. semi-def.} \Rightarrow \operatorname{Tr}(A) \geq 0
\]

9.6.4 Inverse
If A is positive definite, then A is invertible and $A^{-1}$ is also positive definite.

9.6.5 Diagonal
If A is positive definite, then $A_{ii} > 0$, $\forall i$.

9.6.6 Decomposition I
The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of rank r such that $A = BB^T$.
The matrix A is positive definite ⇔ there exists an invertible matrix B such that $A = BB^T$.

9.6.7 Decomposition II
Assume A is an n × n positive semi-definite matrix, then there exists an n × r matrix B of rank r such that $B^T A B = I$.

9.6.8 Equation with zeros
Assume A is positive semi-definite, then $X^T A X = 0 \Rightarrow AX = 0$.

9.6.9 Rank of product
Assume A is positive definite, then $\operatorname{rank}(BAB^T) = \operatorname{rank}(B)$.

9.6.10 Positive definite property
If A is n × n positive definite and B is r × n of rank r, then $BAB^T$ is positive definite.

9.6.11 Outer Product
If X is n × r, where n ≤ r and rank(X) = n, then $XX^T$ is positive definite.
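A short numerical aside (not from the Cookbook): the eigenvalue test of Section 9.6.2, the Cholesky factorisation as a practical definiteness test, and a few of the consequences above.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B @ B.T + 1e-3 * np.eye(4)      # positive definite by construction (Decomposition I)

# Eigenvalue test: A pos. def.  <=>  eig((A + A^T)/2) > 0
assert np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0)

# A practical test: the Cholesky factorisation exists iff A is positive definite
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Consequences: positive diagonal, positive trace, positive definite inverse
assert np.all(np.diag(A) > 0) and np.trace(A) > 0
assert np.all(np.linalg.eigvalsh(np.linalg.inv(A)) > 0)
```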
9.6.12 Small perturbations
If A is positive definite and B is symmetric, then A − tB is positive definite for sufficiently small t.

9.6.13 Hadamard inequality
If A is a positive definite or semi-definite matrix, then
\[
\det(A) \leq \prod_i A_{ii}
\]
See [15, pp. 477].

9.6.14 Hadamard product relation
Assume that $P = AA^T$ and $Q = BB^T$ are positive semi-definite matrices. It then holds that
\[
P \circ Q = RR^T
\]
where the columns of R are constructed as follows: $r_{i+(j-1)N_A} = a_i \circ b_j$, for $i = 1, 2, \dots, N_A$ and $j = 1, 2, \dots, N_B$. The result is unpublished, but reported by Pavel Sakov and Craig Bishop.
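The construction of R is easy to reproduce; the sketch below (sizes chosen arbitrarily, my own illustration) builds R column by column and checks $P \circ Q = RR^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, NA, NB = 5, 3, 4
A = rng.normal(size=(N, NA))
B = rng.normal(size=(N, NB))
P, Q = A @ A.T, B @ B.T

# Build R column by column: r_{i + (j-1) NA} = a_i o b_j (elementwise product of columns)
R = np.empty((N, NA * NB))
for j in range(NB):
    for i in range(NA):
        R[:, i + j * NA] = A[:, i] * B[:, j]

# P o Q = R R^T  (o denotes the Hadamard / elementwise product)
assert np.allclose(P * Q, R @ R.T)
```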
9.7 Singleentry Matrix, The

9.7.1 Definition

The single-entry matrix $J^{ij} \in \mathbb{R}^{n\times n}$ is defined as the matrix which is zero everywhere except in the entry (i, j), in which it is 1. In a 4 × 4 example one might have
\[
J^{23} = \begin{bmatrix} 0&0&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix} \qquad (443)
\]
The single-entry matrix is very useful when working with derivatives of expressions involving matrices.

9.7.2 Swap and Zeros

Assume A to be n × m and $J^{ij}$ to be m × p:
\[
AJ^{ij} = \begin{bmatrix} 0 & 0 & \dots & A_i & \dots & 0 \end{bmatrix} \qquad (444)
\]
i.e. an n × p matrix of zeros with the i-th column of A in place of the j-th column. Assume A to be n × m and $J^{ij}$ to be p × n:
\[
J^{ij}A = \begin{bmatrix} 0\\ \vdots\\ 0\\ A_j\\ 0\\ \vdots\\ 0 \end{bmatrix} \qquad (445)
\]
i.e. a p × m matrix of zeros with the j-th row of A in place of the i-th row.
9.7.3 Rewriting product of elements
\begin{align*}
A_{ki}B_{jl} &= (Ae_ie_j^TB)_{kl} = (AJ^{ij}B)_{kl} && (446)\\
A_{ik}B_{lj} &= (A^Te_ie_j^TB^T)_{kl} = (A^TJ^{ij}B^T)_{kl} && (447)\\
A_{ik}B_{jl} &= (A^Te_ie_j^TB)_{kl} = (A^TJ^{ij}B)_{kl} && (448)\\
A_{ki}B_{lj} &= (Ae_ie_j^TB^T)_{kl} = (AJ^{ij}B^T)_{kl} && (449)
\end{align*}

9.7.4 Properties of the Singleentry Matrix

If i = j:
\[
J^{ij}J^{ij} = J^{ij} \qquad (J^{ij})^T(J^{ij})^T = J^{ij}
\]
\[
J^{ij}(J^{ij})^T = J^{ij} \qquad (J^{ij})^TJ^{ij} = J^{ij}
\]
If i ≠ j:
\[
J^{ij}J^{ij} = 0 \qquad (J^{ij})^T(J^{ij})^T = 0
\]
\[
J^{ij}(J^{ij})^T = J^{ii} \qquad (J^{ij})^TJ^{ij} = J^{jj}
\]

9.7.5 The Singleentry Matrix in Scalar Expressions

Assume A is n × m and J is m × n, then
\[
\operatorname{Tr}(AJ^{ij}) = \operatorname{Tr}(J^{ij}A) = (A^T)_{ij} \qquad (450)
\]
Assume A is n × n, J is n × m and B is m × n, then
\begin{align*}
\operatorname{Tr}(AJ^{ij}B) &= (A^TB^T)_{ij} && (451)\\
\operatorname{Tr}(AJ^{ji}B) &= (BA)_{ij} && (452)\\
\operatorname{Tr}(AJ^{ij}J^{ij}B) &= \operatorname{diag}(A^TB^T)_{ij} && (453)
\end{align*}
Assume A is n × n, $J^{ij}$ is n × m and B is m × n, then
\begin{align*}
x^TAJ^{ij}Bx &= (A^Txx^TB^T)_{ij} && (454)\\
x^TAJ^{ij}J^{ij}Bx &= \operatorname{diag}(A^Txx^TB^T)_{ij} && (455)
\end{align*}

9.7.6 Structure Matrices

The structure matrix is defined by
\[
\frac{\partial A}{\partial A_{ij}} = S^{ij} \qquad (456)
\]
If A has no special structure then
\[
S^{ij} = J^{ij} \qquad (457)
\]
If A is symmetric then
\[
S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij} \qquad (458)
\]
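A quick check of Eq. (450) for all index pairs, with a small helper that builds $J^{ij}$ (the helper is mine, not Cookbook notation):

```python
import numpy as np

n, m = 4, 3
rng = np.random.default_rng(0)
A = rng.normal(size=(n, m))

def J(i, j, rows, cols):
    """Single-entry matrix: zero everywhere except a 1 in entry (i, j)."""
    M = np.zeros((rows, cols))
    M[i, j] = 1.0
    return M

# Tr(A J^{ij}) = Tr(J^{ij} A) = (A^T)_{ij}   with A of size n x m and J^{ij} of size m x n
for i in range(m):
    for j in range(n):
        Jij = J(i, j, m, n)
        assert np.isclose(np.trace(A @ Jij), A.T[i, j])
        assert np.isclose(np.trace(Jij @ A), A.T[i, j])
```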
9.8 Symmetric, Skew-symmetric/Antisymmetric

9.8.1 Symmetric

The matrix A is said to be symmetric if
\[
A = A^T \qquad (459)
\]
Symmetric matrices have many important properties, e.g. that their eigenvalues are real and their eigenvectors orthogonal.

9.8.2 Skew-symmetric/Antisymmetric

The antisymmetric matrix is also known as the skew symmetric matrix. It has the following property from which it is defined:
\[
A = -A^T \qquad (460)
\]
Hereby, it can be seen that antisymmetric matrices always have a zero diagonal. The n × n antisymmetric matrices also have the following properties:
\[
\det(A^T) = \det(-A) = (-1)^n\det(A) \qquad (461)
\]
\[
-\det(A) = \det(-A) = 0, \quad \text{if } n \text{ is odd} \qquad (462)
\]
The eigenvalues of an antisymmetric matrix are placed on the imaginary axis and the eigenvectors are unitary.

9.8.3 Decomposition

A square matrix A can always be written as a sum of a symmetric $A_+$ and an antisymmetric matrix $A_-$:
\[
A = A_+ + A_- \qquad (463)
\]
Such a decomposition could e.g. be
\[
A = \frac{A + A^T}{2} + \frac{A - A^T}{2} = A_+ + A_- \qquad (464)
\]

9.9 Toeplitz Matrices

A Toeplitz matrix T is a matrix where the elements of each diagonal are the same. In the n × n square case, it has the following structure:
\[
T = \begin{bmatrix}
t_{11} & t_{12} & \cdots & t_{1n}\\
t_{21} & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_{12}\\
t_{n1} & \cdots & t_{21} & t_{11}
\end{bmatrix}
= \begin{bmatrix}
t_0 & t_1 & \cdots & t_{n-1}\\
t_{-1} & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_1\\
t_{-(n-1)} & \cdots & t_{-1} & t_0
\end{bmatrix} \qquad (465)
\]
A Toeplitz matrix is persymmetric. If a matrix is persymmetric (or orthosymmetric), it means that the matrix is symmetric about its northeast-southwest diagonal (anti-diagonal) [12]. Persymmetric matrices form a larger class of matrices, since a persymmetric matrix does not necessarily have a Toeplitz structure. There are some special cases of Toeplitz matrices. The symmetric Toeplitz matrix is given by:
\[
T = \begin{bmatrix}
t_0 & t_1 & \cdots & t_{n-1}\\
t_1 & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_1\\
t_{n-1} & \cdots & t_1 & t_0
\end{bmatrix} \qquad (466)
\]
The circular Toeplitz matrix:
\[
T_C = \begin{bmatrix}
t_0 & t_1 & \cdots & t_{n-1}\\
t_{n-1} & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_1\\
t_1 & \cdots & t_{n-1} & t_0
\end{bmatrix} \qquad (467)
\]
The upper triangular Toeplitz matrix:
\[
T_U = \begin{bmatrix}
t_0 & t_1 & \cdots & t_{n-1}\\
0 & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & t_1\\
0 & \cdots & 0 & t_0
\end{bmatrix} \qquad (468)
\]
and the lower triangular Toeplitz matrix:
\[
T_L = \begin{bmatrix}
t_0 & 0 & \cdots & 0\\
t_{-1} & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & 0\\
t_{-(n-1)} & \cdots & t_{-1} & t_0
\end{bmatrix} \qquad (469)
\]

9.9.1 Properties of Toeplitz Matrices
The Toeplitz matrix has some computational advantages. The addition of two Toeplitz matrices can be done with $O(n)$ flops, and multiplication of two Toeplitz matrices can be done in $O(n \ln n)$ flops. Toeplitz equation systems can be solved in $O(n^2)$ flops. The inverse of a positive definite Toeplitz matrix can be found in $O(n^2)$ flops too. The inverse of a Toeplitz matrix is persymmetric. The product of two lower triangular Toeplitz matrices is a Toeplitz matrix. More information on Toeplitz matrices and circulant matrices can be found in [13, 7].
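In practice the $O(n^2)$ Toeplitz solve is available as a library routine; a hedged sketch using SciPy's Levinson-Durbin-based `solve_toeplitz` (the sizes and the diagonal-dominance trick below are my own choices):

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

rng = np.random.default_rng(0)
n = 200
c = rng.normal(size=n); c[0] += n        # first column; large diagonal for stability
r = rng.normal(size=n); r[0] = c[0]      # first row
b = rng.normal(size=n)

T = toeplitz(c, r)
# Levinson-Durbin style solver: O(n^2) instead of the O(n^3) general solve
x_fast = solve_toeplitz((c, r), b)
x_ref = np.linalg.solve(T, b)
assert np.allclose(x_fast, x_ref)
```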
9.10 Transition matrices

A square matrix P is a transition matrix, also known as a stochastic matrix or probability matrix, if
\[
0 \leq (P)_{ij} \leq 1, \qquad \sum_j (P)_{ij} = 1
\]
The transition matrix usually describes the probability of moving from state i to j in one step and is closely related to Markov processes. Transition matrices have the following properties:
\begin{align*}
\text{Prob}[i \to j \text{ in 1 step}] &= (P)_{ij} && (470)\\
\text{Prob}[i \to j \text{ in 2 steps}] &= (P^2)_{ij} && (471)\\
\text{Prob}[i \to j \text{ in } k \text{ steps}] &= (P^k)_{ij} && (472)\\
\text{If all rows are identical} &\;\Rightarrow\; P^n = P && (473)\\
\alpha P &= \alpha, \quad \alpha \text{ is called invariant} && (474)
\end{align*}
where $\alpha$ is a so-called stationary probability vector, i.e., $0 \leq \alpha_i \leq 1$ and $\sum_i \alpha_i = 1$.
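A stationary vector can be computed as a left eigenvector of P for eigenvalue 1; the following sketch (a made-up 3-state chain, my own example) illustrates the k-step rule and the invariance $\alpha P = \alpha$:

```python
import numpy as np

# A small 3-state transition matrix (rows sum to one)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)

# k-step transition probabilities are given by P^k
P5 = np.linalg.matrix_power(P, 5)

# Stationary probability vector: alpha P = alpha, i.e. a left eigenvector of P
# for eigenvalue 1, normalised to sum to one.
w, V = np.linalg.eig(P.T)
alpha = np.real(V[:, np.argmin(np.abs(w - 1.0))])
alpha /= alpha.sum()
assert np.allclose(alpha @ P, alpha)
```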
9.11 Units, Permutation and Shift

9.11.1 Unit vector

Let $e_i \in \mathbb{R}^{n\times 1}$ be the i-th unit vector, i.e. the vector which is zero in all entries except the i-th, at which it is 1.

9.11.2 Rows and Columns
\[
i\text{-th row of } A = e_i^T A \qquad (475)
\]
\[
j\text{-th column of } A = A e_j \qquad (476)
\]

9.11.3 Permutations

Let P be some permutation matrix, e.g.
\[
P = \begin{bmatrix} 0&1&0\\ 1&0&0\\ 0&0&1 \end{bmatrix}
  = \begin{bmatrix} e_2 & e_1 & e_3 \end{bmatrix}
  = \begin{bmatrix} e_2^T\\ e_1^T\\ e_3^T \end{bmatrix} \qquad (477)
\]
For permutation matrices it holds that
\[
PP^T = I \qquad (478)
\]
and that
\[
AP = \begin{bmatrix} Ae_2 & Ae_1 & Ae_3 \end{bmatrix}, \qquad
PA = \begin{bmatrix} e_2^TA\\ e_1^TA\\ e_3^TA \end{bmatrix} \qquad (479)
\]
That is, the first is a matrix which has the columns of A but in permuted sequence, and the second is a matrix which has the rows of A but in permuted sequence.

9.11.4 Translation, Shift or Lag Operators

Let L denote the lag (or 'translation' or 'shift') operator defined on a 4 × 4 example by
\[
L = \begin{bmatrix} 0&0&0&0\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0 \end{bmatrix} \qquad (480)
\]
i.e. a matrix of zeros with ones on the sub-diagonal, $(L)_{ij} = \delta_{i,j+1}$. With some signal $x_t$ for $t = 1,\dots,N$, the n-th power of the lag operator shifts the indices, i.e.
\[
(L^n x)_t = \begin{cases} 0 & \text{for } t = 1, \dots, n\\ x_{t-n} & \text{for } t = n+1, \dots, N \end{cases} \qquad (481)
\]
A related but slightly different matrix is the 'recurrent shifted' operator defined on a 4 × 4 example by
\[
\hat L = \begin{bmatrix} 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0 \end{bmatrix} \qquad (482)
\]
i.e. a matrix defined by $(\hat L)_{ij} = \delta_{i,j+1} + \delta_{i,1}\delta_{j,\dim(L)}$. On a signal x it has the effect
\[
(\hat L^n x)_t = x_{t'}, \qquad t' = [(t - n) \bmod N] + 1 \qquad (483)
\]
That is, $\hat L$ is like the shift operator L except that it 'wraps' the signal as if it were periodic and shifted (substituting the zeros with the rear end of the signal). Note that $\hat L$ is invertible and orthogonal, i.e.
\[
\hat L^{-1} = \hat L^T \qquad (484)
\]

9.12 Vandermonde Matrices

A Vandermonde matrix has the form [15]
\[
V = \begin{bmatrix}
1 & v_1 & v_1^2 & \cdots & v_1^{n-1}\\
1 & v_2 & v_2^2 & \cdots & v_2^{n-1}\\
\vdots & \vdots & \vdots & & \vdots\\
1 & v_n & v_n^2 & \cdots & v_n^{n-1}
\end{bmatrix} \qquad (485)
\]
The transpose of V is also said to be a Vandermonde matrix. The determinant is given by [29]
\[
\det V = \prod_{i>j}(v_i - v_j) \qquad (486)
\]
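Both the recurrent shift operator and the Vandermonde determinant are easy to verify numerically; the sketch below (my own example, with $\hat L$ built for N = 4) checks Eqs. (484) and (486):

```python
import numpy as np

# Recurrent shift operator L_hat on a length-4 signal: a circular shift
N = 4
L_hat = np.zeros((N, N))
L_hat[0, -1] = 1.0
L_hat[np.arange(1, N), np.arange(N - 1)] = 1.0   # ones on the sub-diagonal
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(L_hat @ x, np.roll(x, 1))          # wraps the signal
assert np.allclose(np.linalg.inv(L_hat), L_hat.T)     # Eq. (484)

# Vandermonde matrix and its determinant, Eq. (486)
v = np.array([1.5, -0.3, 2.0, 0.7])
V = np.vander(v, increasing=True)                     # rows are [1, v_i, v_i^2, ...]
det_prod = np.prod([v[i] - v[j] for i in range(len(v)) for j in range(i)])
assert np.isclose(np.linalg.det(V), det_prod)
```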
10 Functions and Operators

10.1 Functions and Series

10.1.1 Finite Series
\[
(X^n - I)(X - I)^{-1} = I + X + X^2 + \dots + X^{n-1} \qquad (487)
\]
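A direct numerical check of Eq. (487); any X with X − I invertible will do, and the example below is only a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = 0.5 * rng.normal(size=(3, 3))          # a generic X with (X - I) invertible
I = np.eye(3)

lhs = (np.linalg.matrix_power(X, n) - I) @ np.linalg.inv(X - I)
rhs = sum(np.linalg.matrix_power(X, k) for k in range(n))   # I + X + ... + X^{n-1}
assert np.allclose(lhs, rhs)
```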
10.1.2 Taylor Expansion of Scalar Function

Consider some scalar function f(x) which takes the vector x as an argument. This we can Taylor expand around $x_0$:
\[
f(x) \cong f(x_0) + g(x_0)^T(x - x_0) + \tfrac{1}{2}(x - x_0)^T H(x_0)(x - x_0) \qquad (488)
\]
where
\[
g(x_0) = \left.\frac{\partial f(x)}{\partial x}\right|_{x_0}, \qquad
H(x_0) = \left.\frac{\partial^2 f(x)}{\partial x\,\partial x^T}\right|_{x_0}
\]
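As an illustration (my own example, not Cookbook material), the expansion can be checked for a function with analytic gradient and Hessian:

```python
import numpy as np

# Second-order Taylor check for f(x) = sum(exp(x)) around x0
x0 = np.array([0.1, -0.2, 0.3])
dx = 1e-2 * np.array([1.0, -2.0, 0.5])

f = lambda x: np.sum(np.exp(x))
g = np.exp(x0)                 # gradient at x0
H = np.diag(np.exp(x0))        # Hessian at x0

taylor = f(x0) + g @ dx + 0.5 * dx @ H @ dx
assert abs(f(x0 + dx) - taylor) < 1e-5     # remaining error is O(||dx||^3)
```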
10.1.3 Matrix Functions by Infinite Series

As for analytical functions in one dimension, one can define a matrix function for square matrices X by an infinite series
\[
f(X) = \sum_{n=0}^{\infty} c_n X^n \qquad (489)
\]
assuming the limit exists and is finite. If the coefficients $c_n$ fulfil $\sum_n c_n x^n < \infty$, then one can prove that the above series exists and is finite, see [1]. Thus for any analytical function f(x) there exists a corresponding matrix function f(X) constructed by the Taylor expansion. Using this one can prove the following results:

1) A matrix A is a zero of its own characteristic polynomial [1]:
\[
p(\lambda) = \det(I\lambda - A) = \sum_n c_n\lambda^n \quad\Rightarrow\quad p(A) = 0 \qquad (490)
\]
2) If A is square it holds that [1]
\[
A = UBU^{-1} \quad\Rightarrow\quad f(A) = Uf(B)U^{-1} \qquad (491)
\]
3) A useful fact when using power series is that
\[
A^n \to 0 \text{ for } n \to \infty, \qquad \text{if } |A| < 1 \qquad (492)
\]

10.1.4 Identity and commutations

It holds for an analytical matrix function f(X) that
\[
f(AB)A = Af(BA) \qquad (493)
\]
see B.1.2 for a proof.

10.1.5 Exponential Matrix Function

In analogy to the ordinary scalar exponential function, one can define exponential and logarithmic matrix functions:
\begin{align*}
e^{A} &\equiv \sum_{n=0}^{\infty}\frac{1}{n!}A^n = I + A + \tfrac{1}{2}A^2 + \dots && (494)\\
e^{-A} &\equiv \sum_{n=0}^{\infty}\frac{1}{n!}(-1)^nA^n = I - A + \tfrac{1}{2}A^2 - \dots && (495)\\
e^{tA} &\equiv \sum_{n=0}^{\infty}\frac{1}{n!}(tA)^n = I + tA + \tfrac{1}{2}t^2A^2 + \dots && (496)\\
\ln(I + A) &\equiv \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}A^n = A - \tfrac{1}{2}A^2 + \tfrac{1}{3}A^3 - \dots && (497)
\end{align*}
Some of the properties of the exponential function are [1]
\begin{align*}
e^{A}e^{B} &= e^{A+B} \quad \text{if } AB = BA && (498)\\
(e^{A})^{-1} &= e^{-A} && (499)\\
\frac{d}{dt}e^{tA} &= Ae^{tA} = e^{tA}A, \quad t \in \mathbb{R} && (500)\\
\frac{d}{dt}\operatorname{Tr}(e^{tA}) &= \operatorname{Tr}(Ae^{tA}) && (501)\\
\det(e^{A}) &= e^{\operatorname{Tr}(A)} && (502)
\end{align*}

10.1.6 Trigonometric Functions
\begin{align*}
\sin(A) &\equiv \sum_{n=0}^{\infty}\frac{(-1)^nA^{2n+1}}{(2n+1)!} = A - \tfrac{1}{3!}A^3 + \tfrac{1}{5!}A^5 - \dots && (503)\\
\cos(A) &\equiv \sum_{n=0}^{\infty}\frac{(-1)^nA^{2n}}{(2n)!} = I - \tfrac{1}{2!}A^2 + \tfrac{1}{4!}A^4 - \dots && (504)
\end{align*}

10.2 Kronecker and Vec Operator

10.2.1 The Kronecker Product

The Kronecker product of an m × n matrix A and an r × q matrix B is an mr × nq matrix, A ⊗ B, defined as
\[
A \otimes B = \begin{bmatrix}
A_{11}B & A_{12}B & \dots & A_{1n}B\\
A_{21}B & A_{22}B & \dots & A_{2n}B\\
\vdots & \vdots & & \vdots\\
A_{m1}B & A_{m2}B & \dots & A_{mn}B
\end{bmatrix} \qquad (505)
\]
The Kronecker product has the following properties (see [19]):
\begin{align*}
A \otimes (B + C) &= A \otimes B + A \otimes C && (506)\\
A \otimes B &\neq B \otimes A \quad \text{in general} && (507)\\
A \otimes (B \otimes C) &= (A \otimes B) \otimes C && (508)\\
(\alpha_A A \otimes \alpha_B B) &= \alpha_A\alpha_B(A \otimes B) && (509)\\
(A \otimes B)^T &= A^T \otimes B^T && (510)\\
(A \otimes B)(C \otimes D) &= AC \otimes BD && (511)\\
(A \otimes B)^{-1} &= A^{-1} \otimes B^{-1} && (512)\\
(A \otimes B)^+ &= A^+ \otimes B^+ && (513)\\
\operatorname{rank}(A \otimes B) &= \operatorname{rank}(A)\operatorname{rank}(B) && (514)\\
\operatorname{Tr}(A \otimes B) &= \operatorname{Tr}(A)\operatorname{Tr}(B) = \operatorname{Tr}(\Lambda_A \otimes \Lambda_B) && (515)\\
\det(A \otimes B) &= \det(A)^{\operatorname{rank}(B)}\det(B)^{\operatorname{rank}(A)} && (516)\\
\{\operatorname{eig}(A \otimes B)\} &= \{\operatorname{eig}(B \otimes A)\} \quad \text{if A, B are square} && (517)\\
\{\operatorname{eig}(A \otimes B)\} &= \{\operatorname{eig}(A)\operatorname{eig}(B)^T\} \quad \text{if A, B are symmetric and square} && (518)\\
\operatorname{eig}(A \otimes B) &= \operatorname{eig}(A) \otimes \operatorname{eig}(B) && (519)
\end{align*}
Where $\{\lambda_i\}$ denotes the set of values $\lambda_i$, that is, the values in no particular order or structure, and $\Lambda_A$ denotes the diagonal matrix with the eigenvalues of A.

10.2.2 The Vec Operator

The vec-operator applied on a matrix A stacks the columns into a vector, i.e. for a 2 × 2 matrix
\[
A = \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix}, \qquad
\operatorname{vec}(A) = \begin{bmatrix} A_{11}\\ A_{21}\\ A_{12}\\ A_{22} \end{bmatrix}
\]
Properties of the vec-operator include (see [19]):
\begin{align*}
\operatorname{vec}(AXB) &= (B^T \otimes A)\operatorname{vec}(X) && (520)\\
\operatorname{Tr}(A^TB) &= \operatorname{vec}(A)^T\operatorname{vec}(B) && (521)\\
\operatorname{vec}(A + B) &= \operatorname{vec}(A) + \operatorname{vec}(B) && (522)\\
\operatorname{vec}(\alpha A) &= \alpha\cdot\operatorname{vec}(A) && (523)\\
a^TXBX^Tc &= \operatorname{vec}(X)^T(B^T \otimes ca^T)\operatorname{vec}(X) && (524)
\end{align*}
See B.1.1 for a proof for Eq. 524.

10.3 Vector Norms

10.3.1 Examples
\begin{align*}
\|x\|_1 &= \sum_i |x_i| && (525)\\
\|x\|_2^2 &= x^Hx && (526)\\
\|x\|_p &= \Bigl[\sum_i |x_i|^p\Bigr]^{1/p} && (527)\\
\|x\|_\infty &= \max_i |x_i| && (528)
\end{align*}
Further reading in e.g. [12, p. 52].
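The vec/Kronecker identities are convenient to sanity-check with `np.kron`; the sketch below (dimensions chosen arbitrarily, my own illustration) verifies Eqs. (511), (520), (521) and, in the real case, the quadratic form of Eq. (524):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
X = rng.normal(size=(4, 2))
B = rng.normal(size=(2, 5))

vec = lambda M: M.reshape(-1, order='F')        # stack the columns (column-major)

# Eq. (520): vec(AXB) = (B^T kron A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))

# Eq. (521): Tr(A^T C) = vec(A)^T vec(C)
C = rng.normal(size=A.shape)
assert np.isclose(np.trace(A.T @ C), vec(A) @ vec(C))

# Eq. (511): (A kron B)(C kron D) = AC kron BD
A2, B2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))
C2, D2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 5))
assert np.allclose(np.kron(A2, B2) @ np.kron(C2, D2), np.kron(A2 @ C2, B2 @ D2))

# Eq. (524), real case: a^T X B X^T c = vec(X)^T (B^T kron c a^T) vec(X)
n, m = 4, 3
Xq = rng.normal(size=(n, m))
Bq = rng.normal(size=(m, m))
a, c = rng.normal(size=n), rng.normal(size=n)
lhs = a @ Xq @ Bq @ Xq.T @ c
rhs = vec(Xq) @ np.kron(Bq.T, np.outer(c, a)) @ vec(Xq)
assert np.isclose(lhs, rhs)
```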
10.4 Matrix Norms

10.4.1 Definitions

A matrix norm is a mapping which fulfils
\begin{align*}
\|A\| &\geq 0 && (529)\\
\|A\| &= 0 \iff A = 0 && (530)\\
\|cA\| &= |c|\,\|A\|, \quad c \in \mathbb{R} && (531)\\
\|A + B\| &\leq \|A\| + \|B\| && (532)
\end{align*}

10.4.2 Induced Norm or Operator Norm

An induced norm is a matrix norm induced by a vector norm by the following:
\[
\|A\| = \sup\{\|Ax\| \,:\, \|x\| = 1\} \qquad (533)
\]
where $\|\cdot\|$ on the left side is the induced matrix norm, while $\|\cdot\|$ on the right side denotes the vector norm. For induced norms it holds that
\begin{align*}
\|I\| &= 1 && (534)\\
\|Ax\| &\leq \|A\|\cdot\|x\|, \quad \text{for all } A, x && (535)\\
\|AB\| &\leq \|A\|\cdot\|B\|, \quad \text{for all } A, B && (536)
\end{align*}

10.4.3 Examples
\begin{align*}
\|A\|_1 &= \max_j \sum_i |A_{ij}| && (537)\\
\|A\|_2 &= \sqrt{\max\operatorname{eig}(A^HA)} && (538)\\
\|A\|_p &= \max_{\|x\|_p = 1} \|Ax\|_p && (539)\\
\|A\|_\infty &= \max_i \sum_j |A_{ij}| && (540)\\
\|A\|_F &= \sqrt{\sum_{ij} |A_{ij}|^2} = \sqrt{\operatorname{Tr}(AA^H)} \quad \text{(Frobenius)} && (541)\\
\|A\|_{\max} &= \max_{ij} |A_{ij}| && (542)\\
\|A\|_{KF} &= \|\operatorname{sing}(A)\|_1 \quad \text{(Ky Fan)} && (543)
\end{align*}
where sing(A) is the vector of singular values of the matrix A.

10.4.4 Inequalities

E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m × n matrix and d = rank(A):

             ||A||max   ||A||1     ||A||inf   ||A||2     ||A||F    ||A||KF
||A||max        -          1          1          1          1         1
||A||1          m          -          m       sqrt(m)    sqrt(m)   sqrt(m)
||A||inf        n          n          -       sqrt(n)    sqrt(n)   sqrt(n)
||A||2       sqrt(mn)   sqrt(n)    sqrt(m)       -          1         1
||A||F       sqrt(mn)   sqrt(n)    sqrt(m)    sqrt(d)       -         1
||A||KF     sqrt(mnd)  sqrt(nd)   sqrt(md)       d       sqrt(d)      -

which is to be read as, e.g.
\[
\|A\|_2 \leq \sqrt{m}\cdot\|A\|_\infty \qquad (544)
\]

10.4.5 Condition Number

The 2-norm of A equals $\sqrt{\max(\operatorname{eig}(A^TA))}$ [12, p. 57]. For a symmetric, positive definite matrix, this reduces to $\max(\operatorname{eig}(A))$. The condition number based on the 2-norm thus reduces to
\[
\|A\|_2\|A^{-1}\|_2 = \max(\operatorname{eig}(A))\max(\operatorname{eig}(A^{-1})) = \frac{\max(\operatorname{eig}(A))}{\min(\operatorname{eig}(A))} \qquad (545)
\]

10.5 Rank

10.5.1 Sylvester's Inequality

If A is m × n and B is n × r, then
\[
\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB) \leq \min\{\operatorname{rank}(A), \operatorname{rank}(B)\} \qquad (546)
\]

10.6 Integral Involving Dirac Delta Functions

Assuming A to be square, then
\[
\int p(s)\,\delta(x - As)\,ds = \frac{1}{\det(A)}\,p(A^{-1}x) \qquad (547)
\]
Assuming A to be "underdetermined", i.e. "tall", then
\[
\int p(s)\,\delta(x - As)\,ds = \begin{cases} \dfrac{1}{\sqrt{\det(A^TA)}}\,p(A^+x) & \text{if } x = AA^+x\\[4pt] 0 & \text{elsewhere} \end{cases} \qquad (548)
\]
See [9].
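For orientation, the norm examples and the condition-number remark can be reproduced directly with NumPy (my own sketch; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 6))

# Induced norms and the Frobenius norm, cf. Eqs. (537)-(541)
norm_1 = np.abs(A).sum(axis=0).max()            # max column sum
norm_inf = np.abs(A).sum(axis=1).max()          # max row sum
norm_2 = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())
norm_F = np.sqrt(np.trace(A @ A.T))
assert np.isclose(norm_1, np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
assert np.isclose(norm_2, np.linalg.norm(A, 2))
assert np.isclose(norm_F, np.linalg.norm(A, 'fro'))

# One entry of the inequality table: ||A||_2 <= sqrt(m) ||A||_inf for an m x n matrix
m = A.shape[0]
assert norm_2 <= np.sqrt(m) * norm_inf + 1e-12

# Condition number of a symmetric positive definite matrix: max(eig)/min(eig)
S = A @ A.T + 0.1 * np.eye(m)
w = np.linalg.eigvalsh(S)
assert np.isclose(np.linalg.cond(S, 2), w.max() / w.min())
```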
10.7 Miscellaneous

For any A it holds that
\[
\operatorname{rank}(A) = \operatorname{rank}(A^T) = \operatorname{rank}(AA^T) = \operatorname{rank}(A^TA) \qquad (549)
\]
It holds that
\[
A \text{ is positive definite} \iff \exists B \text{ invertible, such that } A = BB^T \qquad (550)
\]

A One-dimensional Results

A.1 Gaussian

A.1.1 Density
\[
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \qquad (551)
\]

A.1.2 Normalization
\begin{align*}
\int e^{-\frac{(s-\mu)^2}{2\sigma^2}}\,ds &= \sqrt{2\pi\sigma^2} && (552)\\
\int e^{-(ax^2+bx+c)}\,dx &= \sqrt{\frac{\pi}{a}}\exp\!\left[\frac{b^2-4ac}{4a}\right] && (553)\\
\int e^{c_2x^2+c_1x+c_0}\,dx &= \sqrt{\frac{\pi}{-c_2}}\exp\!\left[\frac{c_1^2-4c_2c_0}{-4c_2}\right] && (554)
\end{align*}

A.1.3 Derivatives
\begin{align*}
\frac{\partial p(x)}{\partial\mu} &= p(x)\frac{(x-\mu)}{\sigma^2} && (555)\\
\frac{\partial\ln p(x)}{\partial\mu} &= \frac{(x-\mu)}{\sigma^2} && (556)\\
\frac{\partial p(x)}{\partial\sigma} &= p(x)\frac{1}{\sigma}\left[\frac{(x-\mu)^2}{\sigma^2}-1\right] && (557)\\
\frac{\partial\ln p(x)}{\partial\sigma} &= \frac{1}{\sigma}\left[\frac{(x-\mu)^2}{\sigma^2}-1\right] && (558)
\end{align*}

A.1.4 Completing the Squares
\[
c_2x^2 + c_1x + c_0 = -a(x-b)^2 + w
\]
with
\[
-a = c_2, \qquad b = -\frac{c_1}{2c_2}, \qquad w = c_0 - \frac{c_1^2}{4c_2}
\]
or equivalently
\[
c_2x^2 + c_1x + c_0 = -\frac{1}{2\sigma^2}(x-\mu)^2 + d
\]
with
\[
\mu = \frac{-c_1}{2c_2}, \qquad \sigma^2 = \frac{-1}{2c_2}, \qquad d = c_0 - \frac{c_1^2}{4c_2}
\]

A.1.5 Moments

If the density is expressed by
\[
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left[-\frac{(s-\mu)^2}{2\sigma^2}\right]
\quad\text{or}\quad
p(x) = C\exp(c_2x^2 + c_1x) \qquad (559)
\]
then the first few basic moments are
\begin{align*}
\langle x\rangle &= \mu = \frac{-c_1}{2c_2}\\
\langle x^2\rangle &= \sigma^2 + \mu^2 = \frac{-1}{2c_2} + \left(\frac{-c_1}{2c_2}\right)^2\\
\langle x^3\rangle &= 3\sigma^2\mu + \mu^3 = \frac{c_1}{(2c_2)^2}\left[3 - \frac{c_1^2}{2c_2}\right]\\
\langle x^4\rangle &= \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4
   = \left(\frac{c_1}{2c_2}\right)^4 + 6\left(\frac{c_1}{2c_2}\right)^2\frac{-1}{2c_2} + 3\left(\frac{1}{2c_2}\right)^2
\end{align*}
and the central moments are
\begin{align*}
\langle (x-\mu)\rangle &= 0\\
\langle (x-\mu)^2\rangle &= \sigma^2 = \left[\frac{-1}{2c_2}\right]\\
\langle (x-\mu)^3\rangle &= 0\\
\langle (x-\mu)^4\rangle &= 3\sigma^4 = 3\left[\frac{1}{2c_2}\right]^2
\end{align*}
A kind of pseudo-moments (un-normalized integrals) can easily be derived as
\[
\int \exp(c_2x^2 + c_1x)\,x^n\,dx = Z\langle x^n\rangle
= \sqrt{\frac{\pi}{-c_2}}\exp\!\left[\frac{c_1^2}{-4c_2}\right]\langle x^n\rangle \qquad (560)
\]
From the un-centralized moments one can derive other entities like
\begin{align*}
\langle x^2\rangle - \langle x\rangle^2 &= \sigma^2 = \frac{-1}{2c_2}\\
\langle x^3\rangle - \langle x^2\rangle\langle x\rangle &= 2\sigma^2\mu = \frac{2c_1}{(2c_2)^2}\\
\langle x^4\rangle - \langle x^2\rangle^2 &= 2\sigma^4 + 4\mu^2\sigma^2 = \frac{2}{(2c_2)^2}\left[1 - \frac{c_1^2}{c_2}\right]
\end{align*}

A.2 One Dimensional Mixture of Gaussians

A.2.1 Density and Normalization
\[
p(s) = \sum_k^K \frac{\rho_k}{\sqrt{2\pi\sigma_k^2}}\exp\!\left[-\frac{1}{2}\frac{(s-\mu_k)^2}{\sigma_k^2}\right] \qquad (561)
\]

A.2.2 Moments

A useful fact of MoG is that
\[
\langle x^n\rangle = \sum_k \rho_k\langle x^n\rangle_k \qquad (562)
\]
where $\langle\cdot\rangle_k$ denotes average with respect to the k-th component. We can calculate the first four moments from the densities
\[
p(x) = \sum_k\rho_k\frac{1}{\sqrt{2\pi\sigma_k^2}}\exp\!\left[-\frac{1}{2}\frac{(x-\mu_k)^2}{\sigma_k^2}\right] \qquad (563)
\]
\[
p(x) = \sum_k\rho_kC_k\exp\!\left[c_{k2}x^2 + c_{k1}x\right] \qquad (564)
\]
as
\begin{align*}
\langle x\rangle &= \sum_k\rho_k\mu_k = \sum_k\rho_k\left[\frac{-c_{k1}}{2c_{k2}}\right]\\
\langle x^2\rangle &= \sum_k\rho_k(\sigma_k^2+\mu_k^2) = \sum_k\rho_k\left[\frac{-1}{2c_{k2}}+\left(\frac{-c_{k1}}{2c_{k2}}\right)^2\right]\\
\langle x^3\rangle &= \sum_k\rho_k(3\sigma_k^2\mu_k+\mu_k^3) = \sum_k\rho_k\left[\frac{c_{k1}}{(2c_{k2})^2}\left(3-\frac{c_{k1}^2}{2c_{k2}}\right)\right]\\
\langle x^4\rangle &= \sum_k\rho_k(\mu_k^4+6\mu_k^2\sigma_k^2+3\sigma_k^4)
  = \sum_k\rho_k\left[\left(\frac{1}{2c_{k2}}\right)^2\left(\frac{c_{k1}^2}{2c_{k2}}\left(\frac{c_{k1}^2}{2c_{k2}}-6\right)+3\right)\right]
\end{align*}
From the un-centralized moments one can derive other entities like
\begin{align*}
\langle x^2\rangle-\langle x\rangle^2 &= \sum_{k,k'}\rho_k\rho_{k'}\bigl[\mu_k^2+\sigma_k^2-\mu_k\mu_{k'}\bigr]\\
\langle x^3\rangle-\langle x^2\rangle\langle x\rangle &= \sum_{k,k'}\rho_k\rho_{k'}\bigl[3\sigma_k^2\mu_k+\mu_k^3-(\sigma_k^2+\mu_k^2)\mu_{k'}\bigr]\\
\langle x^4\rangle-\langle x^2\rangle^2 &= \sum_{k,k'}\rho_k\rho_{k'}\bigl[\mu_k^4+6\mu_k^2\sigma_k^2+3\sigma_k^4-(\sigma_k^2+\mu_k^2)(\sigma_{k'}^2+\mu_{k'}^2)\bigr]
\end{align*}
If all the Gaussians are centered, i.e. $\mu_k = 0$ for all k, then
\begin{align*}
\langle x\rangle &= 0\\
\langle x^2\rangle &= \sum_k\rho_k\sigma_k^2 = \sum_k\rho_k\left[\frac{-1}{2c_{k2}}\right]\\
\langle x^3\rangle &= 0\\
\langle x^4\rangle &= \sum_k\rho_k3\sigma_k^4 = \sum_k\rho_k3\left[\frac{-1}{2c_{k2}}\right]^2
\end{align*}
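A small check (my own, using SciPy quadrature) of the second-moment formula for a mixture of Gaussians:

```python
import numpy as np
from scipy.integrate import quad

# Two-component mixture of Gaussians
rho = np.array([0.3, 0.7])
mu = np.array([-1.0, 2.0])
sig = np.array([0.5, 1.5])

def p(x):
    return np.sum(rho / np.sqrt(2 * np.pi * sig**2)
                  * np.exp(-0.5 * (x - mu)**2 / sig**2))

# <x^2> = sum_k rho_k (sigma_k^2 + mu_k^2), checked against numerical integration
m2_formula = np.sum(rho * (sig**2 + mu**2))
m2_numeric, _ = quad(lambda x: x**2 * p(x), -20, 20)
assert np.isclose(m2_formula, m2_numeric, rtol=1e-6)
```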
A.2.3 Derivatives

Defining $p(s) = \sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)$ we get for a parameter $\theta_j$ of the j-th component
\[
\frac{\partial\ln p(s)}{\partial\theta_j} = \frac{\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)}{\sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)}\,
\frac{\partial\ln\bigl(\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)\bigr)}{\partial\theta_j} \qquad (565)
\]
that is,
\begin{align*}
\frac{\partial\ln p(s)}{\partial\rho_j} &= \frac{\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)}{\sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)}\,\frac{1}{\rho_j} && (566)\\
\frac{\partial\ln p(s)}{\partial\mu_j} &= \frac{\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)}{\sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)}\,\frac{(s-\mu_j)}{\sigma_j^2} && (567)\\
\frac{\partial\ln p(s)}{\partial\sigma_j} &= \frac{\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)}{\sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)}\,\frac{1}{\sigma_j}\left[\frac{(s-\mu_j)^2}{\sigma_j^2}-1\right] && (568)
\end{align*}
Note that $\rho_k$ must be constrained to be proper ratios. Defining the ratios by $\rho_j = e^{r_j}/\sum_k e^{r_k}$, we obtain
\[
\frac{\partial\ln p(s)}{\partial r_j} = \sum_l\frac{\partial\ln p(s)}{\partial\rho_l}\frac{\partial\rho_l}{\partial r_j}
\quad\text{where}\quad
\frac{\partial\rho_l}{\partial r_j} = \rho_l(\delta_{lj}-\rho_j) \qquad (569)
\]

B Proofs and Details

B.1 Misc Proofs

B.1.1 Proof of Equation 524

The following proof is the work of Florian Roemer. Note that the vectors and matrices below can be complex and the notation $X^H$ is used for transpose and conjugate, while $X^T$ is only the transpose of the complex matrix.

Define the row vector $y = a^HXB$ and the column vector $z = X^Hc$. Then
\[
a^TXBX^Tc = yz = z^Ty^T
\]
Note that y can be rewritten as $\operatorname{vec}(y)^T$, which is the same as
\[
\operatorname{vec}(\operatorname{conj}(y))^H = \operatorname{vec}\bigl(a^T\operatorname{conj}(X)\operatorname{conj}(B)\bigr)^H
\]
where "conj" means complex conjugated. Applying the vec rule for linear forms, Eq. 520, we get
\[
y = \bigl((B^H\otimes a^T)\operatorname{vec}(\operatorname{conj}(X))\bigr)^H = \operatorname{vec}(X)^T(B\otimes\operatorname{conj}(a))
\]
where we have also used the rule for the transpose of Kronecker products. For $y^T$ this yields $(B^T\otimes a^H)\operatorname{vec}(X)$. Similarly we can rewrite z, which is the same as $\operatorname{vec}(z^T) = \operatorname{vec}(c^T\operatorname{conj}(X))$. Applying again Eq. 520, we get
\[
z = (I\otimes c^T)\operatorname{vec}(\operatorname{conj}(X))
\]
where I is the identity matrix. For $z^T$ we obtain $\operatorname{vec}(X)^H(I\otimes c)$. Finally, the original expression is $z^Ty^T$, which now takes the form
\[
\operatorname{vec}(X)^H(I\otimes c)(B^T\otimes a^H)\operatorname{vec}(X)
\]
The final step is to apply the rule for products of Kronecker products and by that combine the Kronecker products. This gives
\[
\operatorname{vec}(X)^H(B^T\otimes ca^H)\operatorname{vec}(X)
\]
which is the desired result.

B.1.2 Proof of Equation 493

For any analytical function f(X) of a matrix argument X, it holds that
\[
f(AB)A = \Bigl[\sum_{n=0}^{\infty}c_n(AB)^n\Bigr]A
= \sum_{n=0}^{\infty}c_n(AB)^nA
= \sum_{n=0}^{\infty}c_nA(BA)^n
= A\sum_{n=0}^{\infty}c_n(BA)^n
= Af(BA)
\]

B.1.3 Proof of Equation 91

Essentially we need to calculate
\begin{align*}
\frac{\partial(X^n)_{kl}}{\partial X_{ij}}
&= \frac{\partial}{\partial X_{ij}}\sum_{u_1,\dots,u_{n-1}} X_{k,u_1}X_{u_1,u_2}\cdots X_{u_{n-1},l}\\
&= \delta_{k,i}\delta_{u_1,j}X_{u_1,u_2}\cdots X_{u_{n-1},l}
  + X_{k,u_1}\delta_{u_1,i}\delta_{u_2,j}\cdots X_{u_{n-1},l}
  + \dots
  + X_{k,u_1}X_{u_1,u_2}\cdots\delta_{u_{n-1},i}\delta_{l,j}\\
&= \sum_{r=0}^{n-1}(X^r)_{ki}(X^{n-1-r})_{jl}\\
&= \sum_{r=0}^{n-1}(X^rJ^{ij}X^{n-1-r})_{kl}
\end{align*}
Using the properties of the single entry matrix found in Sec. 9.7.4, the result follows easily.

B.1.4 Details on Eq. 571
\begin{align*}
\partial\det(X^HAX) &= \det(X^HAX)\operatorname{Tr}\bigl[(X^HAX)^{-1}\partial(X^HAX)\bigr]\\
&= \det(X^HAX)\operatorname{Tr}\bigl[(X^HAX)^{-1}\bigl(\partial(X^H)AX + X^H\partial(AX)\bigr)\bigr]\\
&= \det(X^HAX)\bigl(\operatorname{Tr}\bigl[(X^HAX)^{-1}\partial(X^H)AX\bigr] + \operatorname{Tr}\bigl[(X^HAX)^{-1}X^H\partial(AX)\bigr]\bigr)\\
&= \det(X^HAX)\bigl(\operatorname{Tr}\bigl[AX(X^HAX)^{-1}\partial(X^H)\bigr] + \operatorname{Tr}\bigl[(X^HAX)^{-1}X^HA\,\partial(X)\bigr]\bigr)
\end{align*}
First, the derivative is found with respect to the real part of X:
\begin{align*}
\frac{\partial\det(X^HAX)}{\partial\Re X}
&= \det(X^HAX)\Bigl(\operatorname{Tr}\Bigl[AX(X^HAX)^{-1}\frac{\partial(X^H)}{\partial\Re X}\Bigr]
  + \operatorname{Tr}\Bigl[(X^HAX)^{-1}X^HA\frac{\partial(X)}{\partial\Re X}\Bigr]\Bigr)\\
&= \det(X^HAX)\Bigl(AX(X^HAX)^{-1} + \bigl((X^HAX)^{-1}X^HA\bigr)^T\Bigr)
\end{align*}
Through the calculations, (100) and (240) were used.
In addition, by use of (241), the derivative is found with respect to the imaginary part of X:
\begin{align*}
i\,\frac{\partial\det(X^HAX)}{\partial\Im X}
&= i\det(X^HAX)\Bigl(\operatorname{Tr}\Bigl[AX(X^HAX)^{-1}\frac{\partial(X^H)}{\partial\Im X}\Bigr]
  + \operatorname{Tr}\Bigl[(X^HAX)^{-1}X^HA\frac{\partial(X)}{\partial\Im X}\Bigr]\Bigr)\\
&= \det(X^HAX)\Bigl(AX(X^HAX)^{-1} - \bigl((X^HAX)^{-1}X^HA\bigr)^T\Bigr)
\end{align*}
Hence, the derivative yields
\[
\frac{\partial\det(X^HAX)}{\partial X}
= \frac{1}{2}\Bigl(\frac{\partial\det(X^HAX)}{\partial\Re X} - i\frac{\partial\det(X^HAX)}{\partial\Im X}\Bigr)
= \det(X^HAX)\bigl((X^HAX)^{-1}X^HA\bigr)^T
\]
and the complex conjugate derivative yields
\[
\frac{\partial\det(X^HAX)}{\partial X^*}
= \frac{1}{2}\Bigl(\frac{\partial\det(X^HAX)}{\partial\Re X} + i\frac{\partial\det(X^HAX)}{\partial\Im X}\Bigr)
= \det(X^HAX)\,AX(X^HAX)^{-1}
\]
Notice, for real X, A, the sum of (249) and (250) is reduced to (54). Similar calculations yield
\[
\frac{\partial\det(XAX^H)}{\partial X}
= \frac{1}{2}\Bigl(\frac{\partial\det(XAX^H)}{\partial\Re X} - i\frac{\partial\det(XAX^H)}{\partial\Im X}\Bigr)
= \det(XAX^H)\bigl(AX^H(XAX^H)^{-1}\bigr)^T \qquad (570)
\]
and
\[
\frac{\partial\det(XAX^H)}{\partial X^*}
= \frac{1}{2}\Bigl(\frac{\partial\det(XAX^H)}{\partial\Re X} + i\frac{\partial\det(XAX^H)}{\partial\Im X}\Bigr)
= \det(XAX^H)(XAX^H)^{-1}XA \qquad (571)
\]

References

[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studenterlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November 2003.
[3] S. Barnet. Matrices. Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.
[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22, 2002. Notes.
[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. Parts F and H.
[7] M. Brookes. Matrix Reference Manual, 2004. Website, May 20, 2004.
[8] K. Conradsen. En introduktion til statistik. IMM lecture notes, 1984.
[9] Mads Dyrholm. Some matrix results, 2004. Website, August 23, 2004.
[10] F. A. Nielsen. Formula. Neuro Research Unit and Technical University of Denmark, 2002.
[11] A. B. Gelman, J. S. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and Hall / CRC, 1995.
[12] Gene H. Golub and Charles F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[13] Robert M. Gray. Toeplitz and circulant matrices: A review. Technical report, Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305, August 2002.
[14] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition, 2002.
[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[16] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. Academic Press Ltd., 1979.
[17] Mathpages on "Eigenvalue Problems and Matrix Invariants", http://www.mathpages.com/home/kmath128.htm
[18] Carl D. Meyer. Generalized inversion of modified matrices. SIAM Journal of Applied Mathematics, 24(3):315-323, May 1973.
[19] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.
[20] Daniele Mortari. Ortho-Skew and Ortho-Sym Matrix Trigonometry. John Lee Junkins Astrodynamics Symposium, AAS 03-265, May 2003. Texas A&M University, College Station, TX.
[21] L. Parra and C. Spence. Convolutive blind separation of non-stationary sources. IEEE Transactions on Speech and Audio Processing, pages 320-327, May 2000.
[22] Kaare Brandt Petersen, Jiucang Hao, and Te-Won Lee. Generative and filtering approaches for overcomplete representations. Neural Information Processing - Letters and Reviews, vol. 8(1), 2005.
[23] John G. Proakis and Dimitris G. Manolakis. Digital Signal Processing. Prentice-Hall, 1996.
[24] Laurent Schwartz. Cours d'Analyse, volume II. Hermann, Paris, 1967.
As referenced in [14].
[25] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons, 1982.
[26] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons, 2002.
[27] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974.
[28] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing Research Group, Information Engineering, Aston University, UK, August 2002. Notes.
[29] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice Hall, 1993.
[30] Max Welling. The Kalman Filter. Lecture Note.
[31] Wikipedia on minors: "Minor (linear algebra)", http://en.wikipedia.org/wiki/Minor_(linear_algebra)
[32] Zhaoshui He, Shengli Xie, et al. "Convolutive blind source separation in frequency domain based on sparse representation". IEEE Transactions on Audio, Speech and Language Processing, vol. 15(5):1551-1563, July 2007.
[33] Karim T. Abou-Moustafa. On Derivatives of Eigenvalues and Eigenvectors of the Generalized Eigenvalue Problem. McGill Technical Report, October 2010.
[34] Mohammad Emtiyaz Khan. Updating Inverse of a Matrix When a Column is Added/Removed. Emt CS, UBC, February 27, 2008.

Index

Anti-symmetric, 54
Block matrix, 46
Chain rule, 15
Cholesky-decomposition, 32
Co-kurtosis, 34
Co-skewness, 34
Condition number, 62
Cramers Rule, 29
Derivative of a complex matrix, 24
Derivative of a determinant, 8
Derivative of a trace, 12
Derivative of an inverse, 9
Derivative of symmetric matrix, 15
Derivatives of Toeplitz matrix, 16
Dirichlet distribution, 37
Eigenvalues, 30
Eigenvectors, 30
Exponential Matrix Function, 59
Gaussian, conditional, 40
Gaussian, entropy, 44
Gaussian, linear combination, 41
Gaussian, marginal, 40
Gaussian, product of densities, 42
Generalized inverse, 21
Hadamard inequality, 52
Hermitian, 48
Idempotent, 49
Kronecker product, 59
LDL decomposition, 33
LDM-decomposition, 33
Linear regression, 28
LU decomposition, 32
Lyapunov Equation, 30
Moore-Penrose inverse, 21
Multinomial distribution, 37
Nilpotent, 49
Norm of a matrix, 61
Norm of a vector, 61
Normal-Inverse Gamma distribution, 37
Normal-Inverse Wishart distribution, 39
Orthogonal, 49
Power series of matrices, 58
Probability matrix, 55
Pseudo-inverse, 21
Schur complement, 41, 47
Single entry matrix, 52
Singular Valued Decomposition (SVD), 31
Skew-Hermitian, 48
Skew-symmetric, 54
Stochastic matrix, 55
Student-t, 37
Sylvester's Inequality, 62
Symmetric, 54
Taylor expansion, 58
Toeplitz matrix, 54
Transition matrix, 55
Trigonometric functions, 59
Unipotent, 49
Vandermonde matrix, 57
Vec operator, 59, 60
Wishart distribution, 38
Woodbury identity, 18