
HDDA Tutorial: Matrices and Factor Analysis
Department of Econometrics and Business Statistics, Monash University
Tutorial 9
Consider the factor model
y = Λf + ξ


where y is a p × 1 vector of observed variables, f is an r × 1 vector of latent factors with r < p, Λ is a matrix of loadings and ξ is a p × 1 vector of idiosyncratic errors. Also assume that f ∼ N(0, I) and ξ ∼ N(0, Ψ).

1. What are the dimensions of Λ?

The matrix Λ must be a p × r matrix. It maps the r-dimensional factors into the higher p-dimensional space of the observed variables.

2. What is the expected value of y?

E[y] = E[Λf + ξ]          (1)
     = E[Λf] + E[ξ]       (2)
     = ΛE[f] + E[ξ]       (3)
     = Λ0 + 0 = 0         (4)

The key here is to recognise that Λ is not a random variable and so can be taken outside the expectation. In general, for data that are not mean zero, an intercept can be included.

3. Derive the variance-covariance matrix of y. Hint: you can use the matrix rule (AB)′ = B′A′.

Since E[y] = 0, the variance-covariance matrix of y is simply E[yy′].

E[yy′] = E[(Λf + ξ)(Λf + ξ)′]                               (5)
       = E[(Λf + ξ)((Λf)′ + ξ′)]                            (6)
       = E[(Λf + ξ)(f′Λ′ + ξ′)]                             (7)
       = E[Λff′Λ′ + Λfξ′ + ξf′Λ′ + ξξ′]                     (8)
       = E[Λff′Λ′] + E[Λfξ′] + E[ξf′Λ′] + E[ξξ′]            (9)
       = ΛE[ff′]Λ′ + ΛE[fξ′] + E[ξf′]Λ′ + E[ξξ′]            (10)

There are four expectations. First, E[ff′] is equal to I. The matrices E[fξ′] and E[ξf′] are made up of covariances between the factors and the idiosyncratic errors, which are assumed to be 0. Finally, E[ξξ′] is assumed to be a diagonal matrix Ψ. Putting all this together yields

E[yy′] = ΛΛ′ + Ψ
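As a quick sanity check of the result E[yy′] = ΛΛ′ + Ψ, the sketch below simulates a large sample from the factor model and compares the sample covariance of y with the model-implied covariance. This is only an illustrative sketch: the dimensions (p = 5, r = 2), the particular loadings, the idiosyncratic variances and the sample size are arbitrary assumptions, not values from the tutorial.

import numpy as np

# Illustrative sketch only: dimensions, loadings, variances and sample size
# are arbitrary assumptions, not values from the tutorial.
rng = np.random.default_rng(0)
p, r, n = 5, 2, 200_000                        # observed dim, number of factors, sample size

Lambda = rng.normal(size=(p, r))               # p x r loading matrix
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))   # diagonal idiosyncratic covariance matrix

f = rng.multivariate_normal(np.zeros(r), np.eye(r), size=n)  # f ~ N(0, I), one draw per row
xi = rng.multivariate_normal(np.zeros(p), Psi, size=n)       # xi ~ N(0, Psi)
y = f @ Lambda.T + xi                                        # y = Lambda f + xi, applied row-wise

implied = Lambda @ Lambda.T + Psi              # model-implied covariance
sample = np.cov(y, rowvar=False)               # sample covariance of the simulated y
print(np.abs(sample - implied).max())          # small: differs only by Monte Carlo error
print(np.abs(y.mean(axis=0)).max())            # sample mean of y is close to 0, matching E[y] = 0

The two printed numbers shrink towards zero as the sample size grows, which is exactly what the derivations in questions 2 and 3 predict.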