Math 558 Lecture #12

Sum of Squares
Definition 1

Let W be any subspace of V. The sum of squares for W means either ||P_W Y||^2 (for the random response Y) or ||P_W y||^2 (for the observed data y). The degrees of freedom for W are dim(W). The mean square for W is

MS(W) = (sum of squares for W) / (degrees of freedom for W) = ||P_W Y||^2 / dim(W).
Definition 2
The expected mean square for W is
EMS(W) = E(||P_W Y||^2) / dim(W).
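As a concrete illustration of Definition 1 (not from the lecture), the sketch below computes the sum of squares, degrees of freedom, and mean square for the simplest subspace, W = V_0, the span of the all-ones vector, where P_{V_0} y is the constant vector of the grand mean. The data are hypothetical.

```python
# Minimal sketch: MS(W) for W = V_0 (span of the all-ones vector).
# dim(V_0) = 1, and P_{V_0} y replaces every entry by the grand mean.
def mean_square_V0(y):
    n = len(y)
    ybar = sum(y) / n
    proj = [ybar] * n                # P_{V_0} y
    ss = sum(p * p for p in proj)    # ||P_{V_0} y||^2 = n * ybar^2
    df = 1                           # dim(V_0)
    return ss / df                   # MS(V_0)

y = [3.0, 5.0, 4.0, 8.0]             # hypothetical observations
print(mean_square_V0(y))             # n * ybar^2 = 4 * 5^2 = 100.0
```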

Sum of Squares
Let W = V_T. We have seen in Lecture 10 that

P_{V_T} Y = ∑_{i=1}^t (SUM_{T=i} / r_i) u_i.

Definition 3
The sum of squares for V_T is ||P_{V_T} Y||^2. It can be written as

||P_{V_T} Y||^2 = ( ∑_{i=1}^t (SUM_{T=i} / r_i) u_i ) . ( ∑_{i=1}^t (SUM_{T=i} / r_i) u_i )
               = ∑_{i=1}^t (SUM_{T=i})^2 / r_i,

since u_i . u_j = 0 for i ≠ j and u_i . u_i = r_i.
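The identity ||P_{V_T} y||^2 = ∑_{i=1}^t (SUM_{T=i})^2 / r_i can be checked numerically. The sketch below uses a hypothetical experiment with t = 2 treatments and replications r = (2, 3); it computes the squared length of the fitted vector directly and compares it with the formula.

```python
# Hypothetical small experiment: 2 treatments on 5 plots.
# u_i is the 0/1 indicator vector of the plots receiving treatment i,
# so ||u_i||^2 = r_i and P_{V_T} y = sum_i (SUM_{T=i} / r_i) u_i.
treatment = [1, 1, 2, 2, 2]           # treatment applied on each plot
y = [4.0, 6.0, 3.0, 5.0, 7.0]         # observed values (hypothetical)

levels = sorted(set(treatment))
sums = {i: sum(yw for yw, tw in zip(y, treatment) if tw == i) for i in levels}
reps = {i: treatment.count(i) for i in levels}

# Fitted values: each plot gets its treatment mean SUM_{T=i} / r_i.
fitted = [sums[tw] / reps[tw] for tw in treatment]

ss_direct = sum(f * f for f in fitted)                      # ||P_{V_T} y||^2
ss_formula = sum(sums[i] ** 2 / reps[i] for i in levels)    # crude SS
print(ss_direct, ss_formula)          # both equal 125.0 here
```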

Crude Sum of Squares
The quantity ∑_{i=1}^t (SUM_{T=i})^2 / r_i is called the crude sum of squares for treatments. The degrees of freedom for V_T are dim(V_T) = t. Therefore, the mean square for V_T is

MS(V_T) = (1/t) ∑_{i=1}^t (SUM_{T=i})^2 / r_i.

From Lecture 9 we have the result

E(||P_W Y||^2) = σ^2 dim(W) + ||P_W τ||^2.

Replacing W by V_T we have

E(||P_{V_T} Y||^2) = σ^2 dim(V_T) + ||P_{V_T} τ||^2.

Crude Sum of Squares
E(||P_{V_T} Y||^2) = σ^2 dim(V_T) + ||P_{V_T} τ||^2
                   = t σ^2 + ||τ||^2,

since τ = ∑_{i=1}^t τ_i u_i lies in V_T, so P_{V_T} τ = τ and ||τ||^2 = ∑_{i=1}^t r_i τ_i^2. The expected mean square for V_T is

EMS(V_T) = σ^2 + (1/t) ∑_{i=1}^t r_i τ_i^2.

Sum of Squares
Now consider V_T^⊥, the subspace of V orthogonal to V_T. We have seen that any vector y in V can be expressed as the sum

y = P_{V_T} y + P_{V_T^⊥} y.

Also, P_{V_T} y = ∑_{i=1}^t τ̂_i u_i. Therefore,

P_{V_T^⊥} y = y − P_{V_T} y = y − ∑_{i=1}^t τ̂_i u_i.

Here y is the vector of observed values and ∑_{i=1}^t τ̂_i u_i is the vector of fitted values. The difference gives the vector of residuals.

Sum of Squares
||y||^2 = ||P_{V_T} y||^2 + ||P_{V_T^⊥} y||^2, where

||y||^2 = ∑_ω y_ω^2 is the total sum of squares;
||P_{V_T} y||^2 is the crude sum of squares for treatments;
||P_{V_T^⊥} y||^2 is the sum of squares for residuals.

The degrees of freedom for residuals are dim(V_T^⊥) = N − t, as dim(V_T^⊥) = dim(V) − dim(V_T) = N − t. Therefore, the residual mean square is

MS(residual) = (sum of squares for residuals) / (N − t).
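The decomposition above can be verified numerically. The sketch below reuses the hypothetical data from before (t = 2 treatments, N = 5 plots): it forms the fitted and residual vectors and checks that the total sum of squares splits into the crude and residual sums of squares.

```python
# Sketch checking ||y||^2 = ||P_{V_T} y||^2 + ||P_{V_T^perp} y||^2
# on hypothetical data with t = 2 treatments and N = 5 plots.
treatment = [1, 1, 2, 2, 2]
y = [4.0, 6.0, 3.0, 5.0, 7.0]
N, t = len(y), len(set(treatment))

means = {i: sum(yw for yw, tw in zip(y, treatment) if tw == i)
            / treatment.count(i) for i in set(treatment)}
fitted = [means[tw] for tw in treatment]            # P_{V_T} y
residual = [yw - fw for yw, fw in zip(y, fitted)]   # P_{V_T^perp} y

total_ss = sum(yw * yw for yw in y)        # ||y||^2
crude_ss = sum(fw * fw for fw in fitted)   # crude SS for treatments
resid_ss = sum(rw * rw for rw in residual) # SS for residuals
ms_residual = resid_ss / (N - t)           # df = N - t

print(total_ss, crude_ss + resid_ss, ms_residual)
```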

Sum of Squares
E(||P_{V_T^⊥} Y||^2) = σ^2 dim(V_T^⊥) + ||P_{V_T^⊥} τ||^2 = σ^2 (N − t),

since τ lies in V_T, so P_{V_T^⊥} τ = 0. Therefore, EMS(residual) = σ^2.

Variance and Standard Errors
From Lecture 11 we know that the variance of the estimator x.Y of ∑_{i=1}^t λ_i τ_i is σ^2 ∑_{i=1}^t λ_i^2 / r_i. Further,

E( (∑_{i=1}^t λ_i^2 / r_i) × MS(residual) ) = σ^2 ∑_{i=1}^t λ_i^2 / r_i,

so (∑_{i=1}^t λ_i^2 / r_i) × MS(residual) is an unbiased estimator of σ^2 ∑_{i=1}^t λ_i^2 / r_i. The standard error of x.Y is

√( (∑_{i=1}^t λ_i^2 / r_i) × MS(residual) ).
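The standard-error formula is a one-liner in code. A minimal sketch, using a hypothetical contrast λ = (1, −1), replications r = (2, 3), and the MS(residual) = 10/3 obtained from the earlier hypothetical data:

```python
import math

# Sketch: standard error of the estimator x.Y of sum_i lambda_i tau_i,
# SE = sqrt( (sum_i lambda_i^2 / r_i) * MS(residual) ).
def contrast_se(lam, r, ms_residual):
    return math.sqrt(sum(l * l / ri for l, ri in zip(lam, r)) * ms_residual)

lam = [1.0, -1.0]      # hypothetical contrast tau_1 - tau_2
r = [2, 3]             # hypothetical replications
ms = 10.0 / 3.0        # MS(residual) from the earlier example
print(contrast_se(lam, r, ms))   # sqrt((1/2 + 1/3) * 10/3) = 5/3
```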

Variance and Standard Errors
The standard error of τ̂_i is

√( MS(residual) / r_i ),

and the standard error of a difference τ̂_i − τ̂_j is

√( MS(residual) (1/r_i + 1/r_j) ).
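These two special cases can be sketched directly, with hypothetical values for MS(residual) and the replications:

```python
import math

# Sketch of the two standard errors above (hypothetical inputs):
# SE(tau_hat_i)             = sqrt( MS(residual) / r_i )
# SE(tau_hat_i - tau_hat_j) = sqrt( MS(residual) * (1/r_i + 1/r_j) )
def se_single(ms_residual, r_i):
    return math.sqrt(ms_residual / r_i)

def se_difference(ms_residual, r_i, r_j):
    return math.sqrt(ms_residual * (1.0 / r_i + 1.0 / r_j))

ms = 2.5                          # hypothetical MS(residual)
print(se_single(ms, 4))           # sqrt(2.5 / 4)
print(se_difference(ms, 4, 5))    # sqrt(2.5 * (1/4 + 1/5))
```

Note that se_difference is the contrast formula with λ_i = 1, λ_j = −1, and all other λ equal to zero.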
