Estimating a mean recursively
To estimate a mean recursively, we consider the data set in Table 1.
Table 1

N   y   ymean
1   1   1
2   3   2
3   5   3
4   3   3
5   3   3
6   9   4
7   4   4
8   4   4
Given the formula for the mean

$$\bar{y}_N = \frac{1}{N}\sum_{i=1}^{N} y_i,$$

we can rewrite this as

$$\bar{y}_N = \bar{y}_{N-1} + \frac{1}{N}\left(y_N - \bar{y}_{N-1}\right),$$

which gives the recursion relationship. Using the data in Table 1, we see that for line 6

$$\bar{y}_6 = \bar{y}_5 + \frac{1}{6}\left(y_6 - \bar{y}_5\right) = 3 + \frac{1}{6}(9 - 3) = 4,$$

which is the mean up to observation 6. As expected, once all of the data are in, the mean is identical to the mean of all observations. If we think of the sequence of numbers as a time series, with each new observation representing a new occasion, we could plot the means by occasion and observe how the mean develops over time.
To begin the recursion, we need to know the value of the mean at N = 1, which is trivially the value of the first observation. Note, however, that we could arbitrarily pick a number. If the sample were small, picking this number could have a dramatic effect on the overall mean. As the sample gets larger, the effect of this initial selection becomes less and less.
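As a minimal sketch, the recursion can be written in a few lines of Python, here applied to the y column of Table 1:

```python
# Recursive estimate of the mean, using the y values from Table 1.
y = [1, 3, 5, 3, 3, 9, 4, 4]

mean = y[0]  # start the recursion at the first observation
print(f"N=1  mean={mean:.3f}")
for n, obs in enumerate(y[1:], start=2):
    mean = mean + (obs - mean) / n  # mean_N = mean_{N-1} + (y_N - mean_{N-1})/N
    print(f"N={n}  mean={mean:.3f}")

# The final value equals the ordinary mean of all observations.
assert abs(mean - sum(y) / len(y)) < 1e-12
```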
Estimating a simple regression recursively
The recursion for the mean is relatively simple. For a linear regression, the problem becomes slightly more complex. To show one way of developing the recursion, we begin with a simple linear regression model. To additionally simplify the model, we will assume that the data have been centered, so the intercept of the model is 0.
The equation for the simple regression model is

$$y_i = \beta_i x_i + \varepsilon_i, \qquad i = 1, \dots, T. \tag{1}$$

Here the regression weight is indexed by i to indicate that we are interested in estimating the regression from the first i observations.
Just using the first r observations, the regression coefficient is

$$b_r = \left(\sum_{i=1}^{r} x_i^2\right)^{-1} \sum_{i=1}^{r} x_i y_i, \tag{2}$$
which can be rewritten as

$$b_r \sum_{i=1}^{r} x_i^2 = \sum_{i=1}^{r} x_i y_i = \sum_{i=1}^{r-1} x_i y_i + x_r y_r. \tag{3}$$

Since $\sum_{i=1}^{r} x_i^2 = \sum_{i=1}^{r-1} x_i^2 + x_r^2$ and $\sum_{i=1}^{r-1} x_i y_i = b_{r-1} \sum_{i=1}^{r-1} x_i^2$,
equation 3 can be rewritten as

$$b_r \sum_{i=1}^{r} x_i^2 = b_{r-1} \sum_{i=1}^{r-1} x_i^2 + x_r y_r = b_{r-1}\left(\sum_{i=1}^{r} x_i^2 - x_r^2\right) + x_r y_r. \tag{4}$$

Dividing both sides by $\sum_{i=1}^{r} x_i^2$ gives us the recursion

$$b_r = b_{r-1} + \left(\sum_{i=1}^{r} x_i^2\right)^{-1} x_r \left(y_r - b_{r-1} x_r\right). \tag{5}$$
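A short Python sketch of recursion (5); the x and y values here are hypothetical, chosen only to illustrate the updating. The final value is checked against the closed-form estimate in equation (2):

```python
# Recursive estimate of the slope in a no-intercept simple regression.
# The data are hypothetical and assumed to be centered.
x = [1.0, 2.0, -1.0, 0.5, -2.0, 1.5]
y = [1.1, 2.3, -0.8, 0.4, -2.2, 1.7]

sxx = x[0] ** 2        # running sum of squares of x
b = x[0] * y[0] / sxx  # slope from the first observation alone
for xr, yr in zip(x[1:], y[1:]):
    sxx += xr ** 2
    b = b + xr * (yr - b * xr) / sxx  # equation (5)

# Matches the closed-form least-squares slope (equation 2).
b_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
assert abs(b - b_ols) < 1e-12
```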
The recursive residuals for this model are, of course, different from the set of residuals one would see in an ordinary regression model. Here, the prediction residual is

$$e_r = y_r - b_{r-1} x_r.$$
The recursive residuals are defined as

$$w_r = \frac{y_r - b_{r-1} x_r}{\sqrt{1 + x_r^2 \big/ \sum_{i=1}^{r-1} x_i^2}}, \qquad r = k+1, \dots, T,$$

where k is the number of estimated coefficients (here k = 1).
These residuals are independent. The recursion relationship

$$S_r = S_{r-1} + w_r^2 \tag{6}$$

gives the residual sums of squares.
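Continuing the sketch above, the recursive residuals and the running sum of squares in equation (6) can be computed in the same pass (the data are again hypothetical). At the end, S equals the residual sum of squares of the ordinary regression on all of the observations:

```python
import math

# Recursive residuals and S_r = S_{r-1} + w_r^2 for the no-intercept
# simple regression; k = 1, so the residuals start at r = 2.
x = [1.0, 2.0, -1.0, 0.5, -2.0, 1.5]
y = [1.1, 2.3, -0.8, 0.4, -2.2, 1.7]

sxx = x[0] ** 2
b = x[0] * y[0] / sxx
S = 0.0
for xr, yr in zip(x[1:], y[1:]):
    w = (yr - b * xr) / math.sqrt(1.0 + xr ** 2 / sxx)  # recursive residual
    S += w ** 2                                         # equation (6)
    sxx += xr ** 2
    b = b + xr * (yr - b * xr) / sxx                    # slope update (5)

# S matches the ordinary residual sum of squares on the full sample.
b_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
rss = sum((yi - b_ols * xi) ** 2 for xi, yi in zip(x, y))
assert abs(S - rss) < 1e-9
```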
To show how this works, we will estimate the regression coefficient for one independent and one dependent variable that appear in Table 1.
One of the common problems with all recursive estimators is their dependence on starting values. Since the recursion requires a prior value of the parameter to update, we need that prior value. As we will see, the effect of the choice of starting values depends on the quality of those starting values and on the number of observations in the sample. With more observations, the effect of the starting values diminishes. For many of the models we will deal with, some simple methods are available to improve the choice of starting values.
Estimating a multiple regression recursively
The multiple regression model is a straightforward extension of the simple regression model. Following the work of Brown et al. (1975), based on the work by Plackett, we will present a general solution that can be used for any regression problem in which the residuals can be assumed uncorrelated.
The multiple regression model is

$$y = X\beta + \varepsilon, \tag{7}$$

where y is the vector of dependent values, X is a design matrix which includes a column of 1's if the intercept is included, β is the vector of regression weights, and ε is the vector of residuals, which are assumed to be distributed $\varepsilon \sim N(0, \sigma_\varepsilon^2 I)$.
The least squares solution for the regression coefficients using just the first r observations is

$$b_r = \left(X_r^T X_r\right)^{-1} X_r^T Y_r, \tag{8}$$

where $X_r$ and $Y_r$ contain the first r rows of X and y.

Premultiplying both sides by $\left(X_r^T X_r\right)$ we get

$$\left(X_r^T X_r\right) b_r = X_r^T Y_r = X_{r-1}^T Y_{r-1} + x_r y_r = \left(X_{r-1}^T X_{r-1}\right) b_{r-1} + x_r y_r, \tag{9}$$

where $x_r$ is the column vector of predictor values for observation r and $y_r$ is the corresponding dependent value.
Since

$$X_r^T X_r = X_{r-1}^T X_{r-1} + x_r x_r^T, \tag{10}$$

we can substitute (10) into (9) to yield

$$\left(X_r^T X_r\right) b_r = \left(X_r^T X_r - x_r x_r^T\right) b_{r-1} + x_r y_r = \left(X_r^T X_r\right) b_{r-1} - x_r x_r^T b_{r-1} + x_r y_r.$$

Multiplying both sides by $\left(X_r^T X_r\right)^{-1}$ yields the recursion relationship,

$$b_r = b_{r-1} + \left(X_r^T X_r\right)^{-1} x_r \left(y_r - x_r^T b_{r-1}\right). \tag{11}$$
To show how the recursion works for a multiple regression, we will use it to calculate regression coefficients using both predictor variables in Table 1.
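A NumPy sketch of recursion (11), using simulated data in place of the Table 1 predictors; the inverse is recomputed from scratch at each step, which is wasteful but keeps the correspondence to equation (11) plain:

```python
import numpy as np

# Recursion (11) for the multiple-regression coefficients.
# The design matrix and coefficients are simulated for illustration.
rng = np.random.default_rng(0)
T, k = 20, 2
X = rng.normal(size=(T, k))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=T)

# Initialize from the first k observations (the smallest solvable system).
b = np.linalg.solve(X[:k].T @ X[:k], X[:k].T @ y[:k])
for r in range(k, T):
    xr, yr = X[r], y[r]
    XtX_inv = np.linalg.inv(X[: r + 1].T @ X[: r + 1])
    b = b + XtX_inv @ xr * (yr - xr @ b)  # equation (11)

# Matches ordinary least squares on all T observations.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(b, b_ols)
```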

To simplify the calculations, we typically set up a recursion for the inverse of the SSCP matrix of the predictors, $\left(X_r^T X_r\right)^{-1}$. The recursion originally presented by Plackett is

$$\left(X_r^T X_r\right)^{-1} = \left(X_{r-1}^T X_{r-1}\right)^{-1} - \frac{\left(X_{r-1}^T X_{r-1}\right)^{-1} x_r x_r^T \left(X_{r-1}^T X_{r-1}\right)^{-1}}{1 + x_r^T \left(X_{r-1}^T X_{r-1}\right)^{-1} x_r}. \tag{12}$$
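A sketch of Plackett's rank-one update (12), again on simulated data; it refreshes the inverse without re-inverting at each step:

```python
import numpy as np

# Update (X_r' X_r)^{-1} from (X_{r-1}' X_{r-1})^{-1} via equation (12).
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))

A_inv = np.linalg.inv(X[:3].T @ X[:3])  # inverse from the first k = 3 rows
for r in range(3, 10):
    xr = X[r]
    Ax = A_inv @ xr
    A_inv = A_inv - np.outer(Ax, Ax) / (1.0 + xr @ Ax)  # equation (12)

# Agrees with inverting the full SSCP matrix directly.
assert np.allclose(A_inv, np.linalg.inv(X.T @ X))
```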
The recursive residuals were presented by Brown et al. (1975) as

$$w_r = \frac{y_r - x_r^T b_{r-1}}{\sqrt{1 + x_r^T \left(X_{r-1}^T X_{r-1}\right)^{-1} x_r}}. \tag{13}$$
Given this formula, the recursive estimate of the residual sum of squares after fitting the first r observations is

$$S_r = S_{r-1} + w_r^2. \tag{14}$$
Once again, these residuals are different from the set of residuals one would get from an ordinary regression. However, once all of the observations are in, $S_T$ equals the residual sum of squares of the ordinary regression on all T observations, so $S_T/(T-k)$ estimates $\sigma_\varepsilon^2$.
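Putting the pieces together, the following sketch (simulated data once more) runs recursions (11), (12), (13), and (14) in a single pass and confirms that $S_T$ matches the residual sum of squares of the full-sample regression:

```python
import numpy as np

# One pass over the data: update the inverse (12), form the recursive
# residual (13), accumulate S_r (14), and update the coefficients (11).
rng = np.random.default_rng(2)
T, k = 25, 2
X = rng.normal(size=(T, k))
y = X @ np.array([0.8, -1.2]) + 0.2 * rng.normal(size=T)

A_inv = np.linalg.inv(X[:k].T @ X[:k])  # start from the first k observations
b = A_inv @ X[:k].T @ y[:k]
S = 0.0
for r in range(k, T):
    xr, yr = X[r], y[r]
    d = 1.0 + xr @ A_inv @ xr
    w = (yr - xr @ b) / np.sqrt(d)                         # equation (13)
    S += w ** 2                                            # equation (14)
    A_inv = A_inv - np.outer(A_inv @ xr, A_inv @ xr) / d   # equation (12)
    b = b + A_inv @ xr * (yr - xr @ b)                     # equation (11)

# S_T equals the RSS of the ordinary regression on all T observations.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.isclose(S, np.sum((y - X @ b_ols) ** 2))
```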
