
Nonlinear econometrics for finance HOMEWORK 2
Detailed solutions
Functions, minimization/maximization and GMM
This homework consists of a review of Matlab functions and minimization/maximization (Question 1). Matlab functions and minimization are then used (in Question 2) to estimate the consumption CAPM using GMM. You will use maximization in the next homework.


Instructions. For Question 1, you should only provide the Matlab code. For Question 2, you should provide two files: one with answers to all questions and one with your Matlab code. Please make sure that all codes can run properly.
Problem 1. (30 points.) Use Matlab to answer the following questions.
1. (5 points.) Write a function in Matlab that takes as input a 2 × 1 vector x = (x1, x2)
and returns the value
y = x1^2 + 3*x2^2 + 2*x1*x2^(1/2)
(Hint: You can find useful information about the definition of functions by clicking here: functions.)
Answer. See Matlab file homework2 q1.m. The function is
function f = quadratic_function(x)
    if length(x) > 2
        fprintf('Error, x should be a 2-dimensional vector\n')
    end
    f = x(1)^2 + 3*x(2)^2 + 2*x(1)*(x(2)^0.5);
end

Clearly, the error message in the function above is not necessary.
2. (3 points.) Using Matlab, evaluate the value of the function when x = (1, 1). What if x = (3, 2)?

Answer. See Matlab file homework2 q1.m. We just compute the value of the function at the corresponding vector x.
---- Question 2 ----------
The value of the quadratic function at (1,1) is 6.00
The value of the quadratic function at (3,2) is 29.49
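As a quick cross-check (a sketch in Python rather than Matlab, using the same formula), the two values can be reproduced directly:

```python
import math

def quadratic_function(x1, x2):
    # Same objective as the Matlab function: x1^2 + 3*x2^2 + 2*x1*sqrt(x2)
    return x1**2 + 3*x2**2 + 2*x1*math.sqrt(x2)

print(round(quadratic_function(1, 1), 2))   # 6.0
print(round(quadratic_function(3, 2), 2))   # 29.49
```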
3. (5 points.) Write a code that finds the minimizer (x∗1,x∗2) and the minimum f(x∗1,x∗2) of the function. (Hint: You should use the command fminsearch. You can find useful information about this command by clicking here: fminsearch.)
Answer. See Matlab file homework2 q1.m. The code is as follows. (You can change the options, if you want.)
x0 = [2,10]; % starting value (try different starting values ...)
options = optimset('Display','off');
[xmin,fmin] = fminsearch(@quadratic_function, x0, options);
fprintf('---- Question 3 ---------- \n')
fprintf('The value of the minimizer (first component) is %2.2f \n', xmin(1));
fprintf('The value of the minimizer (second component) is %2.2f \n', xmin(2));
fprintf('The value of the function at the minimum is %4.2f \n', fmin);
The value of the minimizer (first component) is -0.41
The value of the minimizer (second component) is 0.17
The value of the function at the minimum is -0.08
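These numbers can be verified analytically. Setting the partial derivatives to zero gives 2*x1 + 2*sqrt(x2) = 0 and 6*x2 + x1/sqrt(x2) = 0, so x2* = 1/6 ≈ 0.17, x1* = -sqrt(1/6) ≈ -0.41 and f* = -1/12 ≈ -0.08. A quick check of the first-order conditions (a Python sketch):

```python
import math

x2 = 1.0 / 6.0            # candidate minimizer from the first-order conditions
x1 = -math.sqrt(x2)
fmin = x1**2 + 3*x2**2 + 2*x1*math.sqrt(x2)

g1 = 2*x1 + 2*math.sqrt(x2)       # d f / d x1, should be ~0
g2 = 6*x2 + x1 / math.sqrt(x2)    # d f / d x2, should be ~0
print(round(x1, 2), round(x2, 2), round(fmin, 2))   # -0.41 0.17 -0.08
```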
4. (5 points.) Now, write another function, that takes 3 inputs: 1) a N × 1 vector of parameters θ, 2) a N ×1 vector x, and 3) a N ×N matrix A. The output of the function is z = (x − θ)⊤A(x − θ), where the notation x⊤ is the transpose of x. We want to minimize this function with respect to θ. Use fminsearch to find the vector θ that minimizes this function when the vector x and the matrix A are
1 0 0 0 A=0 1 0 0 0 0 1 0
(Hint: you should think of θ as a vector of variables while x,A is your “data”. You can add “data” in fminsearch by just adding inputs after the options.)
Answer. See Matlab file homework2 q1.m. The function is

function f = quadratic_form(theta,x,A)
    f = (x-theta)'*A*(x-theta);
end
The script for the minimization is
x = [1 2 3 4]';
A = eye(4);
theta0 = [20 20 20 20]'; % starting value (try different starting values ...)
options = optimset('Display', 'iter');
[thetamin,fmin] = fminsearch(@quadratic_form, theta0, options, x, A);
fprintf('---- Question 4 ---------- \n')
fprintf('The value of the minimizer (first component) is %2.2f \n', thetamin(1));
fprintf('The value of the minimizer (second component) is %2.2f \n', thetamin(2));
fprintf('The value of the minimizer (third component) is %2.2f \n', thetamin(3));
fprintf('The value of the minimizer (fourth component) is %2.2f \n', thetamin(4));
fprintf('The value of the function at the minimum is %4.2f \n', fmin);
Notice that we added the data x and A as the two last inputs in the fminsearch algorithm:
---- Question 4 ----------
The value of the minimizer (first component) is 1.00
The value of the minimizer (second component) is 2.00
The value of the minimizer (third component) is 3.00
The value of the minimizer (fourth component) is 4.00
The value of the function at the minimum is 0.00
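The result is no accident: since A is positive definite, z = (x − θ)⊤A(x − θ) ≥ 0 with equality exactly at θ = x, so the minimizer must equal x and the minimum must be zero. A small Python sketch of the same quadratic form:

```python
def quadratic_form(theta, x, A):
    # z = (x - theta)' A (x - theta), written out with plain loops
    d = [xi - ti for xi, ti in zip(x, theta)]
    n = len(d)
    return sum(d[i] * A[i][j] * d[j] for i in range(n) for j in range(n))

x = [1.0, 2.0, 3.0, 4.0]
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # eye(4)

print(quadratic_form(x, x, A))              # 0.0 at theta = x
print(quadratic_form([0, 0, 0, 0], x, A))   # 30.0 = 1 + 4 + 9 + 16
```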
5. (2 points.) Use fminsearch to find the vector θ that minimizes this function when the vector x and the matrix A are
1 0.5 0.2 0.3 A=0.5 1 0.4 0.5 0.2 0.4 1 0.2
0.3 0.5 0.2 1
Answer. The script for the minimization is

x = [1 2 3 4]';
A = [1 0.5 0.2 0.3; 0.5 1 0.4 0.5; 0.2 0.4 1 0.2; 0.3 0.5 0.2 1];
theta0 = [20 20 20 20]'; % initial value
options = optimset('Display', 'off');
[thetamin,fmin,exitflag,output] = fminsearch(@quadratic_form, theta0, options, x, A);
fprintf('---- Question 5 ---------- \n')
fprintf('The value of the minimizer (first component) is %2.2f \n', thetamin(1));
fprintf('The value of the minimizer (second component) is %2.2f \n', thetamin(2));
fprintf('The value of the minimizer (third component) is %2.2f \n', thetamin(3));
fprintf('The value of the minimizer (fourth component) is %2.2f \n', thetamin(4));
fprintf('The value of the function at the minimum is %4.2f \n', fmin);
Notice that we added the data x and A as the two last inputs in the fminsearch
algorithm.
---- Question 5 ----------
The value of the minimizer (first component) is 1.00
The value of the minimizer (second component) is 2.00
The value of the minimizer (third component) is 3.00
The value of the minimizer (fourth component) is 4.00
The value of the function at the minimum is 0.00
6. (5 points) Now, write a new function that takes 2 inputs: 1) A T × 1 vector x and 2) a scalar θ. The output of the function is
L = (1/T) Σ_{t=1}^{T} [x_t log(θ) + (1 − x_t) log(1 − θ)]    (1)

Answer. The function is

function L = like_bernoulli2(theta,x)
    L = -mean( x.*log(theta) + (1-x).*log(1-theta) );
end

Note that I have flipped the sign of the function, so as to compute its value with opposite sign. This is important if you want to maximize the function, rather than minimize it. You still use the fminsearch algorithm but, by inverting the sign of the function, you obtain the maximum.

7. (5 points) Generate the vector x using the following code in Matlab.
T = 10000;
x = random('Binomial', 1, 0.3, 1, T)';
Compute the value of θ that maximizes the function in Eq. (1) using fminsearch in Matlab. (Hint: You need to use the fact that
max_θ f(θ) = − min_θ (−f(θ)).
In other words, computing the maximum of a function is equivalent to computing the minimum of the function with a minus sign in front of it.)
Answer. The script for the maximization is the following:

T = 10000;
x = random('Binomial', 1, 0.3, 1, T)';
options = optimset('Display', 'iter');
thetahat = fminsearch(@like_bernoulli2, 0.2, options, x);
fprintf('---- Question 7 ---------- \n')
fprintf('The value of the minimizer is %2.3f \n', thetahat)
Notice that we added the data x as the last input in fminsearch.
---- Question 7 ----------
The value of the minimizer is 0.306

An observation about Eq. (1). Previously, we maximized this function but we did not discuss its logic. It turns out that Eq. (1) is the (standardized, log) likelihood of a Bernoulli sample. Let us see. Consider a sample (x1, x2, …, xT ) of Bernoulli random variables with T observations. As you know from your statistics classes, these are random variables which take on the value 1 with probability p and the value 0 with probability 1 − p. Hence,
L({x}, p) = p(x_T, x_{T−1}, ..., x_1)
          = p(x_T) p(x_{T−1}) ... p(x_1)
          = Π_{t=1}^{T} p(x_t)
          = Π_{t=1}^{T} p^{x_t} (1 − p)^{(1 − x_t)},

where the second equality uses independence across observations.
Note that p(xt) = pxt(1−p)(1−xt) because, if xt = 1, we obtain p. If xt = 0, we obtain (1−p). This is consistent with the Bernoulli nature of the random variables. We can now compute the (standardized, log) likelihood, thereby confirming Eq. (1):
(1/T) log L({x}, p) = (1/T) log ( Π_{t=1}^{T} p^{x_t} (1 − p)^{(1 − x_t)} )
                    = (1/T) Σ_{t=1}^{T} log( p^{x_t} (1 − p)^{(1 − x_t)} )
                    = (1/T) Σ_{t=1}^{T} [ x_t log p + (1 − x_t) log(1 − p) ]
                    = ( (1/T) Σ_{t=1}^{T} x_t ) log p + ( (1/T) Σ_{t=1}^{T} (1 − x_t) ) log(1 − p).
What you did in this assignment is, therefore, maximizing a Bernoulli likelihood with respect to θ = p. Not surprisingly, the maximization returned the true value p = 0.3. Maximum Likelihood (ML) estimation gives you consistent estimates of the parameters.
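The first-order condition of Eq. (1) makes this explicit: differentiating with respect to θ and setting the result to zero gives θ̂ = (1/T) Σ x_t, the sample mean, which is why the estimate lands near 0.3. A Python sketch (with Python's random module standing in for Matlab's random):

```python
import math
import random

random.seed(0)
T = 10000
x = [1 if random.random() < 0.3 else 0 for _ in range(T)]   # Bernoulli(0.3) draws

def neg_loglik(theta, x):
    # Negative of Eq. (1), mirroring like_bernoulli2
    return -sum(xt * math.log(theta) + (1 - xt) * math.log(1 - theta)
                for xt in x) / len(x)

theta_hat = sum(x) / len(x)   # analytic maximizer: the sample mean
# The sample mean should give a (weakly) lower negative log-likelihood
# than any nearby value of theta:
print(neg_loglik(theta_hat, x) <= neg_loglik(theta_hat + 0.05, x))   # True
print(neg_loglik(theta_hat, x) <= neg_loglik(theta_hat - 0.05, x))   # True
```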

Problem 2. (70 points.) Consider, as we did in class, a representative investor who lives for two periods (t and t+1) and has income et in period t and et+1 in period t+1. The utility function of the representative investor is:
U(ct, ct+1) = u(ct) + βEt[u(ct+1)].
The investor can invest in an asset by buying θ shares at the unit price pt. The asset’s payoff xt+1 = pt+1 + dt+1 in the second period is uncertain. The investor chooses how many units (θ) of the asset to buy in order to maximize her/his utility function:
max u(ct) + βEt[u(ct+1)], θ
subject to the income/wealth constraints
c_t = e_t − θ p_t,
c_{t+1} = e_{t+1} + θ x_{t+1}.

1. (2 points.) Assume the investor has a CRRA utility:

u(c_t) = c_t^{1−γ} / (1 − γ).
Derive the economy’s pricing equations both in terms of prices and in terms of returns.
Answer. The maximization problem of the representative agent is

max_θ  (e_t − θ p_t)^{1−γ} / (1 − γ) + β E_t[ (e_{t+1} + θ x_{t+1})^{1−γ} / (1 − γ) ].

We take a derivative with respect to θ, and set it equal to zero, to obtain the following equation (first-order condition):

p_t c_t^{−γ} = E_t[ β c_{t+1}^{−γ} x_{t+1} ].
Dividing both sides by c_t^{−γ}, we can rewrite the pricing equation as

p_t = E_t[ β (c_{t+1}/c_t)^{−γ} x_{t+1} ].
This is a pricing equation in terms of prices. If we divide both sides by pt we obtain the pricing equation in terms of returns, rather than price levels:
1 = E_t[ β (c_{t+1}/c_t)^{−γ} (1 + R_{t+1}) ].

This equation holds for any time period t. Using the law of iterated expectation and subtracting 1 from each side, we can write the unconditional expectation of the pricing errors as being zero:
E[ β (c_{t+1}/c_t)^{−γ} (1 + R_{t+1}) − 1 ] = 0.
This makes sense: if the model is correctly specified, expected pricing errors should be zero. Importantly, expected pricing errors should be zero for all assets in the economy. Suppose we have N assets, then we can use N equations as moment conditions for estimation:
E[ β (c_{t+1}/c_t)^{−γ} (1 + R^i_{t+1}) − 1 ] = 0   for i = 1, ..., N.
2. (2 points.) Use the return equation to derive estimable moment conditions.
Answer. Let us define the pricing error for asset i at time t as

g(X^i_{t+1}, β, γ) = β (c_{t+1}/c_t)^{−γ} (1 + R^i_{t+1}) − 1,

where X^i_{t+1} = (c_{t+1}/c_t, R^i_{t+1}). Our (estimable) sample moment conditions can be written as the sample means of the pricing errors g(X^i_{t+1}, β, γ), that is
g_T(β, γ) = [ (1/T) Σ_{t=1}^{T−1} [ β (c_{t+1}/c_t)^{−γ} (1 + R^1_{t+1}) − 1 ]
              (1/T) Σ_{t=1}^{T−1} [ β (c_{t+1}/c_t)^{−γ} (1 + R^2_{t+1}) − 1 ]
              ...
              (1/T) Σ_{t=1}^{T−1} [ β (c_{t+1}/c_t)^{−γ} (1 + R^N_{t+1}) − 1 ] ].

The GMM estimator solves the following minimization:
(β̂_T, γ̂_T) = argmin_{β,γ} g_T(β, γ)⊤ W_T g_T(β, γ).

3. (6 points.) The file ccapmmonthlydata.xls contains monthly data (not quarterly, as used in the sample GMM code on OneDrive) on consumption growth and asset returns from February 1959 to November 1993. The first column contains the date and the second column contains the time series for consumption growth c_{t+1}/c_t. Columns 3-12 are your asset returns for 10 assets.

Use the ccapmmonthlydata.xls data on 10 portfolios to estimate the parameters of the C-CAPM using the GMM estimator. Let N denote the number of assets/portfolios and let d be the number of parameters to estimate.

(Hint: note that the data contains consumption growth c_{t+1}/c_t for t = 1, ..., T and not consumption levels c_t for t = 1, ..., T. You should, therefore, modify the code I provided in order to calculate the moments correctly.)
Compute the first-stage GMM estimates of the d model parameters using the weight matrixWT =IN.
Answer. The estimates corresponding to the first stage are contained in the following table, in the first column.

      Stage 1    Stage 2    Std Error    t stats    Std Error 2    t stats 2
β     0.57724    0.87029    0.2023       4.302      0.2023         4.302
γ     —          47.871     61.227       0.78186    61.227         0.78186

Note: the values of Std Error and t stats are computed using the symbolic math toolbox in Matlab. The values of Std Error 2 and t stats 2 are computed using analytical derivatives (i.e., derivatives computed by hand).
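To make the first-stage objective concrete, here is a hedged Python sketch: simulated consumption growth and returns stand in for ccapmmonthlydata.xls (whose contents are not reproduced here), and the code builds g_T(β, γ) and the objective Q_T = g_T⊤ W_T g_T with W_T = I_N.

```python
import random

random.seed(1)
T, N = 420, 10
# Hypothetical data: gross consumption growth c_{t+1}/c_t and net returns R^i_t
cg = [1.0 + random.gauss(0.002, 0.005) for _ in range(T)]
R = [[random.gauss(0.008, 0.04) for _ in range(T)] for _ in range(N)]

def gT(beta, gamma):
    # Sample moments: mean of beta * (c_{t+1}/c_t)^(-gamma) * (1 + R^i) - 1
    return [sum(beta * cg[t] ** (-gamma) * (1.0 + R[i][t]) - 1.0
                for t in range(T)) / T for i in range(N)]

def qT(theta):
    # First-stage GMM objective with W_T = identity
    g = gT(theta[0], theta[1])
    return sum(gi * gi for gi in g)

# At beta = 1, gamma = 0 each moment collapses to the mean return of asset i
print(qT([1.0, 0.0]) >= 0.0)   # True: the objective is a sum of squares
```

Minimizing qT over (β, γ), e.g. with a Nelder-Mead routine, mirrors what fminsearch does in the Matlab code.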
4. (10 points.) Compute the second-stage estimates by re-estimating the parameters us- ing the optimal weight matrix. Assume your data are i.i.d. The second-stage estimates should be used in all questions below.
Answer. The second-stage estimates are in the previous table, in the second column.
5. (5 points.) Interpret your estimation results in economic terms. What do you learn
about the representative investor?
Answer. The estimate for the subjective discount factor is 0.87, which is quite reason-
able. We tend to discount future consumption (in this case, consumption in a month)
as compared to consumption today. However, the estimate for the risk-aversion coef-
ficient (γ􏰑 = 47) is rather high. This result points to the fact that the representative
investor is very risk averse. This is not overly surprising, as most people are risk averse.
However, this estimate is very imprecise, as shown by the large standard error (and
the small t statistic). Therefore, we should be cautious about this parameter estimate.
(You can interpret γ as a coefficient of relative risk-aversion, γ = −c u''(c)/u'(c); see my notes entitled CARA utility and risk aversion.pdf, if you are interested.)
6. (10 points.) Compute the asymptotic variance of the GMM estimator. Please note: the matrix Γ0 should be estimated without numerical differentiation (which is what the sample GMM code does). In other words, you should compute the gradient by hand.
Answer. The matrix Γ0 contains the derivatives of the pricing errors. In the case of 10 assets and 2 parameters, this is a 10 × 2 matrix. Note that the derivatives with respect to the parameters β and γ are

∂g^i_T(β, γ)/∂β = (1/T) Σ_{t=1}^{T−1} (c_{t+1}/c_t)^{−γ} (1 + R^i_{t+1}),

∂g^i_T(β, γ)/∂γ = (1/T) Σ_{t=1}^{T−1} [ −β log(c_{t+1}/c_t) (c_{t+1}/c_t)^{−γ} (1 + R^i_{t+1}) ],

for i = 1, ..., 10. Therefore, Γ̂_T is the 10 × 2 matrix whose i-th row is

[ (1/T) Σ_{t=1}^{T−1} (c_{t+1}/c_t)^{−γ̂_T} (1 + R^i_{t+1}),   −(1/T) Σ_{t=1}^{T−1} β̂_T log(c_{t+1}/c_t) (c_{t+1}/c_t)^{−γ̂_T} (1 + R^i_{t+1}) ],

with the derivatives evaluated at the second-stage estimates (β̂_T, γ̂_T).
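The analytic derivatives of the pricing errors can be sanity-checked numerically. The sketch below (Python, one simulated asset in place of the actual data) compares the two Γ̂_T entries for that asset against central finite differences of the sample moment:

```python
import math
import random

random.seed(2)
T = 500
cg = [1.0 + random.gauss(0.002, 0.005) for _ in range(T)]   # c_{t+1}/c_t (simulated)
R = [random.gauss(0.008, 0.04) for _ in range(T)]           # one asset's returns

def g(beta, gamma):
    # Sample pricing error for this asset
    return sum(beta * cg[t] ** (-gamma) * (1 + R[t]) - 1 for t in range(T)) / T

beta, gamma = 0.9, 5.0
# Analytic entries of Gamma_T for this asset:
dg_dbeta = sum(cg[t] ** (-gamma) * (1 + R[t]) for t in range(T)) / T
dg_dgamma = sum(-beta * math.log(cg[t]) * cg[t] ** (-gamma) * (1 + R[t])
                for t in range(T)) / T

h = 1e-6   # central finite differences
num_dbeta = (g(beta + h, gamma) - g(beta - h, gamma)) / (2 * h)
num_dgamma = (g(beta, gamma + h) - g(beta, gamma - h)) / (2 * h)
print(abs(dg_dbeta - num_dbeta) < 1e-6, abs(dg_dgamma - num_dgamma) < 1e-6)
```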
The resulting standard errors are in the table for Question 3.
7. (10 points.) Inference. Test whether β = 0.9.
Answer. To test this null hypothesis, we can use a standard t statistic. However, note that in large samples there is no need to use the t distribution. You use the normal distribution. (In fact, in large samples, the normal and t distributions are pretty much the same). See my Lecture Notes entitled Asymptotic theory and testing.pdf. So, we compute the test
(1+R) t+1
and employ normal critical values (or p-values based on the normal distribution). In the code provided with the solutions, I also compute the p-value with the t distribution, in order to show that the tests are equivalent. Indeed, we get a p-value of 0.883 using the normal distribution and a p-value of 0.883 using the t distribution. We conclude that we cannot reject the null hypothesis that β = 0.9.
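Using the point estimate and standard error from the table (β̂ = 0.87029, se = 0.2023), the computation can be sketched in a few lines of Python (Φ via math.erf):

```python
import math

beta_hat, se, beta0 = 0.87029, 0.2023, 0.9   # estimates from the table above
t_stat = (beta_hat - beta0) / se

# Two-sided p-value under the standard normal distribution
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
p_value = 2 * (1 - Phi(abs(t_stat)))
print(round(p_value, 3))   # 0.883, matching the reported p-value
```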

8. (10 points.) Inference. Test whether γ = 4 and β = 0.9 jointly.
Answer. This is a joint test that is virtually identical to the multiple linear restrictions test used in linear econometrics and in the first homework. However, here we do not employ the F distribution to compute critical values (or p-values). We use the Chi- squared distribution, because we work asymptotically. See, again, my Lecture Notes entitled Asymptotic theory and testing.pdf.
The null hypothesis for this test is
H0 :γ=4andβ=0.9.
We can write the null in matrix form using a matrix R and a vector r:

R = [1 0; 0 1],   r = [0.9; 4].

The linear restrictions become

R θ = r  ⟹  [1 0; 0 1] [β; γ] = [0.9; 4].
In class, we have shown that the GMM estimator θ̂_T is asymptotically normal. Therefore, R θ̂_T is also asymptotically normal, with mean pre-multiplied by R and variance-covariance matrix pre-multiplied by R and post-multiplied by R⊤:

R θ̂_T  →d  N( Rθ, R (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ R⊤ ).

We can now rewrite the same equation by subtracting the mean:

R θ̂_T − Rθ  →d  N( 0, R (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ R⊤ ).

By standardizing, we can then define the variable

Z₂ = [ R (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ R⊤ ]^{−1/2} ( R θ̂_T − Rθ )  →d  N(0, I₂)

and compute the inner product of Z₂. The inner product of N independent standard normal random variables is Chi-square with N degrees of freedom. In this case, we are computing the inner product of 2 independent (asymptotically) standard normal random variables, so this inner product is (asymptotically) Chi-square with 2 degrees of freedom:

Z₂⊤ Z₂ = ( R θ̂_T − Rθ )⊤ [ R (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ R⊤ ]⁻¹ ( R θ̂_T − Rθ )  →d  χ²₂.

Therefore, under the null hypothesis (Rθ = r):

Z₂⊤ Z₂ = ( R θ̂_T − r )⊤ [ R (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ R⊤ ]⁻¹ ( R θ̂_T − r )  →d  χ²₂.

If we replace the variance V(θ̂_T) = (1/T)(Γ₀⊤ Φ₀⁻¹ Γ₀)⁻¹ in the formula above with its consistent estimate V̂(θ̂_T) = (1/T)(Γ̂_T⊤ Φ̂⁻¹ Γ̂_T)⁻¹, the test statistic converges to the same χ²₂ distribution. This test is called the Wald test. The test statistic is

W = ( R θ̂_T − r )⊤ [ R (1/T)(Γ̂_T⊤ Φ̂⁻¹ Γ̂_T)⁻¹ R⊤ ]⁻¹ ( R θ̂_T − r )  →d  χ²₂.
The value of the statistic is 91.853 and the p-value is 0. Therefore, we reject the joint null hypothesis γ = 4 and β = 0.9 at any significance level.
Note: In general, if there are q restrictions, the test statistic will converge to a χ2q.
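The mechanics can be sketched in Python. Note that the 2 × 2 covariance matrix below is hypothetical (built from the reported standard errors only, with the parameter covariance set to zero), so the resulting W will not reproduce the reported 91.853, which uses the full estimated covariance; the sketch only illustrates the quadratic form and the χ²₂ p-value, which for 2 degrees of freedom is simply exp(−W/2):

```python
import math

theta_hat = [0.87029, 47.871]   # second-stage estimates (beta, gamma)
r = [0.9, 4.0]                  # null: beta = 0.9 and gamma = 4

# HYPOTHETICAL covariance estimate: squared standard errors, zero covariance
V = [[0.2023 ** 2, 0.0],
     [0.0, 61.227 ** 2]]

# With R = I_2 and V diagonal, the quadratic form reduces to a sum of squares
d = [theta_hat[i] - r[i] for i in range(2)]
W = d[0] ** 2 / V[0][0] + d[1] ** 2 / V[1][1]

p_value = math.exp(-W / 2)   # chi-square(2) survival function
print(W > 0 and 0 < p_value < 1)   # True
```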
9. (10 points.) Inference. Test whether 50β = γ.
Answer. We can set up the test in the same way as in the previous question (and test the null as a one-sided test of multiple restriction with q = 1). We could also set up a two-sided test of single restriction as in Question 7. (See, again, my Lecture Notes entitled Asymptotic theory and testing.pdf). Here, we use a one-sided test as in the answer to Question 6. The null hypothesis for this test is
H0 : 50β = γ.
We can write the null in matrix form using the matrix R and the vector r:

R = [50  −1],   r = 0.

The linear restriction becomes

R θ = r  ⟹  [50  −1] [β; γ] = 50β − γ = 0,

and the Wald statistic is

W = ( R θ̂_T − r )⊤ [ R (1/T)(Γ̂_T⊤ Φ̂⁻¹ Γ̂_T)⁻¹ R⊤ ]⁻¹ ( R θ̂_T − r )  →d  χ²₁.

The value of the test statistic is 0.0037 with a p-value of 0.9517. Thus, we fail to reject the null hypothesis.
10. (5 points.) Inference. Test for over-identifying restrictions.
Answer. We use the test statistic for over-identifying restrictions (the J test). For our sample, we have N = 10 and d = 2; therefore, the test statistic will be distributed as a Chi-square with 8 degrees of freedom:

T Q_T(θ̂_T)  →d  χ²_{N−d} = χ²₈.
The test statistic is 6.1137 with a p-value of 0.6345. Thus, we fail to reject the null (of zero pricing errors).
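The reported p-value can be reproduced with a short Python check: for an even number of degrees of freedom 2m, the chi-square survival function has the closed form exp(−x/2) Σ_{k=0}^{m−1} (x/2)^k / k!:

```python
import math

def chi2_sf_even(x, df):
    # Chi-square survival function for even df = 2m
    m = df // 2
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(m))

p_value = chi2_sf_even(6.1137, 8)   # test statistic and df from the J test
print(round(p_value, 4))   # 0.6345, matching the reported p-value
```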
