ECON3206/5206: Review of CLT & LLN

Copyright © University of New South Wales 2020. All rights reserved.


Course materials subject to Copyright
UNSW Sydney owns copyright in these materials (unless stated otherwise). The material is subject to copyright under Australian law and overseas under
international treaties.
The materials are provided for use by enrolled UNSW students. The materials, or any part, may not be copied, shared or distributed, in print or digitally,
outside the course without permission.
Students may only copy a reasonable portion of the material for personal research or study or for criticism or review. Under no circumstances may these
materials be copied or reproduced for sale or commercial purposes without prior written permission of UNSW Sydney.

Statement on class recording
To ensure the free and open discussion of ideas, students may not record, by any means, classroom lectures, discussion and/or activities without the
advance written permission of the instructor, and any such recording properly approved in advance can be used solely for the student's own private use.

WARNING: Your failure to comply with these conditions may lead to disciplinary action, and may give rise to a civil action or a criminal offence under the law.
THE ABOVE INFORMATION MUST NOT BE REMOVED FROM THIS MATERIAL.


ECON3206/5206: Review of CLT & LLN

School of Economics



Topic 1. Features of Some Financial Time Series

• LLN and CLT – pillars of all statistics
  – Let {Z1, Z2, …, ZT} be a set of independent RVs with common mean µ and variance σ².
  – Law of large numbers: the probability that the sample mean Z̄ = (1/T) ∑_{t=1}^{T} Zt differs from µ by more than any fixed amount converges to zero as T goes to infinity.
  – Central limit theorem: the distribution of √T (Z̄ − µ)/σ converges to N(0, 1) as T goes to infinity.
  – Note that Var(Z̄) = σ²/T (see Rule 8); the simulation sketch below illustrates this.
  What happens if {Z1, Z2, …, ZT} are correlated?
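As illustration, a minimal MATLAB/Octave-style simulation sketch (not part of the original slides; standard-normal Z's, so µ = 0 and σ² = 1, and the values of T and the number of replications are assumptions chosen for the demo) showing the sample-mean variance shrinking like σ²/T:

    % Draw R sample means, each based on T i.i.d. N(0,1) observations,
    % and compare their empirical variance with the theoretical sigma^2/T.
    R = 2000;
    for T = [10 100 1000]
      Zbar = mean(randn(T, R));   % 1 x R vector of sample means
      fprintf('T = %4d: var(Zbar) = %.5f, sigma^2/T = %.5f\n', T, var(Zbar), 1/T);
    end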


Distribution of Sample Mean

Consider N random variables X1, · · · , XN.

• Let's consider X̄ = (1/N) ∑_{k=1}^{N} Xk.
• X̄ is called the "sample mean" or the "empirical mean".
• X̄ is a random variable.

Suppose we observe values for X1, · · · , XN and calculate the empirical
mean of the observed values. That gives us one value for X̄. But the
value of X̄ changes depending on the observed values.

• Suppose we toss a fair coin N = 5 times and get H, H, H, T, T. Let
Xk = 1 when toss k comes up heads and Xk = 0 otherwise. Then X̄ = 3/5 = 0.6.
• Suppose we toss the coin another N = 5 times and get
T, T, H, T, H. Now X̄ = 2/5 = 0.4.
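In code, the same two calculations (a trivial MATLAB/Octave-style sketch of the example above):

    X1 = [1 1 1 0 0];   % H, H, H, T, T
    X2 = [0 0 1 0 1];   % T, T, H, T, H
    mean(X1)            % gives 0.6
    mean(X2)            % gives 0.4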

Distribution of Sample Mean

Toss a fair coin N = 5 times and calculate ∑_{k=1}^{N} Xk. Repeat many times, and plot a
histogram of the values. It is binomial Bin(N, 1/2).

• X=[];for i=1:10000,X=[X,sum((rand(1,5)<0.5))]; end; hist(X,50)

[Figure: histogram of ∑ Xk over 10,000 repetitions; values 0, 1, 2, 3, 4, 5]

Distribution of Sample Mean

Random variable X̄ = (1/N) ∑_{k=1}^{N} Xk.

• Suppose the Xk are independent and identically distributed (i.i.d.).
• Each Xk has mean E(Xk) = µ and variance Var(Xk) = σ². Then we can calculate the mean of X̄ as:

E(X̄) = E((1/N) ∑_{k=1}^{N} Xk) = (1/N) ∑_{k=1}^{N} E(Xk) = µ

NB: recall linearity of expectation: E(X + Y) = E(X) + E(Y) and E(aX) = aE(X).

• We say X̄ is an unbiased estimator of µ since E(X̄) = µ.

Distribution of Sample Mean

We can calculate the variance of X̄ as:

var(X̄) = var((1/N) ∑_{k=1}^{N} Xk) = (1/N²) ∑_{k=1}^{N} var(Xk) = σ²/N

NB: recall Var(aX) = a²Var(X) and Var(X + Y) = Var(X) + Var(Y) when X, Y are independent.

• As N increases, the variance of X̄ falls.
• Var(NX) = N²Var(X) for a single random variable X.
• But when we add together N independent random variables X1 + X2 + · · · + XN, the variance is only NVar(X) rather than N²Var(X).
• This is due to statistical multiplexing: small and large values of the Xi tend to cancel out for large N.

Weak Law of Large Numbers¹

Consider N independent identically distributed (i.i.d.) random variables X1, · · · , XN, each with mean µ and variance σ². Let X̄ = (1/N) ∑_{k=1}^{N} Xk. For any ε > 0:

P(|X̄ − µ| ≥ ε) → 0 as N → ∞

That is, X̄ concentrates around the mean µ as N increases.

• E(X̄) = E((1/N) ∑_{k=1}^{N} Xk) = (1/N) ∑_{k=1}^{N} E(Xk) = µ

• var(X̄) = var((1/N) ∑_{k=1}^{N} Xk) = (1/N²) ∑_{k=1}^{N} var(Xk) = σ²/N

• By Chebyshev's inequality: P(|X̄ − µ| ≥ ε) ≤ var(X̄)/ε² = σ²/(Nε²) → 0 as N → ∞

¹ There is also a strong law of large numbers, but we won't deal with that here.
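A minimal simulation sketch (MATLAB/Octave-style, in the spirit of the coin-toss one-liner above; the choices N = 500, ε = 0.05 and 10,000 replications are illustrative assumptions, not from the slides) that checks the simulated probability against the Chebyshev bound:

    % Fair-coin tosses: mu = 0.5, sigma^2 = 0.25
    N = 500; eps0 = 0.05; R = 10000;
    Xbar = mean(rand(N, R) < 0.5);          % R realisations of the sample mean
    pEmp = mean(abs(Xbar - 0.5) >= eps0);   % simulated P(|Xbar - mu| >= eps)
    pCheb = 0.25 / (N * eps0^2);            % Chebyshev bound sigma^2/(N*eps^2)
    fprintf('simulated %.4f <= Chebyshev bound %.4f\n', pEmp, pCheb)

The simulated probability comes out well below the bound here; Chebyshev only needs to deliver an upper bound that shrinks to zero, which is all the weak law requires.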

Who cares?

• Suppose we have an event E.
• Define an indicator random variable Xi equal to 1 when event E is
observed in trial i and 0 otherwise.

• Recall E(Xi) = P(E) is the probability that event E occurs.

• X̄ = (1/N) ∑_{k=1}^{N} Xk is then the relative frequency with which event E is
observed over N experiments.

The weak law,
P(|X̄ − µ| ≥ ε) → 0 as N → ∞
(here with µ = E(Xi) = P(E)), tells us that this observed relative frequency X̄ converges to the
probability P(E) of event E as N grows large.

• So the law of large numbers formalises the intuition of probability as
frequency when an experiment can be repeated many times. But
probability still makes sense even if we cannot repeat an experiment
many times – all our analysis still holds.
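As a concrete illustration (a hedged MATLAB/Octave-style sketch, not from the slides; the event "a fair die shows a six", so P(E) = 1/6, is an assumed example), indicator variables make the convergence easy to simulate:

    % Relative frequency of the event E = "a fair die shows a six"
    for N = [10 1000 100000]
      X = (randi(6, 1, N) == 6);    % indicator variables X_i
      fprintf('N = %6d: relative frequency = %.4f   (P(E) = %.4f)\n', N, mean(X), 1/6);
    end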


Central Limit Theorem (CLT)
Histogram of X̄ = (1/N) ∑_{i=1}^{N} Xi as N increases, but now we normalise to
keep the area under the curve fixed:

[Figure: normalised histograms of X̄ for increasing N; horizontal axis from 0.4 to 0.6]

• See that (i) the curve narrows as N increases – it concentrates, as we
already know from the weak law of large numbers.

• (ii) The curve becomes more "bell-shaped" as N increases – this is the CLT.

The Normal (or Gaussian) Distribution

Define the following density function:

f(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²))

[Figure: plot of f(x) with µ = 0, σ = 1 over the range −2 ≤ x ≤ 2]
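In code (a short MATLAB/Octave-style sketch; the density formula is written out directly so no toolbox function is needed, and the plotting range matches the figure above):

    % Normal density f(x) = exp(-(x - mu)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi))
    mu = 0; sigma = 1;
    x = -2:0.01:2;
    f = exp(-(x - mu).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
    plot(x, f)   % the bell curve centred at mu = 0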

Central Limit Theorem (CLT)
Overlay the Normal distribution, with parameter µ equal to the mean and
σ² equal to the variance of each of the measured histograms:

[Figure: sample-mean histograms for N = 500, 5000, 2000 with the N(0.5, 0.25/N)
density overlaid; horizontal axis from 0.4 to 0.6]

• CLT says that as N increases the distribution of X̄ gets closer and closer to a
Normal (or Gaussian) distribution – here N(0.5, 0.25/N).

• Its variance → 0 as N → ∞, i.e. the distribution concentrates around its
mean as N increases.
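A sketch reproducing this comparison (MATLAB/Octave-style, in the spirit of the earlier one-liner; N = 500 and 10,000 replications are illustrative choices, not necessarily the exact settings behind the figure):

    % Histogram of the sample mean of N fair-coin tosses, normalised to a
    % density, with the N(0.5, 0.25/N) curve suggested by the CLT overlaid.
    N = 500; R = 10000;
    Xbar = mean(rand(N, R) < 0.5);                % R sample means
    [counts, centres] = hist(Xbar, 50);
    w = centres(2) - centres(1);                  % bin width
    bar(centres, counts / (R * w)); hold on       % histogram rescaled to a density
    x = 0.4:0.001:0.6;
    s2 = 0.25 / N;
    plot(x, exp(-(x - 0.5).^2 / (2*s2)) / sqrt(2*pi*s2), 'r'); hold off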
