
Financial Econometrics – Slides-04: ARMA models — General Linear Process and characterization

© Copyright University of New South Wales 2020. All rights reserved.
Course materials subject to copyright.


UNSW Sydney owns copyright in these materials (unless stated otherwise). The material is subject to copyright under Australian law and overseas under
international treaties. The materials are provided for use by enrolled UNSW students. The materials, or any part, may not be copied, shared or distributed,
in print or digitally, outside the course without permission. Students may only copy a reasonable portion of the material for personal research or study or for
criticism or review. Under no circumstances may these materials be copied or reproduced for sale or commercial purposes without prior written permission
of UNSW Sydney.
Statement on class recording
To ensure the free and open discussion of ideas, students may not record, by any means, classroom lectures, discussion and/or activities without the
advance written permission of the instructor, and any such recording properly approved in advance can be used solely for the student's own private use.
WARNING: Your failure to comply with these conditions may lead to disciplinary action, and may give rise to a civil action or a criminal offence under the
THE ABOVE INFORMATION MUST NOT BE REMOVED FROM THIS MATERIAL.

Dr. School of Economics (UNSW) Slides-04 ©UNSW 1/ 23

Financial Econometrics
Slides-04: ARMA models

General Linear Process and characterization

School of Economics1

1© Copyright University of New South Wales 2020. All rights reserved. This copyright notice must not
be removed from this material.


Time Series Models (Mainly Theoretical Aspects)

• View time series as stochastic processes
• Notions of stationarity (Covariance Stationary)
• Models for stationary time series

• General linear process (GLP): useful representation especially for computing
expectations…

• Characteristics of models
• Patterns in the AC and PAC of a model: White noise


Motivation


• Describe empirically relevant patterns in the data

• Obtain the distribution of future values, conditional on the past, in order to
forecast the future values and evaluate the likelihood of certain events

• Provide insight into possible sources of non-stationarity


Characteristics of a Time Series


A time series y1, . . . , yT is a sequence of values that a specific variable
y has taken on at equally spaced points in time (e.g. daily, quarterly, yearly, . . . )
over some period of time.

These observations will be considered as being generated by some
stochastic Data Generating Process (DGP):

• A time series y1, . . . , yT is generated by a stochastic process
yt, for t = 1, . . . , T.

• A time series y1, . . . , yT is a collection of realizations of a
random variable yt ordered in time.


Univariate Time Series Models


A time series model tries to describe the stochastic process yt by
a relatively simple model. Univariate time series models are a
class of models where one attempts to model and predict
(economic) variables using only information contained in their own
past values and possibly current and past values of an error term.

These models are (mainly) a-theoretical:

• not based upon any underlying theoretical model

• attempt to capture empirically relevant patterns in the data

↔ Structural models:

• generally based upon an underlying theoretical model

• attempt to model a variable from the current and/or past
values of other explanatory variables (suggested by theory)


Defining stationarity and non-stationarity


A series yt is strictly stationary if the distribution of its values is
not affected by an arbitrary shift along the time axis:

f(yt) = f(yt+k) ∀k  (1)

→ The entire distribution of yt is not affected by an arbitrary shift
along the time axis. See for example Figure 1.

A series yt is covariance or weakly stationary if it satisfies:

• E(yt) = µ < ∞

• Var(yt) = E(yt − µ)² = σ² < ∞

• Cov(yt, yt−k) = E[(yt − µ)(yt−k − µ)] = γk ∀k

→ The first and the second moment of the distribution of yt are
finite and not affected by an arbitrary shift along the time axis.

Stationary versus Non-stationary Stochastic Processes

• After being hit by a shock, a stationary series tends to return to
its mean (called mean reversion) and fluctuations around this mean
(measured by the variance) will have a broadly constant amplitude.

• If a time series is not stationary in the sense defined above, it is
called non-stationary, i.e. non-stationary series will have a
time-varying mean and/or a time-varying variance and/or
time-varying covariances.

• Non-stationarity can have different sources: linear trend,
structural break, unit root, ...

Figure 5: A non-stationary process (structural break)

Stationary Time Series

• If the dependence structure is stable (stationary), it can be learned from historical data.

• Strict Stationarity: a time series is strictly stationary (SS) if its joint
distribution at any set of points in time is invariant to any time shift, e.g.

dist(yt1, yt2) = dist(yt1+s, yt2+s)

• Covariance Stationarity: a time series is covariance stationary (CS) if its mean,
variance and autocovariances are all independent of the time index t:

E(yt) = µ, Var(yt) = γ0 < ∞, Cov(yt, yt−j) = γj for all j

Autocorrelation and Partial Autocorrelation Function

Assuming covariance stationarity, particularly useful tools when
building ARMA models are the so-called Autocorrelation and
Partial Autocorrelation Functions.

In general, the joint distribution of all values of yt is characterised
by the so-called autocovariances, i.e. the covariances between yt
and all of its lags yt−k:

γk = Cov(yt, yt−k), k = 1, 2, . . .  (2)

The sample autocovariances γ̂k can be obtained as

γ̂k = (1/T) ∑_{t=k+1}^{T} (yt − ȳ)(yt−k − ȳ),  (3)

where ȳ = (1/T) ∑_{t=1}^{T} yt is the sample mean.

As the autocovariances are not independent of the units in which
the variables are measured, it is common to standardize by defining
autocorrelations ρk as

ρk = γk/γ0 = Cov(yt, yt−k)/Var(yt)

Note that ρ0 = 1 and −1 ≤ ρk ≤ 1. The autocorrelations ρk
considered as a function of k are referred to as the autocorrelation
function (ACF) or correlogram of the series yt. The ACF provides
useful information on the properties of the DGP of a series as it
describes the dependencies among observations.

Autocorrelation Function

If the data are generated from a stationary process, it can be
shown that under the null hypothesis:

H0: ρk = 0 ∀k > 0

the sample autocorrelation coefficients are asymptotically
normally distributed with mean zero and variance 1/T.

Therefore, in finite samples it holds approximately that ρ̂k ∼ N(0, 1/T).

The individual significance of an autocorrelation coefficient can
be tested by constructing the 95% confidence interval:

0 ± 1.96/√T

→ see dashed lines in Figures 6 and 7.
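The sample ACF and the ±1.96/√T bands can be computed directly from the definitions above. A minimal sketch in Python (NumPy), using the sample autocovariance formula (3); the simulated white-noise series is purely illustrative:

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample ACF: rho_k = gamma_k / gamma_0, with
    gamma_k = (1/T) * sum_{t=k+1}^T (y_t - ybar)(y_{t-k} - ybar)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    gamma0 = d @ d / T
    acf = np.ones(max_lag + 1)             # rho_0 = 1 by construction
    for k in range(1, max_lag + 1):
        acf[k] = (d[k:] @ d[:-k] / T) / gamma0
    return acf

rng = np.random.default_rng(0)
y = rng.standard_normal(500)               # white noise: rho_k ~ 0 for k > 0
acf = sample_acf(y, 10)
band = 1.96 / np.sqrt(len(y))              # 95% band under H0: rho_k = 0
outside = np.abs(acf[1:]) > band           # flags individually significant lags
```

With white-noise data, roughly 5% of lags fall outside the band by chance, which is exactly the Type I error discussed on the next slide.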

Autocorrelation Function


Looking at a large number of autocorrelations, we will see that
some exceed two standard deviations as a result of pure chance
even though the true values in the DGP are zero (Type I error).

The joint significance of a group of m autocorrelation coefficients
can be tested by the so-called Box-Pierce Q-statistic:

Q = T ∑_{k=1}^{m} ρ̂k²

If the data are generated from a stationary process, Q is
asymptotically χ² distributed with m degrees of freedom.

Superior small-sample performance is obtained by modifying the Q
statistic (reported in EViews output):

Q* = T(T + 2) ∑_{k=1}^{m} ρ̂k²/(T − k)  (6)
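Both Q-statistics follow directly from the sample autocorrelations. A sketch (the data are simulated for illustration; 18.31 is the standard 5% critical value of a χ² distribution with 10 degrees of freedom):

```python
import numpy as np

def q_stats(y, m):
    """Box-Pierce Q = T * sum_{k=1..m} rho_k^2 and
    Ljung-Box Q* = T(T+2) * sum_{k=1..m} rho_k^2 / (T-k).
    Both are asymptotically chi-square(m) under H0: rho_1 = ... = rho_m = 0."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    gamma0 = d @ d / T
    rho = np.array([(d[k:] @ d[:-k] / T) / gamma0 for k in range(1, m + 1)])
    Q = T * np.sum(rho ** 2)
    Qstar = T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, m + 1)))
    return Q, Qstar

rng = np.random.default_rng(1)
Q, Qstar = q_stats(rng.standard_normal(1000), m=10)
reject = Qstar > 18.31        # chi-square(10) 5% critical value
```

Since (T + 2)/(T − k) > 1 for every k, the Ljung-Box correction always inflates the Box-Pierce statistic slightly, which is what improves its small-sample behaviour.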


Partial Autocorrelation Function


• An alternative piece of information is provided by the so-called partial
autocorrelation function (PACF). The partial autocorrelation pk is the
correlation between yt and yt−k conditional on yt−1, · · · , yt−k+1. It measures
the dependency between yt and yt−k keeping the in-between values constant.

• The sample partial autocorrelations can be calculated from OLS regressions:
  • p̂1 = φ̂11 in yt = φ10 + φ11yt−1 + e1t
  • p̂2 = φ̂22 in yt = φ20 + φ21yt−1 + φ22yt−2 + e2t
  • p̂3 = φ̂33 in yt = φ30 + φ31yt−1 + φ32yt−2 + φ33yt−3 + e3t
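The regressions above can be run directly with least squares: p̂k is the last coefficient of the k-lag regression. A sketch using NumPy; the AR(1) example series is illustrative (its theoretical PACF is 0.6 at lag 1 and zero beyond):

```python
import numpy as np

def pacf_ols(y, max_lag):
    """p_k = estimated coefficient on y_{t-k} in an OLS regression of
    y_t on a constant and y_{t-1}, ..., y_{t-k}."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    out = []
    for k in range(1, max_lag + 1):
        Y = y[k:]                                    # y_t for t = k, ..., T-1
        X = np.column_stack([np.ones(T - k)] +
                            [y[k - j:T - j] for j in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        out.append(beta[-1])                         # phi_kk, the last coefficient
    return np.array(out)

# Illustration: simulate an AR(1) with coefficient 0.6
rng = np.random.default_rng(2)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
pacf = pacf_ols(y, 3)          # roughly [0.6, 0, 0]
```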


Defining a White Noise Process


A series yt is called a white noise process if its DGP has a
constant mean, a constant variance and is serially uncorrelated:

E(yt) = E(yt−1) = . . . = µ

Var(yt) = Var(yt−1) = . . . = σ²

Cov(yt, yt−k) = Cov(yt−j, yt−j−k) = { σ² if k = 0; 0 otherwise }

yt is a zero-mean white noise process if µ = 0.


Defining a White Noise Process


• A time series εt is a white noise if it is covariance stationary with zero mean
and no autocorrelation.
  • By definition:

    E(εt) = 0, Var(εt) = σ², Cov(εt, εt−j) = 0 for all j ≠ 0.

  • A white noise is denoted as: εt ∼ WN(0, σ²)
  • A white noise is not necessarily i.i.d. (independent and identically distributed)
  • An i.i.d. white noise is denoted as: εt ∼ i.i.d. WN(0, σ²)
  • → White noises are the building blocks of time series models.
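The distinction between "uncorrelated" and "independent" can be made concrete with a small simulation. The ARCH(1)-style recursion below is a standard example of a white noise that is not i.i.d. (the parameter values are illustrative): the levels are serially uncorrelated, but the squares are not, so the observations are dependent.

```python
import numpy as np

rng = np.random.default_rng(3)
T, w, a = 20000, 0.5, 0.5
z = rng.standard_normal(T)
eps = np.zeros(T)
for t in range(1, T):
    # the conditional variance depends on the past -> eps_t is not independent
    eps[t] = z[t] * np.sqrt(w + a * eps[t - 1] ** 2)

def rho1(x):
    """First-order sample autocorrelation."""
    d = x - x.mean()
    return (d[1:] @ d[:-1]) / (d @ d)

r_level = rho1(eps)         # ~ 0: eps itself looks like white noise
r_square = rho1(eps ** 2)   # clearly positive: dependence without correlation
```

This is exactly the "clustering" feature of return series that motivates the ARCH-type models mentioned later in these slides.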


Test whether a time series is white noise


• Key feature of a WN(0, σ²) is H0: no autocorrelation
• The sampling distribution of the ACF and PACF for a WN is approximately
  N(0, 1/T)

→ Reject H0 if either the ACF or the PACF is outside the ±1.96/√T bands, or
  if the Ljung-Box Q-stats have small p-values.


Test whether a time series is white noise


eg. NYSE Composite return squared, rt².



General linear Process


• Why a general linear process?
→ Wold decomposition: any covariance stationary process can be expressed as
a general linear process

yt = µ + ∑_{i=0}^{∞} bi εt−i,  εt ∼ WN(0, σ²)

• Because bi → 0 as i → ∞, it is possible to use finitely many parameters to
characterise CS time series. This leads to practical (parsimonious) models (ARMA).

• We will mainly consider the cases with i.i.d. WN in this topic, for which
"conditional" = "unconditional":
  • E(εt|εt−j) = E(εt) (εt is not predictable)
  • Var(εt|εt−j) = Var(εt) for all j = 1, 2, 3, · · ·


General linear Process


• Conditional Expectations
  • The general linear process with i.i.d. WN:

    yt = µ + ∑_{i=0}^{∞} bi εt−i,  εt ∼ i.i.d. WN(0, σ²)

  • Let Ωt be the information set based on

    {yt, yt−1, · · · , εt, εt−1, · · ·}

• Conditional mean and variance of yt+h for h = 1, 2, · · · :
  • E(yt+h|Ωt) = µ + ∑_{i=h}^{∞} bi εt+h−i
  • Var(yt+h|Ωt) = σ² ∑_{i=0}^{h−1} bi²
  • What happens when h → ∞?
→ Limited memory: info at t is not relevant to the remote future.
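The two conditional-moment formulas can be checked by Monte Carlo for a finite-order special case (bi = 0 for i > 2, i.e. an MA(2)-type process; all the numbers below are illustrative). Holding the shocks in Ωt fixed and drawing only the future shocks reproduces the conditional mean and variance:

```python
import numpy as np

mu, sigma = 1.0, 0.8
b = np.array([1.0, 0.5, 0.25])     # b_0, b_1, b_2; b_i = 0 for i > 2
eps_t = 0.3                        # shock known at time t (in Omega_t)
h = 2

# Analytical moments of y_{t+2} = mu + b0*e_{t+2} + b1*e_{t+1} + b2*e_t:
cond_mean = mu + b[2] * eps_t                       # only known shocks remain
cond_var = sigma ** 2 * (b[0] ** 2 + b[1] ** 2)     # sigma^2 * sum_{i<h} b_i^2

# Monte Carlo: draw the unknown future shocks, keep eps_t fixed
rng = np.random.default_rng(4)
e = sigma * rng.standard_normal((200_000, 2))       # columns: eps_{t+1}, eps_{t+2}
y = mu + b[0] * e[:, 1] + b[1] * e[:, 0] + b[2] * eps_t
# y.mean() approximates cond_mean; y.var() approximates cond_var
```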


General linear Process: Conditional Expectations


• yt+h = µ + b0εt+h + · · · + bh−1εt+1 + bhεt + bh+1εt−1 + · · ·
  (the shocks εt+1, . . . , εt+h are not in Ωt)

• E(yt+h|Ωt) = µ + ∑_{i=h}^{∞} bi εt+h−i

• Var(yt+h|Ωt) = σ² ∑_{i=0}^{h−1} bi²

eg. When h = 2,

• yt+2 = µ + b0εt+2 + b1εt+1 + b2εt + b3εt−1 + · · ·
  (εt+1 and εt+2 are not in Ωt)

• E(yt+2|Ωt) = µ + b2εt + b3εt−1 + · · ·

• Var(yt+2|Ωt) = σ²(b0² + b1²)

• The conditional variance is smaller than the unconditional one.

With the conditional variance constant, this setup is not ideal to capture the
"clustering" in return series. Need ARCH-type models.

[Figure: NYSE Composite returns; εt+1 = 1-step forecast error]

General linear Process: Forecast Based on Ωt


• Use the information set Ωt to forecast yt+h for h ≥ 1.
  Let ft+h|t be the forecast based on Ωt.

• Choose ft+h|t to minimise the MSFE:

  MSFE = E[(yt+h − ft+h|t)²]

• The optimal point forecast is

  f*t+h|t = E(yt+h|Ωt).

• If µ, bi, σ² are known, the 2-se interval forecast is

  E(yt+h|Ωt) ± 2 √Var(yt+h|Ωt), or

  (µ + ∑_{i=h}^{∞} bi εt+h−i) ± 2σ √(∑_{i=0}^{h−1} bi²)

Forecast error:

  et+h|t = yt+h − ft+h|t,  Var(et+h|t|Ωt) = Var(yt+h|Ωt).
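For a finite-order GLP the point forecast and the 2-se interval can be packaged as below. This is a sketch: the MA(2)-style coefficients and the shock history are illustrative, not from the slides.

```python
import numpy as np

def glp_forecast(mu, b, sigma, eps_hist, h):
    """Point and 2-se interval forecast of y_{t+h} for a finite-order GLP
    y_t = mu + sum_i b_i * eps_{t-i}.  eps_hist = [eps_t, eps_{t-1}, ...]."""
    b = np.asarray(b, dtype=float)
    known = b[h:]                                  # coefficients on shocks in Omega_t
    point = mu + known @ np.asarray(eps_hist[:len(known)], dtype=float)
    se = sigma * np.sqrt(np.sum(b[:h] ** 2))       # sqrt of Var(y_{t+h}|Omega_t)
    return point, (point - 2 * se, point + 2 * se)

point, (lo, hi) = glp_forecast(mu=1.0, b=[1.0, 0.5, 0.25], sigma=0.8,
                               eps_hist=[0.3, -0.6, 0.9], h=2)
# point = 1 + 0.25 * 0.3; half-width = 2 * 0.8 * sqrt(1 + 0.25)
```

Note that once h exceeds the order of the process, no past shock survives: the point forecast collapses to µ and the interval half-width to 2σ√(∑ bi²), the unconditional 2-se band — the "limited memory" property from the previous slide.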


Summary: What to take from this lecture?

1. White noise is the building block of time series models.

2. To model the dynamics of a time series, use the white noise process
to piece together the dynamics: the general linear process (GLP).

3. The GLP is a useful representation for computing expectations, variances and the ACF.

4. Special models: AR, MA.


Introduction
Building ARMA Models
Stationary versus Non-stationary Stochastic Processes
Autocorrelation and Partial Autocorrelation Function

White Noise Process
Test and example

General linear Process
