Predictive Analytics
Week 11: Time Series Forecasting
Semester 2, 2018
Discipline of Business Analytics, The University of Sydney Business School
QBUS2820 content structure
1. Statistical and Machine Learning foundations and applications.
2. Advanced regression methods.
3. Classification methods.
4. Time series forecasting.
Week 11: Time Series Forecasting
1. Problem definition
2. Time series patterns
3. Simple forecasting methods
4. Model diagnostics
5. Model validation
6. Random walk model
Time series
A time series is a set of observations $y_1, y_2, \ldots, y_t$ ordered in time.
Examples:
• Weekly unit sales of a product.
• Unemployment rate in Australia each quarter.
• Daily production levels of a product.
• Average annual temperature in Sydney.
• 5-minute prices for CBA stock on the ASX.
Example: visitor arrivals in Australia
Example: AUD/USD exchange rate
Example: assaults in Sydney
Forecasting
A forecast is a prediction about future events and conditions, given all current information, including historical data and knowledge of any future events that might affect the variable of interest.
The act of making such predictions is called forecasting.
Forecasting informs business and economic decision making, planning, government policy, etc.
Examples
• Governments need to forecast unemployment, interest rates, and expected revenues from income taxes to formulate policies.
• Retail stores need to forecast demand to control inventory levels, hire employees and provide training.
• Banks, investors, and financial analysts need to forecast financial returns, risk or volatility, and market 'timing'.
• University administrators need to forecast enrolments to plan for facilities and for faculty recruitment.
• Sports organisations need to project sports performance, crowd figures, club gear sales, revenues, etc. in the coming season.
Forecasting in business
Different problems lead to different approaches under the umbrella
of forecasting.
• Quantitative (data based) forecasting (our focus in this unit).
• Qualitative (judgmental) forecasting.
• Prediction markets.
• Common approach: judgmentally adjusted statistical
forecasting.
Problem definition
Forecasting
Our objective is to predict the value of a time-indexed response variable at a future point $t + h$, given the observed series up to the present point $t$. That is, we want to predict $Y_{t+h}$ given $y_1, y_2, \ldots, y_t$, where $h$ is the forecast horizon.
We can extend this setting to allow for the presence of predictors $x_1, x_2, \ldots, x_t$, leading to a dynamic regression problem.
Decision theory
We denote a point forecast as $\hat{Y}_{t+h} = f(Y_{1:t})$. As before, we assume a squared error loss function:
$$L(Y_{t+h}, f(Y_{1:t})) = (Y_{t+h} - f(Y_{1:t}))^2$$
We use the slice notation $Y_{1:t}$ as a compact way to write $Y_1, \ldots, Y_t$.
Point forecasting (key concept)
Using the arguments from earlier in the unit, the optimal point forecast under the squared error loss is the conditional expectation:
$$f(Y_{1:t}) = E(Y_{t+h} \mid Y_{1:t})$$
Our objective is therefore to approximate the conditional expectation of $Y_{t+h}$ given the historical data, possibly for multiple values of $h$.
Interval forecasting (key concept)
Uncertainty quantification is essential for business forecasting.
A density forecast $\hat{p}(Y_{t+h} \mid y_1, \ldots, y_t)$ is an estimate of the entire conditional density $p(Y_{t+h} \mid y_1, \ldots, y_t)$.
An interval forecast is an interval $(\hat{y}_{t+h,L}, \hat{y}_{t+h,U})$ such that
$$\hat{P}(\hat{y}_{t+h,L} < Y_{t+h} < \hat{y}_{t+h,U}) = 1 - \alpha.$$
Fan chart (key concept)
• For consecutive forecast horizons, construct prediction intervals for different probability levels (say, 75%, 90%, and 99%) and plot them using different shades.
• The intervals typically get wider with the horizon, representing
increasing uncertainty about future values.
• Fan charts are useful tools for presenting forecasts.
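As a minimal sketch of the idea, the Python snippet below builds a fan chart for random walk forecasts (the interval formula is derived at the end of this lecture). The simulated series and all variable names are illustrative assumptions, not part of the lecture material.

```python
# Fan chart sketch: random walk point forecasts with normal prediction
# intervals at several probability levels (illustrative only).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
y = pd.Series(np.cumsum(rng.normal(size=200)))   # simulated series (assumption)

h = 20                                  # maximum forecast horizon
sigma2 = np.mean(np.diff(y) ** 2)       # plug-in error variance estimate
horizons = np.arange(1, h + 1)
point = np.full(h, y.iloc[-1])          # random walk point forecasts
future_x = y.index[-1] + horizons

fig, ax = plt.subplots()
ax.plot(y.index, y, color="black")
# Wider (higher-probability) intervals are drawn first with lighter shading.
for level, shade in [(0.99, 0.15), (0.90, 0.30), (0.75, 0.45)]:
    z = stats.norm.ppf(0.5 + level / 2)
    half_width = z * np.sqrt(horizons * sigma2)
    ax.fill_between(future_x, point - half_width, point + half_width,
                    color="C0", alpha=shade)
ax.plot(future_x, point, color="C0")
ax.set_title("Fan chart: random walk forecasts")
plt.show()
```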
Example: fan chart
Time series patterns
Time series patterns (key concept)
We interpret a time series as
$$Y_t = f(T_t, S_t, C_t, E_t),$$
where $T_t$ is the trend component, $S_t$ is the seasonal component, $C_t$ is the cyclic component, and $E_t$ is an irregular or error component.
Trend. The systematic long term increase or decrease in the series.
Seasonal. A systematic change in the mean of the series due to
seasonal factors (month, day of the week, etc).
Cyclic. A cyclic pattern exists when there are medium or long run
fluctuations in the time series that are not of a fixed period.
Irregular. Short term fluctuations and noise.
Examples: time series patterns
Example: cyclic series
Time series models
Time series models can be additive or multiplicative.
Additive: $Y_t = T_t + S_t + E_t$
Multiplicative: $Y_t = T_t \times S_t \times E_t$
Log transformation
When the time series displays multiplicative behaviour,
$$Y_t = T_t \times S_t \times E_t,$$
we usually apply the log transformation to obtain a more convenient additive specification:
$$\log Y_t = \log T_t + \log S_t + \log E_t.$$
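A two-line sketch of the transform in pandas/NumPy (the numbers are illustrative; the transform requires a strictly positive series):

```python
import numpy as np
import pandas as pd

# Log transform: turns a multiplicative pattern into an additive one.
# Requires y > 0 for all observations.
y = pd.Series([112.0, 118.0, 132.0, 129.0, 121.0])   # illustrative data
log_y = np.log(y)        # model and forecast on the log scale
y_back = np.exp(log_y)   # invert the transform to return to the original scale
```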
Choosing an additive or multiplicative specification
Is the seasonal variation proportional to the trend?
• If yes, a multiplicative model is more appropriate.
• If not, we use an additive model.
Time series decomposition
Time series decomposition methods are algorithms for splitting
a time series into different components, typically for purposes of
seasonal adjustment.
In the context of forecasting, decomposition methods are useful
tools for exploratory data analysis, allowing us to visualise patterns
in the data.
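As an illustration, statsmodels' `seasonal_decompose` implements a basic decomposition; the sketch below (simulated monthly data with assumed trend and seasonal patterns) is one way to produce plots like the one on the next slide.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly series with a linear trend and annual seasonality.
idx = pd.date_range("2010-01-01", periods=96, freq="MS")
rng = np.random.default_rng(0)
t = np.arange(96)
y = pd.Series(10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / 12)
              + rng.normal(scale=0.5, size=96), index=idx)

# Additive decomposition: Y_t = T_t + S_t + E_t.
result = seasonal_decompose(y, model="additive", period=12)
result.plot()                      # trend, seasonal, and residual panels
adjusted = y - result.seasonal     # seasonally adjusted series
```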
Time series decomposition: visitor arrivals
Example: seasonal adjustment and trend extraction
Simple forecasting methods
Random walk
The random walk method (called the naïve method in the book) forecasts the series using the value of the last available observation:
$$\hat{y}_{t+h} = y_t$$
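A minimal sketch of the method (the helper name and interface are assumptions, given a pandas Series):

```python
import pandas as pd

def naive_forecast(y: pd.Series, h: int) -> list:
    """Random walk (naive) method: every horizon gets the last observation."""
    return [y.iloc[-1]] * h

y = pd.Series([3.1, 3.4, 3.2, 3.6])
print(naive_forecast(y, h=3))   # [3.6, 3.6, 3.6]
```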
Seasonal random walk
For time series with seasonal patterns, we can extend the random
walk method by forecasting the series with the value of the last
available observation in the same season:
$$\hat{y}_{t+h} = y_{t+h-m} \quad (\text{if } h \le m),$$
where $m$ is the seasonal period; for example, $m = 12$ for monthly data and $m = 4$ for quarterly data.
The general formula is
$$\hat{y}_{t+h} = y_{t+h-km}, \qquad k = \lfloor (h-1)/m \rfloor + 1.$$
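A sketch of the general formula (helper name and interface are assumptions):

```python
import pandas as pd

def seasonal_naive_forecast(y: pd.Series, h: int, m: int) -> list:
    """Seasonal random walk: forecast y_{t+i} with y_{t+i-km},
    where k = floor((i - 1) / m) + 1 and m is the seasonal period."""
    t = len(y)
    forecasts = []
    for i in range(1, h + 1):
        k = (i - 1) // m + 1
        forecasts.append(y.iloc[t + i - k * m - 1])   # y_{t+i-km}, 0-based index
    return forecasts

quarterly = pd.Series([10.0, 20.0, 30.0, 40.0, 12.0, 22.0, 32.0, 42.0])
print(seasonal_naive_forecast(quarterly, h=4, m=4))   # [12.0, 22.0, 32.0, 42.0]
```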
Drift method
The drift method forecasts the series as the sum of the most recent value (as in the naïve method) and the average change over time:
$$\hat{y}_{t+1} = y_t + \frac{1}{t-1}\sum_{i=2}^{t} (y_i - y_{i-1})$$
$$\hat{y}_{t+h} = y_t + h \times \frac{1}{t-1}\sum_{i=2}^{t} (y_i - y_{i-1}) = y_t + h \, \frac{y_t - y_1}{t-1},$$
since the sum of consecutive changes telescopes to $y_t - y_1$.
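A sketch exploiting the telescoped form of the average change (helper name is an assumption):

```python
import pandas as pd

def drift_forecast(y: pd.Series, h: int) -> list:
    """Drift method: last value plus h times the average historical change.
    The average change telescopes to (y_t - y_1) / (t - 1)."""
    t = len(y)
    slope = (y.iloc[-1] - y.iloc[0]) / (t - 1)
    return [y.iloc[-1] + i * slope for i in range(1, h + 1)]

y = pd.Series([100.0, 104.0, 103.0, 109.0])
print(drift_forecast(y, h=2))   # [112.0, 115.0]
```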
Model diagnostics
Autocorrelation (key concept)
The autocorrelation of a time series process is
$$\rho_k = \frac{E[(Y_t - \mu)(Y_{t-k} - \mu)]}{\sigma^2} = \mathrm{Corr}(Y_t, Y_{t+k}),$$
where $k$ is the lag, and $\mu$ and $\sigma^2$ are the mean and variance of the time series (assuming that they do not depend on $t$).
The sample autocorrelation is
$$r_k = \frac{\sum_{t=1}^{T-k} (y_{t+k} - \bar{y})(y_t - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2}.$$
The autocorrelation function (ACF) plot displays the
autocorrelation for a range of lags.
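The sample formula is easy to compute directly; a sketch below (in practice one would typically use `statsmodels.graphics.tsaplots.plot_acf` for the ACF plot; the manual version just makes the formula concrete):

```python
import numpy as np

def sample_acf(y, max_lag: int) -> np.ndarray:
    """Sample autocorrelations r_1, ..., r_{max_lag} using the formula above."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    denom = np.sum((y - ybar) ** 2)
    return np.array([np.sum((y[k:] - ybar) * (y[:-k] - ybar)) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
white_noise = rng.normal(size=500)
print(sample_acf(white_noise, max_lag=5))   # all close to zero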
White noise process (key concept)
A white noise process is a sequence of independently and
identically distributed random variables with mean 0 and finite
variance σ2.
If a time series model is well specified, we expect the residual series
of the fitted model to behave like a white noise process.
Model diagnostics (key concept)
Residual plot. The presence of patterns in the time series of
residuals (such as non-constant variance over time) may suggest
assumption violations and the need for alternative models.
Residual ACF plot. Well specified models should lead to small
and insignificant sample autocorrelations, consistent with a white
noise process.
Residual distribution plots (histogram, KDE, Q-Q plots, etc.).
Inspecting the distribution of the residuals will suggest the
appropriate assumptions for interval forecasting.
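A sketch combining the first three checks into one figure (the helper name is an assumption; `plot_acf` is the statsmodels plotting routine):

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

def residual_diagnostics(resid) -> None:
    """Three standard visual checks on a fitted model's residuals."""
    resid = np.asarray(resid, dtype=float)
    fig, axes = plt.subplots(3, 1, figsize=(8, 9))
    axes[0].plot(resid)                                  # patterns over time?
    axes[0].set_title("Residuals over time")
    plot_acf(resid, ax=axes[1], title="Residual ACF")    # white noise?
    axes[2].hist(resid, bins=30)                         # distributional shape
    axes[2].set_title("Residual distribution")
    fig.tight_layout()
    plt.show()
```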
Model validation
Training and validation sets
• We incorporate model validation into the forecasting process
by setting aside a validation sample for estimating and
comparing the performance of different models.
• We allocate the last part of the sample (typically 20-50% of
the data) to the validation set.
• In time series, the training set is called “in-sample data” and
the validation set the “out-of-sample data”.
• Due to the dynamic nature of forecasting, there is no test set
(though we may sometimes refer to model validation as
forecast evaluation).
Real time forecasts (key concept)
We validate forecasts by following the “real time” approach: at
every period t, we use all the available data at present to estimate
the model and predict the future value of the series.
1. Starting at $t = n$, use the observations at times $1, 2, \ldots, t$ to estimate the forecasting model. Use the estimated model to forecast the observation at time $t + 1$.
2. Repeat the above step for $t = n + 1, \ldots, T - 1$.
3. Compute forecast accuracy measures based on the prediction errors $y_{n+1} - \hat{y}_{n+1}, \ldots, y_T - \hat{y}_T$.
We follow a similar procedure for multi-step forecasts.
Expanding and rolling windows
We can consider two schemes for updating the estimation sample
at each validation period.
Expanding window. At each step, add the latest observation to the estimation sample.
Rolling window. At each step, use only the most recent $n$ observations for estimation. The rolling window scheme implicitly assumes that the dynamics of the series have a time-changing nature, so that data far in the past are less relevant for estimation.
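A sketch of the real-time scheme supporting both updating rules (the `fit_and_forecast` interface is a hypothetical assumption: it takes the estimation sample and returns the one-step-ahead forecast):

```python
import numpy as np
import pandas as pd

def real_time_forecasts(y: pd.Series, fit_and_forecast, n: int,
                        window=None) -> pd.Series:
    """One-step-ahead real-time forecasts for t = n, ..., T - 1.

    window=None uses the expanding scheme; an integer uses a rolling
    window of that many observations.
    """
    preds = {}
    for t in range(n, len(y)):
        sample = y.iloc[:t] if window is None else y.iloc[t - window:t]
        preds[y.index[t]] = fit_and_forecast(sample)
    return pd.Series(preds)

# Usage with the naive method as the forecasting rule:
y = pd.Series(np.random.default_rng(0).normal(size=100).cumsum())
preds = real_time_forecasts(y, lambda s: s.iloc[-1], n=80)
mse = ((y.iloc[80:] - preds) ** 2).mean()   # out-of-sample MSE
```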
Measuring forecast accuracy
We typically assume the squared error loss and compute the
out-of-sample MSE to measure forecast accuracy.
However, it is useful to be familiar with other measures that are
common in business forecasting:
• Percentage errors.
• Scaled errors.
Percentage errors
• The percentage error is given by $p_t = 100 \times (y_t - \hat{y}_t)/y_t$. It has the advantage of being scale-independent.
• The most commonly used measure is the mean absolute percentage error,
$$\text{MAPE} = \text{mean}(|p_t|).$$
• Measures based on percentage errors have the disadvantage of being infinite or undefined if $y_t = 0$ for any $t$ in the period of interest, and of taking extreme values when any $y_t$ is close to zero.
• Percentage errors are only meaningful when the data have a meaningful zero.
Scaled errors
• Hyndman and Koehler (2006) proposed scaling the errors
based on the training MAE (or MSE) from a benchmark
method (typically a simple model).
• For a non-seasonal time series, a useful way to define a scaled error uses naïve forecasts:
$$q_t = \frac{y_t - \hat{y}_t}{\frac{1}{T-1}\sum_{i=2}^{T} |y_i - y_{i-1}|}.$$
• Because the numerator and denominator both involve values on the scale of the original data, $q_t$ is independent of the scale of the data.
Mean absolute scaled error
The mean absolute scaled error is
$$\text{MASE} = \text{mean}(|q_t|).$$
A scaled error is less than one if it arises from a better set of
forecasts than the random walk method evaluated on the training
data.
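A sketch computing the four measures reported in the beer production example below (the function name and interface are assumptions; the MASE scale is the training MAE of naïve forecasts):

```python
import numpy as np

def accuracy_measures(y_true, y_pred, y_train) -> dict:
    """RMSE, MAE, MAPE, and MASE; MASE scales the errors by the training
    MAE of naive forecasts (Hyndman and Koehler, 2006)."""
    y_true, y_pred, y_train = (np.asarray(a, dtype=float)
                               for a in (y_true, y_pred, y_train))
    e = y_true - y_pred
    scale = np.mean(np.abs(np.diff(y_train)))   # naive MAE on training data
    return {"RMSE": np.sqrt(np.mean(e ** 2)),
            "MAE": np.mean(np.abs(e)),
            "MAPE": 100 * np.mean(np.abs(e / y_true)),
            "MASE": np.mean(np.abs(e)) / scale}
```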
Example: Quarterly Australian Beer Production
The figure shows three forecasting methods applied to quarterly Australian beer production using data to the end of 2005. We compute the forecast accuracy measures for 2006-2008.
Example: Quarterly Australian Beer Production
Method                  RMSE   MAE    MAPE   MASE
Mean method             38.01  33.78   8.17   2.30
Naïve method            70.91  63.91  15.88   4.35
Seasonal naïve method   12.97  11.27   2.73   0.77

It is clear from the graph that the seasonal naïve method is best for these data, although it can still be improved.
Random walk model
Random walk model (key example)
In this section, we use the random walk method to illustrate how
to obtain point and interval forecasts for multiple horizons based
on a time series model.
We assume the model
$$Y_t = Y_{t-1} + \varepsilon_t,$$
where $\varepsilon_t$ is i.i.d. with mean 0 and constant variance $\sigma^2$.
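To build intuition, we can simulate the model directly; a minimal sketch with Gaussian errors:

```python
import numpy as np

rng = np.random.default_rng(42)
eps = rng.normal(scale=1.0, size=250)   # i.i.d. N(0, sigma^2) errors
y = np.cumsum(eps)                      # Y_t = Y_{t-1} + eps_t, with Y_0 = 0
```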
Random walk model
Since $Y_t = Y_{t-1} + \varepsilon_t$, we can use back substitution to show that
$$Y_{t+1} = Y_t + \varepsilon_{t+1}$$
$$Y_{t+2} = Y_{t+1} + \varepsilon_{t+2} = Y_t + \varepsilon_{t+1} + \varepsilon_{t+2}$$
$$\vdots$$
$$Y_{t+h} = Y_{t+h-1} + \varepsilon_{t+h} = Y_t + \varepsilon_{t+1} + \cdots + \varepsilon_{t+h}$$
Point forecast
$$Y_{t+h} = Y_t + \sum_{i=1}^{h} \varepsilon_{t+i}$$
Therefore, we obtain the point forecast for any horizon as
$$\hat{y}_{t+h} = E(Y_{t+h} \mid y_{1:t}) = E\left(Y_t + \sum_{i=1}^{h} \varepsilon_{t+i} \,\Big|\, y_{1:t}\right) = y_t.$$
Density forecast
The conditional variance is
$$\mathrm{Var}(Y_{t+h} \mid y_{1:t}) = \mathrm{Var}\left(y_t + \sum_{i=1}^{h} \varepsilon_{t+i} \,\Big|\, y_{1:t}\right) = h\sigma^2.$$
For density forecasting, we need to make further assumptions about the errors. If we assume that $\varepsilon_t \sim N(0, \sigma^2)$, then
$$Y_{t+h} \mid y_{1:t} \sim N(y_t, h\sigma^2).$$
Forecast interval
Under the Gaussian assumption,
$$Y_{t+h} \mid y_{1:t} \sim N(y_t, h\sigma^2),$$
leading to the forecast interval
$$y_t \pm z_{\alpha/2} \times \sqrt{h\hat{\sigma}^2},$$
where
$$\hat{\sigma}^2 = \frac{\sum_{t=2}^{T} (y_t - y_{t-1})^2}{T - 1},$$
and $z_{\alpha/2}$ is the appropriate critical value from the normal distribution.
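A sketch of the plug-in interval (the helper name is an assumption; `scipy.stats.norm.ppf` supplies the critical value):

```python
import numpy as np
from scipy import stats

def random_walk_interval(y, h: int, alpha: float = 0.05):
    """Plug-in (1 - alpha) forecast interval for the random walk model."""
    y = np.asarray(y, dtype=float)
    sigma2_hat = np.sum(np.diff(y) ** 2) / (len(y) - 1)   # variance estimate
    z = stats.norm.ppf(1 - alpha / 2)                     # critical value
    half_width = z * np.sqrt(h * sigma2_hat)
    return y[-1] - half_width, y[-1] + half_width
```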
Example: USD/AUD exchange rate
Forecast interval
Forecast interval based on the assumption of normal errors:
$$y_t \pm z_{\alpha/2} \times \sqrt{h\hat{\sigma}^2}$$
• This forecast interval is based on the plug-in method, as we
replace the unknown σ2 with an estimate.
• The plug-in method is a standard approach, but you should be aware that it ignores parameter uncertainty, leading to prediction intervals that are too narrow.
• If the errors are not Gaussian, you should use other methods such as the bootstrap (not in the scope of our unit).
Review questions
• What is point and interval forecasting?
• What are the four time series components?
• Which diagnostics do we use for univariate time series models,
and why?
• How do we conduct model validation for forecasting?
• How do we compute forecasts and prediction intervals for the
random walk model?