
Universal Considerations to Successful Forecasting
Zhenhao Gong, University of Connecticut

Welcome
This course is designed to be:
1. Introductory
2. Driven by interesting questions and applications
3. Less math, useful, and fun!
Most important:
Feel free to ask any questions!
Enjoy!

Universal considerations
Universal considerations for any forecasting task:
- Forecasting Object
- Information Set
- Model Uncertainty and Improvement
- Forecast Horizon
- Structural Change
- Forecast Statement
- Decision Environment and Loss Function
- Model Complexity and the Parsimony Principle
- Unobserved Components

Forecasting Object
In business and economics, the forecast object is typically one of three types:
- Event outcome: an event is certain to take place at a given time, but the outcome is uncertain.
- Event timing: an event is certain to take place and the outcome is known, but the timing is uncertain.
- Time series: relevant when the future value of a time series is of interest and must be projected. (Most frequently encountered in practice.)

Information Set
Any forecast we produce is conditional upon the information used to produce it.
- Univariate: Ω_T = {y_T, y_{T−1}, …, y_1}.
- Multivariate: Ω_T = {y_T, x_T, y_{T−1}, x_{T−1}, …, y_1, x_1}.
- Expert opinion and judgment.
- Information sets in forecast evaluation: ask whether the forecast could be improved by using a given set of information more efficiently, or by using more information.
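As a rough illustration (not from the slides), the sketch below arranges a univariate information set Ω_T and a multivariate one as lagged columns of a table; the toy series, lag depth, and helper name information_set are arbitrary choices.

```python
import numpy as np
import pandas as pd

# Toy data standing in for the series y_t and x_t (purely illustrative).
rng = np.random.default_rng(0)
T = 100
df = pd.DataFrame({"y": rng.normal(size=T), "x": rng.normal(size=T)})

def information_set(df, cols, n_lags):
    """Collect current values and n_lags lags of the chosen columns:
    a tabular stand-in for Omega_T = {y_T, y_{T-1}, ...}."""
    pieces = {}
    for col in cols:
        for lag in range(n_lags + 1):
            pieces[f"{col}_lag{lag}"] = df[col].shift(lag)
    return pd.DataFrame(pieces).dropna()

omega_uni = information_set(df, ["y"], n_lags=4)         # univariate Omega_T
omega_multi = information_set(df, ["y", "x"], n_lags=4)  # multivariate Omega_T
```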

Model Uncertainty and Improvement
All models are false: they are intentional abstractions of a much more complex reality.
- A model might be useful for certain purposes and poor for others.
- Models that once worked well may stop working well.
- One must continually diagnose and assess both empirical performance and consistency with theory.
Remember: it takes a model to beat a model.

The Forecast Horizon
The forecast horizon is the number of periods between today and the date of the forecast we make.
- h-step-ahead forecast: the horizon is always fixed at the same value, h.
- h-step path forecast: the horizon includes all steps from 1-step-ahead through h-steps-ahead.
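A minimal sketch of the distinction, assuming an AR(1) model with a made-up coefficient: the path forecast collects all horizons 1 through h, while the fixed-horizon forecast keeps only the h-th value.

```python
import numpy as np

def ar1_path_forecast(y_last, phi, h):
    """Iterated forecasts y_{T+1|T}, ..., y_{T+h|T} from an AR(1)
    y_t = phi * y_{t-1} + eps_t (zero mean, for simplicity)."""
    path = []
    y_hat = y_last
    for _ in range(h):
        y_hat = phi * y_hat   # apply the 1-step rule repeatedly
        path.append(y_hat)
    return np.array(path)

path = ar1_path_forecast(y_last=2.0, phi=0.8, h=4)  # h-step path forecast (horizons 1..4)
fixed_h = path[-1]                                  # fixed 4-step-ahead forecast only
```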

Structural Change
- In time series, we rely on the future being like the present/past in terms of dynamic relationships.
- In cross sections, we rely on fitted relationships being relevant for new cases from the original population, and often even for new populations.
But that is not always true. Structural change can be gradual or abrupt and can affect any or all parameters.
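One informal way to look for structural change is to re-estimate a model over a moving window and watch the parameter estimates drift. The sketch below does this for a single OLS slope on simulated data with a built-in break; the window length, break point, and coefficient values are arbitrary assumptions.

```python
import numpy as np

# Simulated data whose slope shifts halfway through -- an abrupt structural break.
rng = np.random.default_rng(1)
T = 200
x = rng.normal(size=T)
beta = np.where(np.arange(T) < T // 2, 1.0, 2.5)  # parameter changes at T/2
y = beta * x + rng.normal(scale=0.5, size=T)

def rolling_slope(y, x, window=40):
    """OLS slope re-estimated over a moving window; drifting estimates
    are one informal symptom of structural change."""
    slopes = []
    for t in range(window, len(y) + 1):
        xw, yw = x[t - window:t], y[t - window:t]
        slopes.append(np.cov(xw, yw, bias=True)[0, 1] / np.var(xw))
    return np.array(slopes)

slopes = rolling_slope(y, x)  # roughly 1.0 early on, drifting toward 2.5 later
```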

The Forecast Statement
For time series forecasts, we must decide whether the forecast will be a
- Point forecast: a single number.
- Interval forecast: a range of numbers into which the future value may fall.
- Density forecast: an entire probability distribution for the future value.
Remark: density forecast > interval forecast > point forecast, in terms of the information contained. Point forecasts are nevertheless the most commonly used in practice.
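To make the three statements concrete, suppose (purely for illustration) that the forecast distribution is Gaussian with a known mean and standard deviation; the point, interval, and density forecasts then fall out as follows.

```python
from scipy import stats

# Assumed forecast distribution: N(mu, sigma^2). mu and sigma are made-up values.
mu, sigma = 2.0, 0.5

point = mu                                                 # point forecast: a single number
interval = stats.norm.interval(0.95, loc=mu, scale=sigma)  # 95% interval forecast
density = stats.norm(loc=mu, scale=sigma)                  # density forecast: the full distribution

print(point, interval, density.pdf(2.0))
```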

The Decision Environment
The key to generating good and useful forecasts is recognizing that forecasts are made to guide decisions.
Recognition and awareness of the decision-making environment is the key to effective design, use, and evaluation of forecasting models.

Loss Function
We consider loss functions of the form L(e), where e = y − ŷ is the difference between the realization and the previously made forecast. We require L(e) to satisfy three conditions:
- L(0) = 0: no loss is incurred when e = 0.
- L(e) is continuous.
- L(e) is increasing on each side of the origin.
Examples: quadratic loss, L(e) = e^2; absolute error loss, L(e) = |e|; "lin-lin" loss, L(e) = a|e| if e > 0 and L(e) = b|e| if e ≤ 0.
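The three example loss functions translate directly into code; the values of a and b in the lin-lin loss below are arbitrary placeholders.

```python
import numpy as np

# Loss functions from the slides, written for arrays of forecast errors e = y - y_hat.
def quadratic_loss(e):
    return e ** 2

def absolute_loss(e):
    return np.abs(e)

def linlin_loss(e, a=2.0, b=1.0):
    """Asymmetric "lin-lin" loss: slope a when e > 0, slope b when e <= 0.
    The a and b values here are arbitrary illustrations."""
    return np.where(e > 0, a * np.abs(e), b * np.abs(e))

e = np.array([-1.0, 0.0, 0.5, 2.0])
print(quadratic_loss(e), absolute_loss(e), linlin_loss(e))
```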

Optimal Forecast
The optimal forecast is the forecast with the smallest conditional expected loss,

ŷ*(x) = argmin_{ŷ(x)} ∫∫ L(y − ŷ(x)) f(y, x) dy dx.

Key results:
- Under quadratic loss, the optimal forecast is the conditional mean: ŷ*(x) = E(y | x).
- Under absolute loss, the optimal forecast is the conditional median: ŷ*(x) = Q_{0.50}(y | x).
- Under "lin-lin" loss, the optimal forecast is the conditional d × 100% quantile, where d = a/(a + b).
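A quick numerical check of these results (not from the slides): on a simulated skewed distribution, minimizing the average loss over a grid of candidate forecasts lands near the mean under quadratic loss, the median under absolute loss, and the a/(a + b) quantile under lin-lin loss. The distribution, grid, and a, b values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.gamma(shape=2.0, scale=1.0, size=20_000)  # a skewed toy distribution
candidates = np.linspace(0.0, 10.0, 501)          # grid of candidate forecasts

def best_forecast(loss):
    """Forecast on the grid with the smallest average loss over the draws."""
    avg_loss = [loss(y - c).mean() for c in candidates]
    return candidates[int(np.argmin(avg_loss))]

a, b = 2.0, 1.0
print(best_forecast(lambda e: e ** 2), y.mean())         # close to the mean (quadratic loss)
print(best_forecast(np.abs), np.median(y))               # close to the median (absolute loss)
print(best_forecast(lambda e: np.where(e > 0, a, b) * np.abs(e)),
      np.quantile(y, a / (a + b)))                       # close to the a/(a+b) quantile (lin-lin)
```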

State-Dependent Loss
In some situations, the L(e) form of the loss function is too restrictive. A direction-of-change forecast takes one of two values, and its loss function looks like

L(y, ŷ) = 0 if sign(Δy) = sign(Δŷ),  1 if sign(Δy) ≠ sign(Δŷ).

This is one example of a state-dependent loss function, meaning that loss depends on the state of the world (y), as opposed to depending only on e.
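A direct translation of the direction-of-change loss, assuming Δ denotes the one-period change (computed here with np.diff); the example series are made up.

```python
import numpy as np

def direction_of_change_loss(y, y_hat):
    """0-1 loss on the sign of the change: zero when the forecast gets the
    direction of the move right, one when it does not."""
    dy = np.sign(np.diff(y))
    dy_hat = np.sign(np.diff(y_hat))
    return (dy != dy_hat).astype(float)

y = np.array([1.0, 1.2, 1.1, 1.5])
y_hat = np.array([1.0, 1.3, 1.4, 1.6])
print(direction_of_change_loss(y, y_hat))  # [0., 1., 0.]
```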

Model Complexity and the Parsimony Principle
Parsimony principle: other things the same, simple models are usually preferable to complex models.
- We can estimate the parameters of simpler models more precisely.
- Simpler models are more easily interpreted, understood, and scrutinized, so anomalous behavior is more easily spotted.
- Simpler models are more useful in the decision-making process.
- Enforcing simplicity lessens the scope for "data mining".
Remark: simple models should not be naive models. KISS principle: Keep It Sophisticatedly Simple.

Unobserved Components
Trend, seasonal, cycle, noise. Deterministic vs. stochastic trend and seasonality.

y_t = T_t + S_t + C_t + ε_t,  or  y_t = T_t × S_t × C_t × ε_t.
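A minimal sketch of the additive vs. multiplicative decompositions, assuming monthly data and using statsmodels' classical seasonal_decompose (which returns trend, seasonal, and residual components; the cycle C_t is not separated out by this routine). The simulated series is illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly series with a trend and a 12-month seasonal pattern (illustrative only).
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
t = np.arange(96)
y = pd.Series(10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / 12)
              + np.random.default_rng(3).normal(scale=0.5, size=96), index=idx)

additive = seasonal_decompose(y, model="additive", period=12)              # y_t = T_t + S_t + e_t
multiplicative = seasonal_decompose(y, model="multiplicative", period=12)  # y_t = T_t * S_t * e_t

print(additive.trend.dropna().head())
print(additive.seasonal.head())
```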