Session 12 Non-parametric Approach 2
Tim Bailey Nottingham University Business School
Outline
Variations on HS approach
Age weighted HS (AW-HS)
Volatility Adjusted HS (Hull-White)
Both approaches address the issue of older data not being representative of current risk climate.
AW-HS asserts that more recent losses are more likely to occur tomorrow
Hull-White adjusts the volatility of returns to reflect current conditions, so that the HS risk measures reflect current market conditions.
Age weighting
To see how this works, first we show how to perform Basic HS using spreadsheet style operations.
Calculate losses for each day (or week, etc.)
Calculate the probability of each loss
Order the data by loss
Cumulate the probabilities to identify the worst 5% of losses.
Basic HS setup
Set up the data frame, using the FTSE data.
d <- EuStockMarkets[,4]
ft <- diff(d)/lag(d, k = -1)
P <- 1000
n <- length(ft)
df <- data.frame(ft, loss = -P*ft, hsw = 1/n)
head(df, n=4)
## # A tibble: 4 x 3
## ft loss hsw
## *
## 1 0.00679 -6.79 0.000538
## 2 -0.00488 4.88 0.000538
## 3 0.00907 -9.07 0.000538
## 4 0.00579 -5.79 0.000538
hsw is the equally likely probability of each loss occurring tomorrow.
VaR
Now we sort everything by loss, and cumulate the probabilities to find the worst 5% of losses.
df <- df[order(df$loss, decreasing = TRUE),]
df$cumhsw <- with(df, cumsum(hsw))
df[1:5,]
## # A tibble: 5 x 4
## ft loss hsw cumhsw
## *
## 1 -0.0406 40.6 0.000538 0.000538
## 2 -0.0307 30.7 0.000538 0.00108
## 3 -0.0306 30.6 0.000538 0.00161
## 4 -0.0287 28.7 0.000538 0.00215
## 5 -0.0277 27.7 0.000538 0.00269
Looking down the cumhsw column for 0.05, there is still some way to go.
VaR
df[6:16,]
## # A tibble: 11 x 4
## ft loss hsw cumhsw
## *
## 1 -0.0261 26.1 0.000538 0.00323
## 2 -0.0251 25.1 0.000538 0.00377
## 3 -0.0241 24.1 0.000538 0.00430
## 4 -0.0233 23.3 0.000538 0.00484
## 5 -0.0231 23.1 0.000538 0.00538
## 6 -0.0225 22.5 0.000538 0.00592
## 7 -0.0224 22.4 0.000538 0.00646
## 8 -0.0223 22.3 0.000538 0.00699
## 9 -0.0221 22.1 0.000538 0.00753
## 10 -0.0218 21.8 0.000538 0.00807
## 11 -0.0215 21.5 0.000538 0.00861
VaR
Eventually
df[92:95,]
## # A tibble: 4 x 4
## ft loss hsw cumhsw
## *
## 1 -0.0125 12.5 0.000538 0.0495
## 2 -0.0125 12.5 0.000538 0.0500
## 3 -0.0125 12.5 0.000538 0.0506
## 4 -0.0125 12.5 0.000538 0.0511
So we see the 95% VaR is about 12.5 by this method.
We can do something like the following to identify 'automatically' the loss at the quantile closest to the 95% quantile.
alpha <- 0.05
row.idx <- which.min(abs(df$cumhsw - alpha))
df[row.idx, 'loss']
## [1] 12.5
ES
We can find this analogously, using the probability weights.
VaR <- df[row.idx, 'loss']
df2 <- df[df$loss > VaR,]
df2$nw <- with(df2, hsw/sum(hsw))
with(df2, sum(nw * loss))
## [1] 16.82
Note that we have reweighted the tail probabilities so they add to one.
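As a one-line check (using the objects defined above), the rescaled probabilities sum to one by construction:
with(df2, sum(nw))
## [1] 1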
Of course, this works too
with(df2, mean(loss))
## [1] 16.82
but the former approach makes it easier to follow the Age-weighted approach discussed later.
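For comparison with the age-weighted function defined later, the basic HS steps can be wrapped into a function. This is a minimal sketch (the name HS_basic is ours, not from the slides); it reproduces the VaR and ES values above.
HS_basic <- function(r, P = 1000, ci = 0.95){
  alpha <- 1 - ci
  # losses sorted worst first, each with probability 1/n
  loss <- sort(-P * r, decreasing = TRUE)
  cumhsw <- cumsum(rep(1/length(r), length(r)))
  # VaR: loss at the cumulative probability closest to alpha
  VaR <- loss[which.min(abs(cumhsw - alpha))]
  # ES: mean of the losses beyond VaR
  ES <- mean(loss[loss > VaR])
  c(VaR = VaR, ES = ES)
}
HS_basic(ft)  # approx. VaR = 12.5, ES = 16.82, as above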
Age-weighted HS
Dowd’s book covers Age-weighted Historical Simulation.
Basic HS assumes each loss in the past is equally likely to occur tomorrow
AW-HS says the recent past is more likely to recur
Hence more recent losses are given higher probabilities
Weights are given by
$$w_i = \frac{(1-\lambda)\lambda^{i-1}}{1-\lambda^n}$$
where λ is a constant (0.98 often suggested by authors) and n is the sample size.
Weights
[Figure: weight w_i plotted against age (0 to 100) for lam = 0.95 and lam = 0.98; the weights decay geometrically with age.]
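A snippet like the following reproduces the figure (a sketch; the sample size n = 100 is assumed for illustration):
n <- 100                      # illustrative sample size (assumed)
age <- 1:n
w <- function(lam) (1 - lam) * lam^(age - 1) / (1 - lam^n)
plot(age, w(0.95), type = "l", lty = 1, xlab = "age", ylab = "w")
lines(age, w(0.98), lty = 2)
legend("topright", c("lam = 0.95", "lam = 0.98"), lty = 1:2)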
Basic HS setup
d <- EuStockMarkets[,4]
ft <- diff(d)/lag(d, k=-1)
P <- 1000; ci <- 0.95
alpha <- 1-ci
n <- length(ft)
df <- data.frame(age = seq(n,1), ft, loss = -P*ft,
                 hsw = 1/n)
head(df)
## # A tibble: 6 x 4
## age ft loss hsw
## *
## 1 1859 0.00679 -6.79 0.000538
## 2 1858 -0.00488 4.88 0.000538
## 3 1857 0.00907 -9.07 0.000538
## 4 1856 0.00579 -5.79 0.000538
## 5 1855 -0.00720 7.20 0.000538
## 6 1854 0.00855 -8.55 0.000538
Add probability weights depending on age
‘Ancient’ losses are unlikely to recur.
lam <- 0.98
df$w <- with(df, ((1-lam)*lam^(age-1))/(1-lam^n))
head(df)
## # A tibble: 6 x 5
## age ft loss hsw w
## *
The w values for these oldest observations are vanishingly small (of the order 1e-18).
Probabilities of more recent losses
Here we see a probability of 2% attached to the most recent loss.
tail(df)
## # A tibble: 6 x 5
## age ft loss hsw w
## *
## 1 6 0.0154 -15.4 0.000538 0.0181
## 2 5 -0.0163 16.3 0.000538 0.0184
## 3 4 -0.0277 27.7 0.000538 0.0188
## 4 3 0.00541 -5.41 0.000538 0.0192
## 5 2 -0.0115 11.5 0.000538 0.0196
## 6 1 0.0103 -10.3 0.000538 0.0200
VaR
Now we sort everything by loss, and cumulate the probabilities to find the worst 5% of losses.
df <- df[order(df$loss, decreasing = TRUE),]
df$cumw <- with(df, cumsum(w))
df$cumhsw <- with(df, cumsum(hsw))
df[1:5,]
## # A tibble: 5 x 7
## age ft loss hsw w cumw cumhsw
## *
## 1 1530 -0.0406 40.6 0.000538 7.69e-16 7.69e-16 0.000538
## 2 1825 -0.0307 30.7 0.000538 1.98e-18 7.71e-16 0.00108
## 3 212 -0.0306 30.6 0.000538 2.82e- 4 2.82e- 4 0.00161
## 4 171 -0.0287 28.7 0.000538 6.45e- 4 9.27e- 4 0.00215
## 5 4 -0.0277 27.7 0.000538 1.88e- 2 1.98e- 2 0.00269
VaR
df[6:12,]
## # A tibble: 7 x 7
## age ft loss hsw w cumw cumhsw
## *
## 1 210 -0.0261 26.1 0.000538 2.93e- 4 0.0200 0.00323
## 2 261 -0.0251 25.1 0.000538 1.05e- 4 0.0201 0.00377
## 3 80 -0.0241 24.1 0.000538 4.05e- 3 0.0242 0.00430
## 4 18 -0.0233 23.3 0.000538 1.42e- 2 0.0384 0.00484
## 5 1560 -0.0231 23.1 0.000538 4.19e-16 0.0384 0.00538
## 6 47 -0.0225 22.5 0.000538 7.90e- 3 0.0463 0.00592
## 7 1081 -0.0224 22.4 0.000538 6.69e-12 0.0463 0.00646
VaR
Eventually
df[20:25,]
## # A tibble: 6 x 7
## age ft loss hsw w cumw cumhsw
## *
## 1 1678 -0.0204 20.4 0.000538 3.87e-17 0.0466 0.0108
## 2 1246 -0.0201 20.1 0.000538 2.39e-13 0.0466 0.0113
## 3 156 -0.0189 18.9 0.000538 8.73e- 4 0.0475 0.0118
## 4 177 -0.0185 18.5 0.000538 5.71e- 4 0.0481 0.0124
## 5 8 -0.0181 18.1 0.000538 1.74e- 2 0.0654 0.0129
## 6 544 -0.0178 17.8 0.000538 3.44e- 7 0.0654 0.0134
So we see the 95% VaR is about 18.5 by this method.
With Basic HS we are only at the 98.8% quantile at this value!
ES
We can find this analogously, using the probability weights.
df2 <- df[df$loss > 18.5,]
df2$nw <- with(df2, w/sum(w))
with(df2, sum(nw * loss))
## [1] 25.03
Note the need to reweight the tail probabilities so they add to one.
Age-weighted HS function
HS_aw <- function(r, P = 1000, lam = 0.98, ci = 0.95){
  alpha <- 1 - ci
  n <- length(r)
  # losses, with age 1 = most recent observation
  df <- data.frame(age = seq(n, 1), r, loss = -P*r)
  # age weights, as defined above
  df$w <- with(df, ((1-lam)*lam^(age-1))/(1-lam^n))
  # sort by loss and cumulate the weights
  df <- df[order(df$loss, decreasing = TRUE),]
  df$cumw <- with(df, cumsum(w))
  # VaR: loss at the cumulative weight closest to alpha
  VaR <- df[which.min(abs(df$cumw - alpha)), 'loss']
  # ES: weighted mean of the tail losses, weights rescaled to sum to one
  df2 <- df[df$loss > VaR,]
  df2$nw <- with(df2, w/sum(w))
  ES <- with(df2, sum(nw * loss))
  res.vec <- c(VaR = VaR, ES = ES)
  names(res.vec) <- paste0(names(res.vec), 100*ci)
  return(res.vec)
}
HS_aw(ft)
## VaR95 ES95
## 18.48 25.03
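A smaller λ concentrates the weight on the most recent observations, so the risk measures react faster to recent turbulence and discard old data more aggressively. For example, one could compare:
HS_aw(ft, lam = 0.95)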
Volatility Adjusted Loss Distribution
Hull and White (1998) suggested this approach.
Problem: HS-generated risk measure estimates may be distorted if returns have non-constant volatility.
We would like returns to be adjusted to have constant volatility, and
the volatility should reflect today's trading environment.
How?
Use GARCH to transform the returns so they possess today's volatility, then map to losses and calculate risk measures via Historical Simulation.
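A minimal sketch of this adjustment, assuming the rugarch package (the GARCH(1,1) specification below and the use of our HS_basic function from earlier are illustrative choices, not code from the slides):
library(rugarch)
# fit a GARCH(1,1) to the returns
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0), include.mean = FALSE))
fit <- ugarchfit(spec, as.numeric(ft))
sig <- as.numeric(sigma(fit))    # conditional volatility at each date
# rescale every return to carry today's volatility (Hull-White adjustment)
ft.adj <- as.numeric(ft) * tail(sig, 1) / sig
# then apply basic HS to the volatility-adjusted losses
HS_basic(ft.adj)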