
FMO6
Lecture 11 – Improvements and Revision
Dr John Armstrong, King's College London, August 3, 2016

FMO6
Improvements
Improving numerical methods
We’ll discuss
Richardson Extrapolation which can be used to improve many numerical methods
Four methods of improving Monte Carlo methods:
Antithetic sampling (which we've seen already)
Importance sampling
Control variate method.
Quasi Monte Carlo

FMO6
Improvements
Richardson Extrapolation
Richardson Extrapolation
Suppose we have a numerical method to compute y* using some function y depending on a parameter ε:
y* = lim_{ε→0} y(ε)
Suppose moreover we know that
y(ε) = y* + Cε^n + o(ε^(n+1))
for some constant C.
By comparing two estimates for y* and taking a linear combination we can get the ε^n term to cancel, giving us a new method with faster convergence:
y_k(ε) = (k^n y(ε) − y(kε)) / (k^n − 1) = y* + o(ε^(n+1))

FMO6
Improvements
Richardson Extrapolation
Example
We wish to compute an integral of f : [a,b] → R by the trapezium rule.
Define ε = (b − a)/N where N is the number of steps.
Compute an estimate I1 for the integral using N steps, ε1 = (b − a)/N.
Compute an estimate I2 for the integral using 2N steps, ε2 = (1/2)ε1.
So take k = ε2/ε1 = 1/2 in the Richardson method.
The trapezium rule converges at rate n = 2. The new estimate is:
I_R = ((1/2)^2 I1 − I2) / ((1/2)^2 − 1) = −(1/3) I1 + (4/3) I2

FMO6
Improvements
Richardson Extrapolation
MATLAB implementation of Richardson Extrapolation
% Perform integration by the trapezium rule then apply Richardson
% extrapolation to obtain estimates with improved convergence
function richardsonEstimate = integrateRichardson( f, a, b, N )
h1 = (b-a)/N;
h2 = (b-a)/(2*N);
estimate1 = integrateTrapezium( f,a,b,N);
estimate2 = integrateTrapezium( f,a,b,2*N);
k = h2/h1;
n = 2;
richardsonEstimate = (k^n*estimate1 - estimate2)/(k^n-1);
end

FMO6
Improvements
Richardson Extrapolation
Interpretation
I_R = −(1/3) I1 + (4/3) I2
I2 is computed using the trapezium rule with 2N steps. Writing h for the fine step size and x0, ..., x_{2N} for the fine grid points:
I2 = (h/2) (f(x0) + 2f(x1) + 2f(x2) + ... + 2f(x_{2N−1}) + f(x_{2N}))
I1 is computed using N steps (step size 2h, every other grid point):
I1 = h (f(x0) + 2f(x2) + ... + 2f(x_{2N−2}) + f(x_{2N}))
So I_R is given by:
I_R = (h/3) (f(x0) + 4f(x1) + 2f(x2) + ... + 2f(x_{2N−2}) + 4f(x_{2N−1}) + f(x_{2N}))
So Richardson extrapolation in this example is equivalent to Simpson’s rule.

FMO6
Improvements
Richardson Extrapolation
Importance of Richardson’s rule
Richardson's rule can be applied to many numerical methods, e.g. integration methods and finite difference methods.
You can generalize it to cancel higher orders, or simply apply it more than once.
Unfortunately it cannot be used for Monte Carlo pricing as the error term is not a constant multiple of ε^(1/2).
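To illustrate applying Richardson extrapolation more than once, here is a minimal sketch (not from the lecture notes) of one step of repeated extrapolation on trapezium estimates; integrateTrapezium is the routine used earlier, and the function name integrateRichardsonTwice is a choice made here.

% Apply Richardson extrapolation twice to trapezium estimates
% (one step of Romberg integration)
function estimate = integrateRichardsonTwice( f, a, b, N )
I1 = integrateTrapezium( f, a, b, N );    % step size h
I2 = integrateTrapezium( f, a, b, 2*N );  % step size h/2
I3 = integrateTrapezium( f, a, b, 4*N );  % step size h/4
% First level: cancel the h^2 error term (n = 2, k = 1/2)
R12 = (4*I2 - I1)/3;
R23 = (4*I3 - I2)/3;
% Second level: the remaining error is O(h^4), so extrapolate with n = 4
estimate = (16*R23 - R12)/15;
end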

FMO6
Improvements
Antithetic Sampling
Revision: Antithetic Sampling
Suppose we have a Monte Carlo pricer based on drawing n normally distributed random numbers εi
It is often better to compute the price using a sample based on εi and −εi rather than to use a sequence of 2n independent random variables.
Theory: If X1 and X2 are random variables with E(X1) = E(X2), then
E(X1) = E((X1 + X2)/2)
But
Var((X1 + X2)/2) = (1/4) (Var(X1) + Var(X2) + 2 Cov(X1, X2))
Let X1 be the estimate based on the n variables εi. Let X2 be the estimate based on −εi. We will often have that Cov(X1, X2) is negative, so the variance of the combined estimate is reduced.

FMO6
Improvements
Antithetic Sampling
MATLAB implementation of Antithetic sampling
% Price a call option by antithetic sampling
function [price,errorEstimate] = callAntithetic( K, T, ...
                                                 S0, r, sigma, ...
                                                 nPaths )
logS0 = log(S0);
epsilon1 = randn( nPaths/2, 1 );
epsilon2 = -epsilon1;
logST1 = logS0 + (r-0.5*sigma^2)*T + sigma*sqrt(T)*epsilon1;
logST2 = logS0 + (r-0.5*sigma^2)*T + sigma*sqrt(T)*epsilon2;
ST1 = exp( logST1 );
ST2 = exp( logST2 );
discountedPayoffs1 = exp(-r*T)*max(ST1-K,0);
discountedPayoffs2 = exp(-r*T)*max(ST2-K,0);
price = mean(0.5*(discountedPayoffs1+discountedPayoffs2));
errorEstimate = std(0.5*(discountedPayoffs1+discountedPayoffs2))/sqrt(nPaths/2);
end

FMO6
Improvements
Antithetic Sampling
Antithetic Sampling Results
Parameters: S0 = 100, K = 100, σ = 0.2, r = 0.14, T = 1, N = 10000
Results:

Method                  Price    Standard error estimate
Black–Scholes Formula   3.0679
Naive Monte Carlo       3.0794   0.197
Antithetic Sampling     3.0771   0.054

Conclusion: Antithetic sampling is easy to implement and often rather effective.

FMO6
Improvements
Importance Sampling
Importance Sampling
Monte Carlo pricing is an integration method.
You can use substitution to change one integral to another integral by re-parameterizing.
Equivalently, you can change the distribution from which you draw your samples, so long as you apply appropriate weights to correct for this.
Monte Carlo integration is exact when the price function is constant
If we can re-parameterize so the price function is nearer to being constant, we will have reduced the variance of the Monte Carlo algorithm.

FMO6
Improvements
Importance Sampling
Importance Sampling Example
Suppose we want to price a far out of the money knock out call option
Suppose that for 99% of price paths the option will end out of the money
This means that 99% of price paths in the Monte Carlo calculation will give us no information.
Instead: find a way to generate the 1% of price paths where the option ends up in the money; compute the expectation for these paths; re-weight by multiplying by 100.
For simplicity, let’s do this for a vanilla call option to see how it improves upon ordinary Monte Carlo.

FMO6
Improvements
Importance Sampling
Calculation
Generate stock prices at time T using the formula:
log(ST) = log(S0) + (r − (1/2)σ^2) T + σ √T N^(−1)(u)
where u is uniformly distributed on [0, 1].
The option is in the money only if log(ST) ≥ log(K). Equivalently, only if
u ≥ umin := N( (log(K) − log(S0) − (r − (1/2)σ^2) T) / (σ √T) )
So only generate values u on the interval [umin, 1], then multiply the resulting expectation by 1 − umin to account for the fact that we have only generated a proportion 1 − umin of the possible samples.
We know the other samples would have given 0 for the option payoff.

FMO6
Improvements
Importance Sampling
MATLAB implementation of Importance Sampling
% Price a call option by importance sampling
% (Obviously this is somewhat unrealistic since we can already
% price call options analytically, but importance sampling can
% be applied to more interesting options)
function [price,error] = callImportance( K, T, ...
                                         S0, r, sigma, ...
                                         nPaths )
logS0 = log(S0);
% Generate random numbers u on the interval [lowestU,1]
lowestU = normcdf( (log(K)-logS0 - (r-0.5*sigma^2)*T)/(sigma*sqrt(T)) );
u = rand(nPaths,1)*(1-lowestU)+lowestU;
% Now generate stock paths using norminv( u ). lowestU was chosen
% so that the lowest possible stock price obtained is K. Note that
% we are only considering a certain proportion of possible stock prices
logST = logS0 + (r-0.5*sigma^2)*T + sigma*sqrt(T)*norminv(u);
ST = exp( logST );
discountedPayoff = exp(-r*T)*(ST-K);
% Since we only simulate a certain proportion of prices, the true
% expectation of the final option value must be weighted by that proportion
proportion = 1-lowestU;
price = mean(discountedPayoff)*proportion;
error = std(discountedPayoff)*proportion/sqrt(nPaths);
end

FMO6
Improvements
Importance Sampling
Importance Sampling Results
Parameters: S0 = 100, K = 200, σ = 0.2, r = 0.14, T = 1, n = 1000.
Note that this is far out of the money, so naive Monte Carlo will perform badly.
Results:
Method                  Price     Standard Error
Black–Scholes Formula   0.02241
Naive Monte Carlo       0.05960   0.03469
Importance Sampling     0.02122   0.00066

Conclusions: Importance sampling is more difficult to implement than antithetic sampling, but can produce excellent improvement for far out of the money options.

FMO6
Improvements
Control Variate Method
The Control Variate Method – Idea
Suppose that we wish to price a Knock Out option
We have an analytic formula for the price of a Call Option with the same strike.
Maybe, rather than pricing a Knock Out option directly, it would be a better idea to estimate the difference between the price of a Knock Out option and the price of the Call Option using Monte Carlo instead.
Price of Knock Out Option ≈ Price of Call Option + Estimate of difference
Because the difference is probably smaller than the price we're trying to estimate, the variability in a Monte Carlo estimate of the difference is probably lower than the variability in a Monte Carlo estimate of the price.

FMO6
Improvements
Control Variate Method
Control Variate – Example that proves it can work
Consider the extreme case of pricing a knock out option where the barrier is so high it will very rarely be hit.
In the control variate method, we will estimate that the difference between the call price and the option price is zero even if we use a tiny sample (e.g. a sample of one).
The control variate method will converge to the exact answer immediately.
The naive method will be no more accurate than pricing a call by Monte Carlo, so only converges slowly.

FMO6
Improvements
Control Variate Method
The Control Variate method
Suppose we have a random variable M with E(M) = μ and wish to find μ.
Suppose we have another random variable T with E(T) = τ, with τ known.
Define M* = M + c(T − τ). Then E(M*) = μ too, for any c ∈ R. Our previous example was the special case c = −1.
Var(M*) = Var(M) + c^2 Var(T) + 2c Cov(M, T)
Choose c to minimize this:
c = −Cov(M, T) / Var(T)
Then Var(M*) = (1 − ρ^2) Var(M), where ρ is the correlation between M and T.

FMO6
Improvements
Control Variate Method
Control Variate method, worked example
Let us price a Call Option by Monte Carlo
We expect the price of a Call Option to be correlated with the price of the stock, so let’s use the stock price as our control variate.

FMO6
Improvements
Control Variate Method
function [price,errorEstimate] = callControlVariate( K, T, ...
                                                     S0, r, sigma, ...
                                                     nPaths )
% Usual pricing code
logS0 = log(S0);
epsilon = randn( nPaths,1 );
logST = logS0 + (r-0.5*sigma^2)*T + sigma*sqrt(T)*epsilon;
ST = exp( logST );
discountedPayoffs = exp(-r*T)*max(ST-K,0);
% Standard formula for control variate method
m = discountedPayoffs;
t = exp(-r*T)*ST;
tau = S0;
covMatrix = cov(m,t);
c = -covMatrix(1,2)/covMatrix(2,2);
mStar = m + c*(t-tau);
% Result
price = mean(mStar);
errorEstimate = std(mStar)/sqrt(nPaths);
end

FMO6
Improvements
Control Variate Method
Control Variate Results
Parameters: S0 = 100, K = 100, σ = 0.2, r = 0.14, T = 1, n = 1000.
Results:
Method                  Result    Standard Error
Black–Scholes Formula   15.721
Naive Monte Carlo       16.263    0.564
Control variate         15.723    0.142

Conclusions: The control variate technique is easy to implement. It can produce significant improvements in the Monte Carlo price.

FMO6
Improvements
Low Discrepancy Sequences
Low Discrepancy Sequences
In one dimension, evenly distributed sample points give better performance than Monte Carlo sampling (i.e. the rectangle rule is faster than Monte Carlo).
In high dimensions there are better sets of sample points available.
Given an N and a dimension d, you can generate a "low discrepancy sequence" of N points in [0, 1]^d so that if you wish to estimate the integral of a function f : [0, 1]^d → R you will get a better estimate by sampling at the points in the low discrepancy sequence than you would by Monte Carlo.
Using a low discrepancy sequence rather than pseudo-random numbers is called quasi-Monte Carlo. Low discrepancy sequences are also called quasi-random numbers.
Quasi Monte Carlo converges faster than Monte Carlo, but you have to be careful: you can only guarantee better results as N tends to infinity and you want good results for low N.
See Joshi, "More Mathematical Finance" (or many other sources) for details on how to use low discrepancy sequences in practice.

FMO6
Improvements
Low Discrepancy Sequences
Generating low discrepancy sequences in MATLAB
d = 2;
s = sobolset(d);
points = net(s,1000);
You can replace sobolset with haltonset for another low discrepancy sequence.
Try plotting a scatter plot of the results and comparing with the halton version and genuine random numbers.
Aside: this is an example of object oriented MATLAB code. We are creating a complex data object and then calling special functions to work with that kind of data object
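To show how these points plug into pricing, here is a minimal sketch (not from the lecture notes) of a quasi-Monte Carlo call pricer; the function name and the choice to skip the first Sobol point are assumptions made here.

% Price a call option by quasi-Monte Carlo using a Sobol sequence.
% The first Sobol point is skipped because it equals 0 and norminv(0) = -Inf.
function price = callQuasiMonteCarlo( K, T, S0, r, sigma, nPaths )
s = sobolset(1,'Skip',1);
u = net(s, nPaths);         % nPaths quasi-random points in (0,1)
epsilon = norminv(u);       % map to standard normal quantiles
logST = log(S0) + (r-0.5*sigma^2)*T + sigma*sqrt(T)*epsilon;
discountedPayoffs = exp(-r*T)*max(exp(logST)-K,0);
price = mean(discountedPayoffs);
end

Note there is no standard error estimate: the quasi-random points are not independent, so the usual sample standard deviation formula does not apply.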

FMO6
Improvements
Summary
Summary of improvements to numerical methods
With little effort you can use techniques such as Richardson extrapolation to improve the convergence of a numerical method.
There are various "variance reduction" techniques available for Monte Carlo. If you need to speed up your Monte Carlo pricer why not try all of them at once?

FMO6
Improvements
Summary
Exercise 1
A stock follows the Black Scholes model with S0 = 1, σ = 0.2 and μ = 0.08. The risk free rate is 0.05. An investor has 1 dollar to invest for a time period of 1 year and wishes to optimize their expected utility. Their utility function is
u(x) = ln(x) if x > 0
u(x) = −∞ if x ≤ 0
Compute their expected utility:
By the Monte Carlo method
By the Monte Carlo method with antithetic sampling
By the Monte Carlo method with a control variate of your choice
Using a low discrepancy sequence.
Use the rectangle rule.
Compare the errors of these approaches

FMO6
Improvements
Summary
Exercise 2
Recall that you can compute the area of the unit circle using a Monte Carlo method. Simply generate uniformly distributed points in [−1, 1] × [−1, 1] and count how many lie in the circle. Implement this in MATLAB.
Which is better: using 2N uniformly distributed points in
[−1, 1] × [−1, 1] or using 2N points generated using antithetic sampling? Explain your answer.

FMO6
Improvements
Summary
Exercise 3
Use Richardson extrapolation to improve the estimator
f′(x) ≈ (f(x + h) − f(x)) / h
Use Richardson extrapolation to find an even better estimator for the derivative of a function.
Generate log-log plots to illustrate your solution.

FMO6
Numerical Integration
Question: Numerical integration
How many integration rules can you name?
What are their formulae and rate of convergence?
Can you draw a log-log plot indicating their convergence?

FMO6
Numerical Integration
Revision: Numerical integration
h = (b − a)/n.
Rectangle rule: xi = a + (i − 1/2)h
∫_a^b f ≈ h (f(x1) + f(x2) + ... + f(xn))
Error o(h^2).
Trapezium rule: xi = a + ih
∫_a^b f ≈ (h/2) (f(x0) + 2f(x1) + 2f(x2) + ... + 2f(x_{n−1}) + f(xn))
Error o(h^2).
Simpson's rule (n even): xi = a + ih
∫_a^b f ≈ (h/3) (f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + 2f(x4) + ... + 4f(x_{n−1}) + f(xn))
Error o(h^4).
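For reference, here is a minimal sketch (not the lecture's code) of the three rules above; the function name and interface are choices made here, and f must accept vector input (e.g. f = @(x) exp(x)).

% Integrate f over [a,b] with n steps using the named rule
function I = integrateRule( f, a, b, n, rule )
h = (b-a)/n;
switch rule
    case 'rectangle'   % midpoint rule
        x = a + ((1:n) - 0.5)*h;
        I = h*sum(f(x));
    case 'trapezium'
        x = a + (0:n)*h;
        w = [1, 2*ones(1,n-1), 1];
        I = (h/2)*sum(w.*f(x));
    case 'simpson'     % n must be even
        x = a + (0:n)*h;
        w = [1, repmat([4 2],1,n/2-1), 4, 1];
        I = (h/3)*sum(w.*f(x));
end
end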

FMO6
Numerical Integration
Plot of errors for integration rules
[Figure: log-log plot of the error against the number of points for the rectangle rule, trapezium rule, Simpson's rule and Monte Carlo.]

FMO6
Numerical Integration
Question: Numerical integration
What has numerical integration got to do with pricing derivatives?
What has numerical integration got to do with Monte Carlo pricing?

FMO6
Numerical Integration
Revision: Relevance of numerical integration
Risk neutral prices are just expectations in the risk neutral measure.
If qT(S) is the probability density of ST in the Q measure, then
price = e^(−rT) ∫_R qT(S) payoff(S) dS
for options whose payoff only depends on ST. qT(S) is the pricing kernel.
You can perform 1-dimensional integrals by Monte Carlo:
Choose random points x on the interval [a, b].
Compute the average of f(x) and multiply by (b − a).
Convergence is o(h^(1/2)).
For high dimensional integrals, Monte Carlo is the only practical choice because the minimum number of sample points required for other integration rules grows exponentially with dimension.
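As an illustration of the first point, here is a minimal sketch (not from the lecture) of pricing a path-independent European option by 1-dimensional integration: the payoff is integrated against the density of log(ST), which is normal under Q. The function name and the 10 standard deviation truncation are choices made here for illustration.

% Price a path-independent European option by 1-d quadrature
function price = priceByIntegration( payoff, T, S0, r, sigma )
mu = log(S0) + (r - 0.5*sigma^2)*T;   % mean of log(S_T) under Q
sd = sigma*sqrt(T);                   % standard deviation of log(S_T)
integrand = @(x) normpdf(x, mu, sd).*payoff(exp(x));
price = exp(-r*T)*integral( integrand, mu - 10*sd, mu + 10*sd );
end

% Example usage: a call with strike 100 (payoff must accept vector input)
% price = priceByIntegration( @(S) max(S-100,0), 1, 100, 0.05, 0.2 );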

FMO6
Simulation
Euler Method
Question: Simulating random variables
What is the recipe for converting a stochastic differential equation into its Euler approximation?

FMO6
Simulation
Euler Method
Revision: Simulating random variables
To simulate a stochastic differential equation, first write down its Euler approximation:
Replace dt with δt.
Replace dXt with δX.
Replace dWt^(i) with δW^(i).
Example:
ds = (μ − (1/2)σ^2) dt + σ dWt
becomes:
s_{i+1} = s_i + (μ − (1/2)σ^2) δt + σ δWt
Now replace δWt with √δt ε_i, with ε standard normally distributed:
s_{i+1} = s_i + (μ − (1/2)σ^2) δt + σ √δt ε_i
The Euler scheme is exact for Brownian motion. When simulating the log of the stock price, you can use just one time step if desired.

FMO6
Simulation
Euler Method
Revision: generateBSPaths
% Generate random price paths according to the black scholes model
% from time 0 to time T. There should be nSteps in the path and
% nPaths different paths
function [ S, times ] = generateBSPaths( …
T, S0, mu, sigma,nPaths, nSteps )
dt = T/nSteps;
logS0 = log( S0);
W = randn( nPaths, nSteps );
dlogS = (mu-0.5*sigma^2)*dt + sigma*sqrt(dt)*W;
logS = logS0 + cumsum( dlogS, 2);
S = exp(logS);
times = dt:dt:T;
end

FMO6
Simulation
Cholesky Decomposition
Revision: Correlated random variables
A pseudo square root of a positive definite symmetric matrix A is a matrix B with BB^T = A.
The Cholesky decomposition of A is the unique lower triangular pseudo-square root, L, with positive diagonal. A = LLT .
If x is a vector of independent N(0,1) variables, then Lx is multivariate normal with mean 0 and covariance matrix A.
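A minimal sketch (not in the lecture notes) of how this is used in MATLAB; note that chol returns an upper triangular factor by default, so we ask for the lower triangular one. The covariance matrix below is just an example.

A = [1.0 0.5; 0.5 2.0];   % desired covariance matrix (example choice)
L = chol(A, 'lower');     % lower triangular pseudo square root, L*L' = A
x = randn(2, 10000);      % independent N(0,1) samples, one column per draw
y = L*x;                  % columns of y are N(0, A) samples
cov(y')                   % should be close to A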

FMO6
Using simulations
Question: Simulations
What can we usefully do with our simulations?

FMO6
Using simulations
Revision: Using simulations
We can use our simulations for:
Pricing
Testing strategies
Optimization
Risk management

FMO6
Using simulations
Monte Carlo Pricing
Question: Monte Carlo Pricing
How do you use Monte Carlo methods to price an option?
Give one way of computing the delta by Monte Carlo.
What kinds of option can you / can’t you price by Monte Carlo?
How fast is Monte Carlo?

FMO6
Using simulations
Monte Carlo Pricing
Revision: Monte Carlo Pricing
Generate paths in the risk neutral measure
The discounted sample mean is an estimate for the price
The sample standard deviation divided by the square root of the number of paths is an estimate for the error
Use the same random numbers if estimating the delta by comparing two nearby Monte Carlo prices.
Seeding the random number generator is one way to do this.
You can price (discrete time) Knock Out, Knock In and Asian options.
You can price European path-independent options, though 1-d integration is faster.
You can’t price American options.
Convergence is of order o(n^(−1/2)).
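To illustrate the point about reusing random numbers, here is a minimal sketch (not the lecture's code); priceCallByMonteCarlo is a hypothetical pricer assumed to have the obvious signature, and the seed value and bump size are arbitrary.

% Estimate delta by central differencing of two Monte Carlo prices, using the
% same seed so that both prices are computed from the same random numbers
function delta = deltaByMonteCarlo( K, T, S0, r, sigma, nPaths, h )
rng(1);
priceUp = priceCallByMonteCarlo( K, T, S0+h, r, sigma, nPaths );
rng(1);  % reset to the same seed before the second run
priceDown = priceCallByMonteCarlo( K, T, S0-h, r, sigma, nPaths );
delta = (priceUp - priceDown)/(2*h);
end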

FMO6
Using simulations
Testing strategies
Question: Testing strategies
If a trader decides to write a call option and then delta hedge to ensure they can fulfil their obligation, what is their bank balance at each time?

FMO6
Using simulations
Testing strategies
Revision: Delta hedging
bt is the bank balance at time t. P is the amount charged.
b0 = P − Δ0 S0
bt = e^(rδt) b_{t−1} − (Δt − Δ_{t−1}) St
bn = e^(rδt) b_{n−1} + Δ_{n−1} Sn − max{Sn − K, 0}
The standard deviation of the final bank balance is of order (δt)^(1/2).
To simulate delta hedging, simulate P-measure stock prices and then use the above equations to simulate the bank balance of a delta hedger.
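A minimal sketch (not the lecture's code) of this simulation, reusing generateBSPaths from the simulation section; blackScholesDelta is a small helper written here for illustration, and charge plays the role of P.

% Simulate the final bank balance of a delta hedger who charges 'charge'
function balances = simulateDeltaHedging( K, T, S0, r, mu, sigma, ...
                                          nPaths, nSteps, charge )
[S, times] = generateBSPaths( T, S0, mu, sigma, nPaths, nSteps );
dt = T/nSteps;
delta = blackScholesDelta( S0, K, T, r, sigma );
b = charge - delta*S0*ones(nPaths,1);              % b_0 = P - Delta_0 S_0
for i = 1:nSteps-1
    newDelta = blackScholesDelta( S(:,i), K, T-times(i), r, sigma );
    b = exp(r*dt)*b - (newDelta - delta).*S(:,i);  % rehedge at time t_i
    delta = newDelta;
end
ST = S(:,end);
balances = exp(r*dt)*b + delta.*ST - max(ST-K,0);  % close out at maturity
end

function delta = blackScholesDelta( S, K, T, r, sigma )
d1 = (log(S/K) + (r+0.5*sigma^2)*T)./(sigma*sqrt(T));
delta = normcdf(d1);
end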

FMO6
Using simulations
Testing strategies
Delta hedging results
Distribution of profits when delta hedging daily and charging BS Price
[Histogram of the final profit; the horizontal axis runs from −1 to 1.]

FMO6
Using simulations
Optimization
Question: Optimization
What is meant by a utility function? Give an example.
How can you find good strategies by combining other strategies?
What is the indifference price?

FMO6
Using simulations
Optimization
Revision: Optimization
A utility function is a mapping from R to R that ascribes a subjective value to any particular payoff. Utility is usually increasing and concave.
We wish to maximize expected utility.
Given n strategies S1, S2, ..., Sn we can form a linear combination Σ_i αi Si. Generate M scenarios. Let p_im be the payoff of strategy i in scenario m. Use fmincon to minimize the expected disutility:
−(1/M) Σ_m u( Σ_i αi p_im )
The indifference price for a financial product P and strategy S is the amount you could pay for P so that your expected utility when following strategy S remains the same whether or not you buy P.
The indifference price for P is the infimum of the indifference prices over all strategies.
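A minimal sketch (not from the lecture notes) of this optimization; the function name, the equal-weight starting point and the constraint that the weights sum to one are assumptions made here for illustration.

% Choose the weights alpha by minimizing expected disutility with fmincon.
% payoffs is an M x n matrix of scenario payoffs p_im, u is the utility function.
function alpha = optimizeStrategyWeights( payoffs, u )
[~, n] = size(payoffs);
disutility = @(alpha) -mean( u( payoffs*alpha ) );
alpha0 = ones(n,1)/n;        % start from equal weights
Aeq = ones(1,n); beq = 1;    % weights sum to one
alpha = fmincon( disutility, alpha0, [], [], Aeq, beq );
end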

FMO6
Using simulations
Optimization
Revision: u(x) = 1 − e^(−λx)
[Figure: how much of each strategy is optimal? Quantity of each strategy (delta hedge, stop loss, no hedge, invest in stock) plotted against lambda (risk aversion).]

FMO6
Risk Management
Question: Risk Management
Name three methods of computing/approximating V@R. What's good/bad about each?

FMO6
Risk Management
Revision: V@R
Monte Carlo VaR
Good Points: Accurate answers, assuming the model is correct.
Bad Points: Slow; choice of model is subjective.
Parametric VaR
Good Points: Fast
Bad Points: Inaccurate for nonlinear products. Choice of model is subjective.
Historic VaR
Good Points: Not subjective
Bad Points: Limited by availability of data and by the assumption that the future is the same as the past.

FMO6
Risk Management
Revision: VaR implementation details
For Monte Carlo V@R and parametric V@R you need to estimate model parameters.
Choice of drift is not so important, e.g. choose so that log S is driftless.
Take an EWMA for the volatility:
σ^2 = 365 × (1 − λ)/(1 − λ^(m+1)) × Σ_{j=0}^{m} λ^j r_{i−j}^2
where r contains daily log returns.
Use a smaller λ for shorter time horizons.
For historic V@R use historic data to find a series of historic daily log returns. Scale by √n to generate n-day returns. Otherwise the same as Monte Carlo.
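A minimal sketch (not the lecture's code) of the EWMA volatility estimate above; logReturns is assumed to hold daily log returns with the most recent return last.

% EWMA estimate of annualized volatility from daily log returns
function sigma = ewmaVolatility( logReturns, lambda )
r = flipud( logReturns(:) );   % r(1) is the most recent return r_i
m = length(r) - 1;
weights = lambda.^(0:m)';      % lambda^j for j = 0,...,m
variance = 365 * (1-lambda)/(1-lambda^(m+1)) * sum( weights .* r.^2 );
sigma = sqrt(variance);
end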

FMO6
Risk Management
Revision: Parametric V@R
If P^(a) are the risk factors:
p^(a) = log(P^(a))
Σ = cov(p) is the covariance matrix
d is the number of days
V is the security we're trying to calculate VaR for. Define
δ^(a) = P^(a) ∂V/∂P^(a)
Parametric V@R is
VaR ≈ N^(−1)(p/100) √( (d/365) δ^T Σ δ )
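A minimal sketch (not from the lecture) of evaluating this formula; deltas holds the quantities δ^(a), Sigma is the covariance matrix of the log risk factors (treated here as annualized, which is an assumption), d is the horizon in days and p the percentile, e.g. 99.

% Parametric V@R from the deltas and the covariance matrix of log risk factors
function VaR = parametricVaR( deltas, Sigma, d, p )
deltas = deltas(:);
VaR = norminv(p/100) * sqrt( (d/365) * (deltas' * Sigma * deltas) );
end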

FMO6
Risk Management
VaR versus CVaR etc.
Good: VaR is easy to understand, VaR estimates are easy to back test. Banks have already implemented VaR systems.
Bad: VaR is not sub-additive, hence not a coherent risk measure.
Good: CVaR is a coherent risk measure. It is convex, so good for optimizations.
Bad: CVaR is harder to calculate than VaR. It is harder to test.
Bad for both: may lead to herd behaviour, false sense of security.

FMO6
Finite difference methods
Question: Finite difference methods
What can you price with finite difference methods?
What can't you price with finite difference methods? (as taught in this course)

FMO6
Finite difference methods
Revision: finite difference methods
Approximate the PDE with a finite difference method and work backwards in time from the final payoff to the current price. The precise method depends on the choice of PDE and the choice of stencil.
Black–Scholes PDE:
Vt + (1/2)σ^2 S^2 VSS + rS VS − rV = 0
Negative time heat equation:
Wt = −(1/2)σ^2 Wxx
with the change of variables
W = e^(−rt) V
x = −(r − (1/2)σ^2) t + log(S)

FMO6
Finite difference methods
Revision: stencils
Forward difference: f′(x) ≈ (f(x + h) − f(x)) / h
Backward difference: f′(x) ≈ (f(x) − f(x − h)) / h
Central difference: f′(x) ≈ (f(x + h) − f(x − h)) / (2h)
Second derivative: f′′(x) ≈ (f(x + h) − 2f(x) + f(x − h)) / h^2
[Stencil diagrams for the EXPLICIT, IMPLICIT and CRANK–NICOLSON schemes.]

FMO6
Finite difference methods
Revision: Simplest case of finite difference
Take the heat equation and use the explicit scheme:
W_{i−1,j} = λ W_{i,j+1} + (1 − 2λ) W_{i,j} + λ W_{i,j−1}
λ = (1/2) σ^2 δt / δx^2
Only stable if (1 − 2λ) > 0. Unstable means that small changes in W due to rounding errors result in wildly changing values in W in earlier time steps.
Interpretation: only stable if we can see this as a trinomial tree with λ, (1 − 2λ) and λ being probability of moving up, staying same or moving down.
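A minimal sketch (not the lecture's code) of stepping this scheme backwards in time; W is a vector of values over the space grid at one time step, and simply holding the boundary values fixed is an assumption made here for illustration.

% One backward time step of the explicit scheme for the heat equation
function Wprev = explicitStepBack( W, sigma, dt, dx )
lambda = 0.5*sigma^2*dt/dx^2;   % need 1 - 2*lambda > 0 for stability
Wprev = W;
Wprev(2:end-1) = lambda*W(3:end) + (1-2*lambda)*W(2:end-1) + lambda*W(1:end-2);
end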

FMO6
Finite difference methods
Revision: Boundary conditions
For a call option, on the top boundary the call option is well approximated by a portfolio of:
one stock (value at time t is St)
an obligation to pay the strike at time T (value at time t is e^(−r(T−t)) K)
Hence V ≈ St − e^(−r(T−t)) K on the top boundary.
On bottom boundary, call option is well approximated by 0.
What are boundary conditions for put? What about when using the heat equation?

FMO6
Other models
Models other than Black Scholes
The Heston model and jump diffusion are two models we have seen.
Use the Euler method to simulate Heston.
Calibrate Q measure model to smile by using fmincon to minimize mean square distance.
You can hedge exotics by hedging using underlying and options.
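A minimal sketch (not the lecture's code) of simulating the Heston model with the Euler method; flooring the variance at zero (full truncation) is one common choice, and the parameter names are assumptions made here.

% Euler simulation of the Heston model with the variance floored at zero
function [S, V] = simulateHestonEuler( T, S0, V0, mu, kappa, theta, xi, rho, ...
                                       nPaths, nSteps )
dt = T/nSteps;
S = zeros(nPaths, nSteps+1);  V = zeros(nPaths, nSteps+1);
S(:,1) = S0;  V(:,1) = V0;
for i = 1:nSteps
    % correlated Brownian increments
    dW1 = sqrt(dt)*randn(nPaths,1);
    dW2 = rho*dW1 + sqrt(1-rho^2)*sqrt(dt)*randn(nPaths,1);
    Vpos = max(V(:,i), 0);
    S(:,i+1) = S(:,i) + mu*S(:,i)*dt + sqrt(Vpos).*S(:,i).*dW1;
    V(:,i+1) = V(:,i) + kappa*(theta - Vpos)*dt + xi*sqrt(Vpos).*dW2;
end
end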

FMO6
Summary
Summary
That’s not the whole course, but it’s a lot of it on only a few slides.
This is how you should revise: write super-condensed summary notes, then learn them.

FMO6
School’s Out!
School’s Out!