FMO6 Lecture 9
Dr John Armstrong, King's College London, August 3, 2016
The implicit method
Estimating derivatives
There are many formulae for estimating derivatives numerically. For the first derivative alone we have:
Forward difference:
$$f'(x) \approx \frac{f(x+h) - f(x)}{h}$$
Backward difference:
$$f'(x) \approx \frac{f(x) - f(x-h)}{h}$$
Central difference:
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$$
Higher order estimates, e.g.
$$f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + o(h^4)$$
Graphical representation
Remarks
The forward difference and backward difference are only accurate to first order.
The central difference is accurate to second order (essentially because the formula is exact for quadratics).
You can create schemes with arbitrarily high order of convergence if f is sufficiently smooth and you are willing to perform sufficiently many evaluations of f.
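As a quick numerical check of these orders (a sketch of my own, not from the slides; the test function sin and the step sizes are arbitrary choices), halving h should roughly halve the forward difference error and quarter the central difference error:

f = @sin;                          % test function, so f'(x) = cos(x)
x = 1.0;
exact = cos(x);
for h = [1e-1, 5e-2, 2.5e-2]
    fwd = (f(x+h) - f(x))/h;       % first order accurate
    cen = (f(x+h) - f(x-h))/(2*h); % second order accurate
    fprintf('h=%g: forward error %g, central error %g\n', ...
        h, abs(fwd - exact), abs(cen - exact));
end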
Other finite difference schemes
We wish to solve
$$\frac{\partial W}{\partial t} = -\frac{\sigma^2}{2}\,\frac{\partial^2 W}{\partial x^2}$$
with final boundary condition given at time T and appropriate boundary conditions for large and small x.
For the explicit method we took the backward estimate for the time derivative (and used the simplest estimate for the second derivative term).
For the implicit method, take the forward estimate for the time derivative.
For the Crank-Nicolson method, use a central estimate.
Explicit and implicit difference equations
Recall that we have discretized the t and x coordinates so that $W_{i,j}$ is the value of W at time point i and space point j. We have N time steps and M space steps.
Explicit method: take the backward difference in time and the simplest estimate in x:
$$\frac{W_{i,j} - W_{i-1,j}}{\delta t} = -\frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2}$$
Implicit method: take the forward difference in time and the simplest estimate in x:
$$\frac{W_{i+1,j} - W_{i,j}}{\delta t} = -\frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2}$$
Stencils
These pictures are called stencils. They summarize how we use the values of W to estimate the various derivatives.
Implicit method
To price a call option using the implicit method for the heat equation, we have the following conditions:
A difference equation:
$$\frac{W_{i+1,j} - W_{i,j}}{\delta t} = -\frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2}$$
Boundary conditions:
$$W_{i,j_{\min}} = 0$$
$$W_{i,j_{\max}} = e^{-\frac{1}{2}\sigma^2 t_i + x_{j_{\max}}} - e^{-rT}K$$
$$W_{i_{\max},j} = \max\left\{ e^{-\frac{1}{2}\sigma^2 T + x_j} - e^{-rT}K,\; 0 \right\}$$
Note: we calculated the boundary conditions last week when we transformed the Black–Scholes PDE to the heat equation.
Remarks
The explicit method gave us a formula for $W_{i-1,j}$ in terms of the values of W at time i.
The implicit method gives us M + 1 linear equations in the M + 1 unknowns $W_{i-1,j}$ in terms of the values of W at time i.
(Recall we have N time steps and M space steps, and so N + 1 time points and M + 1 space points.)
We can solve these linear equations to compute the values at time i − 1.
The method is called implicit because we don't get an explicit formula for $W_{i,j}$; instead we calculate $W_{i,j}$ as the value implied by a set of simultaneous equations.
Solving the linear equations
To solve linear equations in MATLAB one writes them in matrix form Ax = b.
The solution is then given by x = A \ b, i.e. we divide both sides by A on the left.
Our difference equation is
$$\frac{W_{i+1,j} - W_{i,j}}{\delta t} = -\frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2}$$
Rewriting:
$$W_{i+1,j} = -\lambda W_{i,j+1} + (1 + 2\lambda) W_{i,j} - \lambda W_{i,j-1}$$
where
$$\lambda = \frac{\sigma^2}{2}\,\frac{\delta t}{(\delta x)^2}$$
The simultaneous equations
For $j \in \{j_{\min}+1, j_{\min}+2, \ldots, j_{\max}-1\}$ we have
$$-\lambda W_{i,j+1} + (1 + 2\lambda) W_{i,j} - \lambda W_{i,j-1} = W_{i+1,j}$$
Boundary conditions:
$$W_{i,j_{\min}} = \mathrm{bottom}_i = 0$$
$$W_{i,j_{\max}} = \mathrm{top}_i = e^{-\frac{1}{2}\sigma^2 t_i + x_{j_{\max}}} - e^{-rT}K$$
This gives a total of M + 1 linear equations in M + 1 unknowns.
Matrix form
$$\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
-\lambda & 1+2\lambda & -\lambda & \cdots & 0 & 0 \\
0 & -\lambda & 1+2\lambda & \ddots & 0 & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & -\lambda & 1+2\lambda & -\lambda \\
0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
W_{i,j_{\min}} \\ W_{i,j_{\min}+1} \\ W_{i,j_{\min}+2} \\ \vdots \\ W_{i,j_{\max}-1} \\ W_{i,j_{\max}}
\end{pmatrix}
=
\begin{pmatrix}
\mathrm{bottom}_i \\ W_{i+1,j_{\min}+1} \\ W_{i+1,j_{\min}+2} \\ \vdots \\ W_{i+1,j_{\max}-1} \\ \mathrm{top}_i
\end{pmatrix}$$
We call this large tri-diagonal matrix A.
We write a MATLAB helper function createTridiagonal which creates a tridiagonal matrix given three vectors containing the three non-zero diagonals.
MATLAB implementation
First we initialize variables such as the vectors x and t precisely as we did for the explicit method.
x0 = log( S0 );
xMin = x0 - nSds*sqrt(T)*sigma;
xMax = x0 + nSds*sqrt(T)*sigma;
dt = T/N;
dx = (xMax-xMin)/M;
iMin = 1;
iMax = N+1;
jMin = 1;
jMax = M+1;
x = (xMin:dx:xMax)';
t = (0:dt:T);
lambda = 0.5*sigma^2 * dt/(dx)^2;
The changed code
currW=max(exp(-0.5*sigma^2*T + x) - exp(-(r*T))*K,0);
A = createTridiagonal( [0 ; -lambda*ones(M-1,1) ; 0], …
[1 ; (1+2*lambda)*ones(M-1,1) ; 1], …
[0 ; -lambda*ones(M-1,1) ; 0] );
bottom = zeros(N+1,1);
top=exp(-0.5*sigma^2 * t + x(jMax))- exp(-r*T)*K;
for i=iMax-1:-1:iMin
vector= [ bottom(i); currW((jMin+1):(jMax-1)); top(i) ];
currW= A \ vector;
end
price = currW(jMin+M/2);
currW stores the value of W at time point i; we do not need to store the entire matrix of values for W.
Note that writing [a; b; c] concatenates matrices vertically, while writing [a b c] concatenates matrices horizontally.
Advantages of the implicit method
Suppose we fix λ. Choosing δx then determines δt.
The implicit scheme is stable irrespective of λ.
The explicit scheme is stable only if (1 − 2λ) > 0.
The error of the implicit scheme is o(δt), just as for the explicit scheme.
For the explicit scheme, for moderately small δx you are forced to take a tiny value for δt to ensure stability.
For the implicit scheme we can choose δx and δt independently, so we can get good answers with a comparatively small number of time steps.
Solving the linear equations
To implement the implicit scheme, we need to solve a linear equation
Aw = v
where A is a symmetric, tri-diagonal matrix.
If we wrote a general-purpose linear equation solver using Gaussian elimination this would not take advantage of the simple form.
Let us see how to solve the equations efficiently.
Gaussian elimination by hand
A tridiagonal system of equations can be written:
$$\begin{aligned}
b_1 x_1 + c_1 x_2 &= d_1 \quad [1] \\
a_2 x_1 + b_2 x_2 + c_2 x_3 &= d_2 \quad [2] \\
a_3 x_2 + b_3 x_3 + c_3 x_4 &= d_3 \quad [3] \\
a_4 x_3 + b_4 x_4 + c_4 x_5 &= d_4 \quad [4] \\
&\;\;\vdots
\end{aligned}$$
Take $b_1$ times equation [2] and subtract $a_2$ times equation [1] to eliminate $x_1$.
This gives the new equation:
$$(b_1 b_2 - c_1 a_2)x_2 + b_1 c_2 x_3 = b_1 d_2 - a_2 d_1$$
This equation together with equations [3], [4], ... gives a new tridiagonal system in $x_2, x_3, \ldots, x_n$.
Thomas algorithm
One-dimensional tridiagonal problems are trivial to solve: $x_1 = d_1/b_1$.
Assume for induction that we have developed the Thomas algorithm for problems of dimension n.
For dimension n + 1, use the previous slide to find a tridiagonal system of dimension n in $x_2, x_3, \ldots, x_{n+1}$.
Solve this system by the Thomas algorithm (we can do so by induction).
Now use the equation
$$b_1 x_1 + c_1 x_2 = d_1$$
to solve for $x_1$.
Therefore we can solve a tridiagonal system of equations with only o(n) multiplication and addition operations.
A naive implementation of Gaussian elimination will take $o(n^3)$ steps.
Getting MATLAB to use the Thomas algorithm
We'd like MATLAB to use the Thomas algorithm.
One option is to implement it ourselves.
Another option is to use MATLAB's built-in support for the algorithm.
MATLAB will automatically use the Thomas algorithm to solve Ax = b if it detects that A is tri-diagonal.
In general, checking whether an arbitrary matrix is tri-diagonal will take $o(n^2)$ steps, so we need to give MATLAB a hint.
Sparse matrices
A sparse matrix is a matrix where most of the entries are zero.
To store a sparse matrix it is more efficient to store a list of the rows and columns that are non-zero and the values at those rows and columns than to store a large block of memory most of which is zero.
In general, the linear algebra algorithms one should use for sparse matrices are very different from the ones one uses with full (i.e. non-sparse) matrices.
We can create a sparse matrix in MATLAB using the command sparse.
When you solve the problem Ax = b in MATLAB with A a sparse matrix, it will automatically check to see whether using the Thomas algorithm is the best approach.
Creating a sparse matrix in MATLAB
Suppose that a matrix A has non-zero entries $a_{r_i,c_i}$ where $r_1, r_2, \ldots, r_n$ and $c_1, c_2, \ldots, c_n$ are some sequences of indices.
Create a vector rows containing $r_1, r_2, \ldots, r_n$.
Create a vector columns containing $c_1, c_2, \ldots, c_n$.
Create a vector values containing $a_{r_1,c_1}, a_{r_2,c_2}, \ldots, a_{r_n,c_n}$.
Create a sparse matrix A using the command
A = sparse( rows, columns, values );
In general MATLAB tries to intelligently select the best available algorithm, so you should always use a sparse matrix to store matrices which are mostly zero, giving MATLAB a hint as to how to proceed.
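For example (a small illustration of my own, not from the slides), the 3×3 tridiagonal matrix with 2 on the diagonal and −1 off it has seven non-zero entries:

rows    = [1; 1; 2; 2; 2; 3; 3];
columns = [1; 2; 1; 2; 3; 2; 3];
values  = [2; -1; -1; 2; -1; -1; 2];
A = sparse( rows, columns, values );
full(A)   % convert back to a full matrix to inspect the result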
The createTridiagonal function
%CREATETRIDIAGONAL Create a sparse tri-diagonal matrix containing
% the given upper, diagonal and lower entries.
% Each of these should be a vector of length N, the first entry of
% lower should be zero, the last entry of upper should be zero.
function A= createTridiagonal( lower, diagonal, upper )
N = length( diagonal );
rowsUpper = (1:N-1)';
colsUpper = (2:N)';
rowsDiagonal = (1:N)';
colsDiagonal = (1:N)';
rowsLower = (2:N)';
colsLower = (1:N-1)';
allRows = [rowsUpper ; rowsDiagonal ; rowsLower ];
allCols = [colsUpper ; colsDiagonal ; colsLower ];
allVals = [ upper(rowsUpper) ; diagonal ; lower(rowsLower)];
A = sparse( allRows, allCols, allVals );
end
The solveTridiagonal function
function [ x ] = solveTridiagonal( a,b,c,d )
if (length(a)==1)
x = d(1)/(b(1));
else
nextB = b(2:end);
nextB(1) = b(1)*b(2)-c(1)*a(2);
nextC = c(2:end);
nextC(1) = b(1)*c(2);
nextD = d(2:end);
nextD(1) = b(1)*d(2)-d(1)*a(2);
xRemainder = solveTridiagonal(a(2:end),nextB,nextC,nextD);
x1 = (d(1)-c(1)*xRemainder(1))/b(1);
x = [x1 ; xRemainder];
    end
end
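As a sanity check (an example of my own, not from the slides), we can solve a small tridiagonal system with solveTridiagonal and compare against MATLAB's backslash operator. Here a holds the sub-diagonal (first entry unused), b the diagonal, c the super-diagonal (last entry unused) and d the right-hand side:

a = [0; -1; -1; -1];
b = [2; 2; 2; 2];
c = [-1; -1; -1; 0];
d = [1; 0; 0; 1];
x1 = solveTridiagonal( a, b, c, d );
x2 = createTridiagonal( a, b, c ) \ d;
disp( max(abs(x1 - x2)) );   % should be zero to machine precision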
Recursion
You can write functions in MATLAB that call themselves. Writing functions in this way is called recursion.
This gives an easy implementation of solveTridiagonal that matches our inductive definition.
It's not written as efficiently as it could be, because we keep creating new vectors unnecessarily.
It isn't hard to replace the recursion with a for loop if preferred, to get a fully efficient implementation. There isn't much point in running through the details since we can use sparse matrices to achieve the same result.
The Crank-Nicolson method
For the Crank-Nicolson method one uses the stencil:
$$\frac{\partial W}{\partial t} \approx \frac{W_{i+1,j} - W_{i,j}}{\delta t}$$
$$\frac{\partial^2 W}{\partial x^2} \approx \frac{1}{2} \times \frac{W_{i+1,j+1} - 2W_{i+1,j} + W_{i+1,j-1}}{\delta x^2} + \frac{1}{2} \times \frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2}$$
Crank-Nicolson difference equations
The Crank-Nicolson method uses the average of the estimates for the second derivative at times i and i + 1.
Just as for the implicit method, when we include the boundary conditions, at each time i we get a system of M + 1 equations in the M + 1 unknowns $W_{i,j}$ in terms of the values of W at time i + 1.
For j not at the boundary:
$$\frac{\lambda}{2} W_{i+1,j+1} + (1-\lambda) W_{i+1,j} + \frac{\lambda}{2} W_{i+1,j-1} = -\frac{\lambda}{2} W_{i,j+1} + (1+\lambda) W_{i,j} - \frac{\lambda}{2} W_{i,j-1}$$
Once again this is a tridiagonal system.
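A sketch of one way to code the Crank-Nicolson time stepping (my own sketch, reusing this lecture's createTridiagonal helper and variable names, not the course's implementation; the boundary rows of the right-hand side are simply overwritten with the known boundary values):

ALeft  = createTridiagonal( [0 ; -0.5*lambda*ones(M-1,1) ; 0], ...
                            [1 ; (1+lambda)*ones(M-1,1) ; 1], ...
                            [0 ; -0.5*lambda*ones(M-1,1) ; 0] );
ARight = createTridiagonal( [0 ; 0.5*lambda*ones(M-1,1) ; 0], ...
                            [0 ; (1-lambda)*ones(M-1,1) ; 0], ...
                            [0 ; 0.5*lambda*ones(M-1,1) ; 0] );
for i=iMax-1:-1:iMin
    rhs = ARight * currW;   % known values at time i+1
    rhs(jMin) = bottom(i);  % boundary conditions at time i
    rhs(jMax) = top(i);
    currW = ALeft \ rhs;    % solve the tridiagonal system for time i
end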
Benefits of the Crank-Nicolson scheme
It is always stable irrespective of the choice of λ.
Convergence is $o(\delta t^2)$.
Pricing American options by the implicit method
One of the main selling points of the explicit finite difference method is that we can use it to price American options.
We have just seen how the implicit and Crank-Nicolson methods can be used to improve the stability and convergence of finite difference methods.
Can these techniques be applied to improve the pricing of American options?
Recap
To price an American option A by the explicit method, one assumes we can compute the price at time i + 1.
We can then use the explicit method to compute the expected value of a new option $\tilde{A}_i$ at time i that is not exercisable at time i but can be exercised at any time from i + 1 onwards.
The price of the American option is then estimated as the maximum of the immediate exercise price and the price of the option $\tilde{A}_i$.
We can now proceed to time i − 1.
Notice that this argument uses expectations and financial logic: we haven't actually derived it from the Black–Scholes PDE. It is really a tree pricing algorithm rather than a PDE algorithm.
What PDE does an American put option satisfy?
An American option does not obey the Black–Scholes PDE at times when early exercise is optimal. At these points it satisfies:
$$V = K - S$$
$$\frac{\partial V}{\partial t} + \frac{\sigma^2}{2} S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV < 0$$
At times when early exercise is not optimal, it obeys the Black–Scholes PDE and also the condition:
$$V > K - S$$
$$\frac{\partial V}{\partial t} + \frac{\sigma^2}{2} S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV = 0$$
Boundary conditions
At the boundary between the two regions, V and the delta $\frac{\partial V}{\partial S}$ are both continuous:
$$V = \max\{K - S, 0\}, \qquad \frac{\partial V}{\partial S} = -1$$
This is called a free boundary problem.
We haven't proved that these differential inequalities hold, but you can convince yourself using a no-arbitrage argument. Since they are differential inequalities, you will need to use continuous-time stochastic calculus to prove things rigorously.
Complementarity problem
The following inequalities hold everywhere:
$$V - K + S \ge 0$$
$$-\frac{\partial V}{\partial t} - \frac{\sigma^2}{2} S^2 \frac{\partial^2 V}{\partial S^2} - rS \frac{\partial V}{\partial S} + rV \ge 0$$
Moreover we must have equality in at least one condition.
The condition that x ≥ 0 and y ≥ 0 and one of x and y vanishes can be written as x ≥ 0, y ≥ 0 and xy = 0.
Differential inequalities for American options
So we have that at all times:
$$V - K + S \ge 0$$
$$-\frac{\partial V}{\partial t} - \frac{\sigma^2}{2} S^2 \frac{\partial^2 V}{\partial S^2} - rS \frac{\partial V}{\partial S} + rV \ge 0$$
and
$$(V - K + S)\left( \frac{\partial V}{\partial t} + \frac{\sigma^2}{2} S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV \right) = 0$$
We can now find discrete approximations to these inequalities using our choice of stencil and attempt to solve the associated discrete problems.
It then seems reasonable to hope that this will lead to a finite difference scheme for pricing American options with convergence properties similar to those seen for European options.
Moving to the heat equation
Define $W = e^{-rt}V$, the discounted price, and $x = -(r - \frac{\sigma^2}{2})t + \log(S)$ as usual.
Boundary conditions are exactly the same as for pricing a European put by the heat equation:
Top boundary condition: $W(t, x_{\max}) = 0$.
Bottom boundary condition: $W(t, x_{\min}) = e^{-rt}(K - S(x_{\min}))$.
Final boundary condition: $W(T, x) = E(T, x)$.
Transformed differential inequalities
The equations transform to:
$$\left( \frac{\partial W}{\partial t} + \frac{\sigma^2}{2}\frac{\partial^2 W}{\partial x^2} \right)\left( W - E(t,x) \right) = 0$$
$$-\frac{\partial W}{\partial t} - \frac{\sigma^2}{2}\frac{\partial^2 W}{\partial x^2} \ge 0$$
$$W - E(t,x) \ge 0$$
Here $E(t,x) = e^{-rt}\max\{K - S(x), 0\}$ is the discounted early exercise price.
Discretize
Let's discretize using the implicit scheme, but you could use Crank-Nicolson too:
$$\left( -\frac{W_{i+1,j} - W_{i,j}}{\delta t} - \frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2} \right)(W_{i,j} - E_{i,j}) = 0$$
$$-\frac{W_{i+1,j} - W_{i,j}}{\delta t} - \frac{\sigma^2}{2}\,\frac{W_{i,j+1} - 2W_{i,j} + W_{i,j-1}}{\delta x^2} \ge 0$$
$$W_{i,j} - E_{i,j} \ge 0$$
How on earth do you solve such a system of inequalities?
Linear complementarity problem
The linear complementarity problem is the problem of solving
$$x \cdot y = 0, \qquad x \ge 0, \qquad y \ge 0, \qquad Ax = b + y$$
for vectors x and y, given a vector b and a matrix A. We'll assume that A is symmetric and positive definite.
It is called linear because the last condition is linear.
It is called a complementarity problem because x and y are complementary vectors: for each index j, either $x_j$ or $y_j$ is zero.
Rewriting the problem
Fix a time i.
Define a vector y (indexed by j) by:
$$y_j = -W_{i+1,j} - \lambda W_{i,j+1} + (1 + 2\lambda) W_{i,j} - \lambda W_{i,j-1}$$
where $\lambda = \frac{\sigma^2 \delta t}{2\,\delta x^2}$ as previously.
Define a vector $w_i$ to be the vector of values $W_{i,j}$. There is a tridiagonal symmetric matrix A such that $y = -w_{i+1} + A w_i$.
We can rewrite our equations as follows:
Define $e_i$ to be the vector of values $E_{i,j}$.
Define $x = w_i - e_i$.
Define $b = w_{i+1} - A e_i$.
Then
$$y = A w_i - w_{i+1} = Ax - w_{i+1} + A e_i = Ax - b$$
so $y \cdot x = 0$, $x \ge 0$, $y \ge 0$, $Ax = y + b$.
Remarks
The matrix A is the same tridiagonal matrix that occurred in the implicit method for European options.
The formulae I've written explicitly only apply for j away from the boundary. As for European options, we have a 1 in the top left and the bottom right of A to account for the boundary conditions.
How to solve the linear complementarity problem
The short answer is: look up in the literature how this can be solved numerically.
We'll give a run-through of the ideas that lead to the standard numerical solution used to price American options:
Solving the equation Ax = b iteratively: the Jacobi method
The Gauss-Seidel method
Successive over-relaxation (SOR)
Solving the linear complementarity problem by SOR.
Jacobi method
The Jacobi method is a numerical method for solving the equation Ax = b.
Let us write A = D + R:
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \underbrace{\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}}_{D\ \text{diagonal part}} + \underbrace{\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}}_{R\ \text{remainder}}$$
Idea
D is easy to invert because it is diagonal.
(D + R)x = b implies Dx = b − Rx, which implies $x = D^{-1}(b - Rx)$.
Pick an initial guess $x_0$.
Define the sequence $x_n = D^{-1}(b - R x_{n-1})$.
If this converges to a limit, the limit will satisfy the equation Ax = b.
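A minimal sketch of the Jacobi iteration in MATLAB (my own; the function name and the fixed iteration count are illustrative choices, not from the course code):

function x = solveByJacobi( A, b, x0, nIterations )
    n = size(A,1);
    d = full(diag(A));               % diagonal entries of A as a vector
    R = A - spdiags(d, 0, n, n);     % off-diagonal remainder, R = A - D
    x = x0;
    for iter = 1:nIterations
        x = (b - R*x) ./ d;          % x <- D^{-1}(b - R x)
    end
end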
Recursion
Consider the sequence
$$x_1 = D^{-1}b - D^{-1}R\,x_0$$
$$x_2 = D^{-1}b - D^{-1}R\,D^{-1}b + (D^{-1}R)^2 x_0$$
$$x_3 = D^{-1}b - D^{-1}R\,D^{-1}b + (D^{-1}R)^2 D^{-1}b - (D^{-1}R)^3 x_0$$
$$\vdots$$
So long as the spectral radius of $D^{-1}R$ is less than 1, this will converge.
If the matrix is strictly diagonally dominant, i.e.
$$|a_{ii}| > \sum_{j \ne i} |a_{ij}|,$$
the sequence will converge.
Applications
For sparse matrices we can perform the multiplication by $D^{-1}R$ reasonably quickly due to the sparseness.
The convergence of contractions is rapid, so we will only need a few iterations to get a good estimate.
If we have a good guess for the initial value it will be more rapid still.
Thus for diagonally dominant sparse matrices where we have a good idea of the initial value, the Jacobi method will perform well.
Example: for appropriate choices of λ the matrix in the implicit method for European options is diagonally dominant. We have a good first guess for the price vector at time i: it is presumably close to the price vector at time i + 1.
Gauss-Seidel method
The Gauss-Seidel method is another numerical method for solving the equation Ax = b.
Let us write $A = L_* + U$:
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \underbrace{\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}}_{L_*\ \text{lower triangular part}} + \underbrace{\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}}_{U\ \text{strictly upper triangular part}}$$
Algorithm
Again, $L_*$ is easy to invert because it is lower triangular. A solution to Ax = b satisfies $x = L_*^{-1}(b - Ux)$.
Pick an initial guess $x^{(0)}$ and define $x^{(n)} = L_*^{-1}(b - U x^{(n-1)})$.
Use the fact that $L_*$ is lower triangular to write down the following relationship:
$$x_i^{(n+1)} = \frac{1}{a_{ii}}\left( b_i - \sum_{j<i} a_{ij}\, x_j^{(n+1)} - \sum_{j>i} a_{ij}\, x_j^{(n)} \right)$$
This formula contains $x^{(n+1)}$ terms on both sides, but only terms with j < i on the right. So long as we proceed by calculating in the order i = 1, 2, ..., n, this gives an explicit formula for $x_i^{(n+1)}$.
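In MATLAB the sweep might look as follows (a sketch of my own, with the function name and fixed iteration count chosen for illustration; in practice one might vectorize the inner products):

function x = solveByGaussSeidel( A, b, x0, nIterations )
    n = length(b);
    x = x0;
    for iter = 1:nIterations
        for i = 1:n
            % the sum over j<i uses already-updated entries of x,
            % the sum over j>i uses entries from the previous sweep
            s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
            x(i) = (b(i) - s)/A(i,i);
        end
    end
end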
When does this converge?
Gauss-Seidel converges:
if A is symmetric and positive definite;
if A is strictly diagonally dominant.
(It may converge under other circumstances too.)
Successive over relaxation
Write A = L + D + U where L is strictly lower triangular, D is diagonal and U is strictly upper triangular. The iteration is
$$(D + \omega L)x^{(n+1)} = \omega b - [\omega U + (\omega - 1)D]\,x^{(n)}$$
ω is a parameter called the "relaxation" factor. The method is a mash-up of the Jacobi and Gauss-Seidel methods. It converges if A is positive definite and 0 < ω < 2.
The hope is that for some ω > 1 convergence should speed up; we won't discuss how to choose a good value of ω.
General idea
If we have a recursive system $x_{n+1} = f(x_n)$, the system $x'_{n+1} = (1 - \omega)x'_n + \omega f(x'_n)$ gives another process which, if they both converge, will have the same limit.
Low values of ω slow the rate of change of $x_n$ (in the limiting case ω = 0, the sequence remains constant).
High values of ω increase the rate of change, so may speed convergence (or may cause oscillations or convergence to break down if ω is too high).
Explicit formulae for SOR
Because $(D + \omega L)$ is lower triangular we can use forward substitution to write down explicit formulae, as for Gauss-Seidel:
$$x_i^{(n+1)} = (1-\omega)\,x_i^{(n)} + \frac{\omega}{a_{ii}}\left( b_i - \sum_{j<i} a_{ij}\, x_j^{(n+1)} - \sum_{j>i} a_{ij}\, x_j^{(n)} \right)$$
Note this formula fits the general pattern given on the previous slide.
You can use this to solve the linear equations that occur when pricing European options using the implicit or Crank-Nicolson schemes.
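The pricing code later in this lecture calls a helper solveBySOR(A, b, x0, omega, nIterations). Here is a sketch of how that helper might implement the formula above (my reconstruction; the course's actual helper may differ):

function x = solveBySOR( A, b, x0, omega, nIterations )
    n = length(b);
    x = x0;
    for iter = 1:nIterations
        for i = 1:n
            % terms with j<i use updated entries, terms with j>i old ones
            s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
            x(i) = (1-omega)*x(i) + (omega/A(i,i))*(b(i) - s);
        end
    end
end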
Solving the linear complementarity problem
The linear complementarity problem is to solve
$$Ax = b + y, \qquad x \ge 0, \qquad y \ge 0, \qquad x \cdot y = 0$$
for vectors x and y.
Note that in the special case when we have a solution with y = 0, this reduces to Ax = b.
For example, when applied to pricing American options, y = 0 says that the Black–Scholes PDE is satisfied everywhere. The equations Ax = b are then just the equations that occur in pricing a European option.
Idea: perhaps if we take an iterative method for solving Ax = b but at each stage insist that x ≥ 0, we will get a solution to the linear complementarity problem.
Solving linear complementarity by successive over-relaxation
Take an initial guess $x^{(0)}$ and define $x^{(n)}$ by:
$$x_i^{(n+1)} = \max\left\{ (1-\omega)\,x_i^{(n)} + \frac{\omega}{a_{ii}}\left( b_i - \sum_{j<i} a_{ij}\, x_j^{(n+1)} - \sum_{j>i} a_{ij}\, x_j^{(n)} \right),\ 0 \right\}$$
So long as A is positive semi-definite and 0 < ω < 2, this converges. See "The Solution of a Quadratic Programming Problem Using Systematic Overrelaxation", C. Cryer, 1971.
I note that he calls it "Systematic Overrelaxation" while everyone else calls it "Successive Overrelaxation", so presumably everyone finds the terminology a little odd!
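The American option pricing code below calls a helper solveLCP(A, b, x0, omega, nIterations). A sketch of how it might look (my reconstruction following Cryer's projected SOR; the course's actual helper may differ): it is the SOR update above with each component clipped at zero.

function x = solveLCP( A, b, x0, omega, nIterations )
    n = length(b);
    x = max(x0, 0);    % start from a feasible guess, x >= 0
    for iter = 1:nIterations
        for i = 1:n
            s = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
            % SOR update projected onto the constraint x(i) >= 0
            x(i) = max( (1-omega)*x(i) + (omega/A(i,i))*(b(i) - s), 0 );
        end
    end
end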
Initialization
x0 = log( S0 );
xMin = x0 - nSds*sqrt(T)*sigma;
xMax = x0 + nSds*sqrt(T)*sigma;
dt = T/N;
dx = (xMax-xMin)/M;
iMin = 1;
iMax = N+1;
jMin = 1;
jMax = M+1;
x = (xMin:dx:xMax)';
t = (0:dt:T);
lambda = 0.5*sigma^2 * dt/(dx)^2;
Boundary conditions
% Use boundary condition to create vector currW
currW=max(exp(-r*T)*K-exp(-0.5 *sigma^2 * T + x),0);
A = createTridiagonal( [0 ; -lambda*ones(M-1,1) ; 0], ...
[1 ; (1+2*lambda)*ones(M-1,1) ; 1], ...
[0 ; -lambda*ones(M-1,1) ; 0] );
bottom = exp(-r*T)*K- exp(-0.5*sigma^2 * t + x(jMin));
top=zeros(1,N+1);
Iteration
exercised = zeros(N+1,M+1);
W = zeros(N+1, M+1);
W(iMax,:)=currW;
for i=iMax-1:-1:iMin
wIPlus1 = [ bottom(i); currW((jMin+1):(jMax-1)); top(i) ];
% e = immediate exercise value
e = max(exp(-r*t(i))*K-exp(-0.5 *sigma^2 * t(i) + x),0);
b = wIPlus1 - A*e;
omega = 1.5;
if (american)
currW= solveLCP(A, b, wIPlus1, omega, 10 ) + e;
else
currW = solveBySOR(A,wIPlus1,wIPlus1, omega, 10 );
end
W(i,:)=currW;
exercised(i,:)=currW<=(e);
end
price = currW(jMin+M/2);
Results
[Figure: surface plot of the discounted price W against x and time.]
Exercises
Use the implicit scheme to price a digital call option.
Implement the Crank-Nicolson scheme and use it to price a call option.
Plot a graph of the convergence of the Crank-Nicolson scheme against δt, keeping λ fixed.
Implement the Jacobi method and test that you can use it to solve an equation of the form Ax = b with A a positive definite, symmetric, tridiagonal matrix.
Repeat the previous exercise with the Gauss-Seidel method.
Exercises cont.
Repeat the previous exercise with Successive Over Relaxation (SOR).
What methods have we covered to price derivatives in this course? Draw a table showing which methods you can apply to each of the following types of derivative:
European call option
European digital call option
Asian option
Knockout option
American put option