
MSF 525 Class Project.
This document provides information on the class project that you can choose to carry out.

The project deadlines listed below are meant to ensure that you don’t wait until the last week to begin.

What you should submit:

Your project should consist of a report including the following:

  1. A brief introduction, summarizing the problem that you are attempting.
  2. A section on background, discussing
    1. The valuation model.
    2. Equations you’ll need for pricing.
    3. Information on the practical relevance of your work. For instance, if you are developing a model with skew, explain the significance of skew in the market.

  3. A section on calibration – how did you determine model parameters?
  4. A section including the code that you wrote, with appropriate comments (which should be embedded in the code).

  5. Results, with tables and plots.
  6. A brief conclusion.
  7. A bibliography, with references that you have used.

Excluding plots (and perhaps tables), which can greatly expand the length of any paper, a reasonable length for this type of report is 7-10 pages. Please write clearly, define all variables in your formulas, and explain the origin of all parameters that you use in your models.

Many students, in undertaking these projects in the past, have waited until just before the due date to finish coding, and have found (surprise!) that their code does not behave reasonably. Typically anything complex that you code will not work on the first try. If you would like to obtain correct results, I strongly advise you to begin the coding fairly early, so that you have plenty of time to troubleshoot your code before the due date. Do not spend lots of time writing up a very polished report before you’ve done your coding.

Keep in mind that your grade will largely depend on your demonstration of how much you have learned and understood in doing the project, rather than simply whether your implementation and calculations were entirely successful. If the deadline approaches, and you believe your model is not giving entirely correct answers, then focus on producing a thoughtfully written report that clearly lays out what you have done and learned on this project.

Deadlines:

April 23, 2017: 9 a.m. Submit a short progress report (perhaps a list of points) summarizing the work that you’ve done on the project. It should only take a few minutes to write this summary. I’m not going to assign a grade to the report.

April 30, 2017: 9 a.m. All completed projects must be submitted by 9 a.m. on April 30, 2017, via Blackboard. You can also submit code and other supporting documents.

Late Projects: Please submit your projects on time. If they are late, your grade will be adjusted (down). Projects that are less than 24 hours late will be marked down by 30%. After that, 30% is deducted for each additional day late.

Other Important Guidelines:

The project should be done independently, and not collaboratively. In particular, you should write your own code, and not borrow or share in the development of code with other students. Please do not take another (current or former) student’s code and modify it superficially (by changing variable names, for instance) to make it seem like your own. As the coding required for this project will be more extensive than that required for homework problems, it will be easy to detect attempts to borrow code and then apply superficial revisions. Students who use code from other current or former students will receive failing grades on the project.

If you use material from other sources in your project, please reference it carefully. Do not lift passages from the work of others verbatim. The penalty for plagiarism on the final project is failure of the course.

Remember, though, that this is your project (not mine or the TA’s). Therefore, we cannot help students debug their code. You should also test your code to see if it is correct. Think of special cases in which you can compute or estimate the answer, and see if your code reproduces what you expect.

I recommend coding projects in Matlab.

Sample Project:

Use the q-model to implement an interest rate model with skew that differs from normal skew. You should implement a q-version of Hull-White. Show how you would calibrate this model to a skewed volatility surface and price swaptions using this model.

This project can be broken up into several stages:

  1. Implement the q-model.
  2. Price caps/floors in the q-model.
  3. Calibrate the model to ATM cap/floor prices and illustrate how the model exhibits skew.
  4. Find the q-value that best fits both in-the-money and out-of-the-money cap/floor prices.

  5. Improve your pricing model to accommodate finer timesteps.
  6. Price swaptions in the model.

You may find that you don’t have sufficient time to finish all 6 stages (particularly stages 5 and 6). It is best to make sure that your work is correct before moving on to the next stage.

Background: The q-model is constructed by building a diffusive (e.g. lattice) model in a variable x and then defining the short-term rate to equal r(x) = (1/q)(exp(qx) − 1). You could begin with either Ho-Lee, BDT or Hull-White as your diffusive lattice model and then apply this mapping1. Applying this mapping will lead to an interest rate model that is neither normal nor lognormal, with distributional characteristics depending on the value of q. You can take this q-model, and price caps or swaptions with a variety of strikes (in-, at- and out-of-the-money), and assess how the implied volatility varies with strike. This is done by calibrating your model to at-the-money options, and then pricing caps/floors/swaptions that are not at-the-money using the model. The ‘implied volatility’ predicted by the model is determined by figuring out which implied volatility you need to plug into Black’s formula to reproduce the option price.
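The mapping above is simple to code. Here is a minimal sketch (shown in Python with NumPy for illustration; the same few lines translate directly to Matlab) — the function name and sample values are my own, not part of the assignment:

```python
import numpy as np

def q_map(x, q):
    """The q-mapping r(x) = (1/q) * (exp(q*x) - 1).
    expm1 is used for numerical stability at small q; at q = 0 the
    mapping reduces to the identity r(x) = x (the original normal model)."""
    x = np.asarray(x, dtype=float)
    if q == 0.0:
        return x
    return np.expm1(q * x) / q

x = np.linspace(-0.05, 0.10, 7)
print(q_map(x, 1e-8))   # essentially x itself: near-normal behavior
print(q_map(x, 5.0))    # convex in x: increasingly lognormal character
print(q_map(x, -5.0))   # concave in x: a steeper skew profile
```

Plotting these three curves against x is a quick way to see how the distributional character of the mapped rate changes with q.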

In this project, you should use the Hull-White lattice as your underlying diffusive model. As q approaches zero, the q-mapping is just the identity, and we are left with the original unmapped model. For large q, the exponential term in the numerator of the mapping dominates over the minus 1. In this case, if we start with a normal model (Hull-White) and apply the q-mapping for large q, the mapping becomes primarily exponential, and the mapped model should become increasingly lognormal in character – it will approach a Black-Karasinski type model.

You can also choose negative values of q. This will lead to a model with a skew profile that is even steeper than the profile consistent with the Hull-White model.

As you start to implement the q-model, you’ll have to figure out how to calibrate the drift to reproduce bond prices. A very helpful reference is John Hull’s book, in the section on “Extension to Other Models”, after he discusses the Hull-White model. (In the 6th edition, see p. 667.) Hull works out how to do the calibration for a more generic transformation function. In Hull-White, you can determine the drift from an analytic formula. In these extensions (including the q-model), you’ll have to solve for the drift using some sort of one-dimensional equation-solving technique, such as Newton-Raphson. (You could also use functions in Matlab, such as fzero.)
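The drift calibration at one time slice amounts to a one-dimensional root-finding problem. Here is a sketch (in Python for illustration; in Matlab, fzero would replace the hand-rolled bisection), with made-up Arrow-Debreu prices, x-nodes, and target bond price — none of these numbers come from the course data:

```python
import numpy as np

def q_map(x, q):
    return np.asarray(x, float) if q == 0.0 else np.expm1(q * np.asarray(x, float)) / q

def solve_drift_shift(ad_prices, x_nodes, p_target, q, dt, lo=-1.0, hi=1.0):
    """Solve for the additive shift m on one x-slice of the lattice so that
    the Arrow-Debreu-weighted one-period discount reproduces the market
    zero-coupon bond price p_target.  Plain bisection is used for
    transparency (fzero or Newton-Raphson would do the same job); it
    assumes the root is bracketed by [lo, hi]."""
    def gap(m):
        r = q_map(x_nodes + m, q)
        return np.sum(ad_prices * np.exp(-r * dt)) - p_target
    a, b, fa = lo, hi, gap(lo)
    for _ in range(200):                  # bisect until the gap is tiny
        mid = 0.5 * (a + b)
        fm = gap(mid)
        if abs(fm) < 1e-13:
            break
        if fa * fm > 0.0:
            a, fa = mid, fm
        else:
            b = mid
    return mid

# Illustrative one-slice example (all numbers are placeholders):
ad = np.array([0.25, 0.50, 0.25]) * 0.99   # Arrow-Debreu prices at time t
x  = np.array([-0.01, 0.0, 0.01])          # x-lattice nodes at time t
m  = solve_drift_shift(ad, x, p_target=0.985, q=2.0, dt=0.25)
```

Repeating this slice by slice, working forward through the lattice, pins down the drift so that the model reprices the entire discount curve.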

Here are a few other comments:

1 In your homework, you effectively applied the mapping r(x) = exp(x) to x built on a Ho-Lee lattice to build the Rendleman-Bartter model. So this process is just an extension of the same idea, with a more complex mapping function.

  1. You’ll need to set the mean reversion speed parameter for Hull-White. It is sufficient to just assume a reasonable value for this variable (0.2 or so), similar to what we’ve used in class or in exercises.
  2. I recommend sticking with Hull-White with time-independent volatility instead of using the model with time-varying volatility. The model with time-varying volatility is more complex to implement, and you might not have sufficient time to do this.
  3. Given the above point, you’ll only be able to calibrate exactly to cap/floors of one maturity at a time. You could therefore first adjust the volatility input σ to reproduce the ATM (‘at-the-money’) cap price, given a particular value of q.
  4. On Blackboard, I’ve provided implied volatilities for caps and floors as a function of strike. You’ll need to convert these to prices using Black’s formula to perform the above calibration. Note that I’ve provided older market data; the markets after the crisis experienced some changes and are not really the most representative context in which to test out a model.
  5. The definition of ATM (‘at-the-money’) for caps is that the ATM strike equals the par swap rate for a swap with maturity equal to the cap/floor maturity. For instance, a 5-year cap will have an ATM strike equal to the 5-year par swap rate.
  6. The list of cap/floor volatilities doesn’t specify whether the instrument is a cap or a floor – but this doesn’t matter, since a cap and a floor with the same strike carry the same implied volatility, via put-call parity. However, you should restrict yourself to pricing and calibrating options that are out-of-the-money, at-the-money, or perhaps slightly in-the-money. Exclude deep in-the-money options (e.g. caps with strikes of 2%).
  7. I’ve given a list of cash LIBOR rates and swap rates on the spreadsheet on Blackboard, so you should be able to reproduce the forward curve (and any appropriate interest rate) from these.
  8. You should calibrate to the ATM options, and then price out-of-the-money options, comparing your prices to the actual ones from the market data supplied on Blackboard. You can approach this as an optimization problem in which you find the value of q that minimizes the sum of squares of differences between market and model prices, given that you’ve selected the value σ to match the ATM price. This can be done in Matlab using the optimization toolkit, for instance. A less sophisticated approach (if you don’t have sufficient time to learn how to perform optimizations) would entail simply guessing values of q until you found the one that gives the best fit.
  9. The simplest objective function is a sum of squared differences in prices. One can also weight the contribution from different instruments. In practice, it is often preferable to use the sum of squared differences of implied volatilities. This can be approximated by using weights proportional to (1/vega)^2 in the sum over squared price differences. This is discussed in Lecture Notes 3. It is fine, as it is easiest, to just use the sum of squared differences of prices, but if you have sufficient time, you might explore other weighting schemes.
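Converting quoted implied volatilities to prices uses Black’s formula. A minimal sketch (in Python for illustration; the formula carries over directly to Matlab) for a single caplet and, via put-call parity, the matching floorlet — parameter values in the comments are illustrative:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def black_caplet(F, K, sigma, T, tau, disc):
    """Black's formula for a caplet: F is the forward rate, K the strike,
    sigma the Black implied volatility, T the fixing time, tau the accrual
    period, and disc the discount factor to the payment date."""
    if sigma <= 0.0 or T <= 0.0:
        return disc * tau * max(F - K, 0.0)   # degenerate case: intrinsic value
    s = sigma * math.sqrt(T)
    d1 = (math.log(F / K) + 0.5 * s * s) / s
    d2 = d1 - s
    return disc * tau * (F * norm_cdf(d1) - K * norm_cdf(d2))

def black_floorlet(F, K, sigma, T, tau, disc):
    """Floorlet via put-call parity: floorlet = caplet - disc * tau * (F - K)."""
    return black_caplet(F, K, sigma, T, tau, disc) - disc * tau * (F - K)
```

A cap price is then the sum of its caplet prices, one per accrual period, each using the appropriate forward rate and discount factor from your curve. The same routine, run in reverse with a root-finder, recovers the implied volatility your model predicts for a given option price.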

  10. An alternative to the procedure described in points 3 and 8 is to select cap/floors of varying maturities and strikes, and then simultaneously find the optimal values of (q, σ) that produce a best fit to their prices. It will be much more difficult to get satisfactory results if you proceed this way, as the numerical optimization problem is much more difficult.

  11. You may find, once you’ve completed stage 4 of the project, that your results are unsatisfactory. One reason this can happen is that the timesteps on the lattice are too coarse. This can be rectified by using finer timesteps, though implementing this is a significant amount of work. When timesteps are less than 0.25 years, the rate on each node represents a rate with shorter maturity than 3-month LIBOR. Below are a few comments on how to determine 3-month LIBOR on nodes when implementing finer timesteps.

    To do this, you could use backwards recursion as we did with Ho-Lee (when we found the 3-year bond price on each node). Let’s say you want to price a 3-year cap. You’d start at year 3 of the lattice, put in the bond payoff (all ones), and use backwards recursion to work back to year 2.75. This would give you the price of the bond maturing at T = 3 on each of the 2.75-year nodes. From this, you could compute 3-month LIBOR on each of those nodes. Then you would put all ones at the 2.75-year nodes, and use backwards recursion to work back to year 2.5. This would tell you the price of the bond maturing at T = 2.75 on each of the 2.5-year nodes, and therefore you can get 3-month LIBOR on each of these nodes. You would continue this process back to T = 0.25.

    If you implement this, you can then price caps & floors correctly on a much finer lattice, and obtain a far more accurate skew profile. One caution: when you move to finer lattices and add backwards induction, this will take much more computer time. So you might want to limit yourself to short maturities when doing this; and if you use routines like fzero or minimization routines, keep in mind that each function evaluation will require building and repricing the lattice again.

  12. If anyone wants to take this project even further, one possible extension is to compute greeks in the q-model and assess how, given a portfolio of caps/swaptions, your hedging strategy would depend on the value of q.
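The backwards-recursion scheme described above — rolling a unit payoff back one accrual period to recover bond prices, and from those 3-month LIBOR, on each node — might be sketched as follows. This is in Python for illustration (the same loop is straightforward in Matlab), and it assumes a recombining binomial lattice with equal branch probabilities purely for simplicity; a Hull-White lattice is trinomial, with its own branching probabilities, but the recursion is the same idea:

```python
import numpy as np

def libor_on_nodes(r, t_idx, steps_per_3m, dt, accrual=0.25):
    """Backward recursion for 3-month LIBOR on a fine lattice.
    r[t] holds the short rates on the time-t slice; slice t has t + 1
    nodes on this illustrative binomial tree.  A bond paying 1 at slice
    t_idx + steps_per_3m is rolled back to slice t_idx, and each node's
    bond price P is converted to the simple rate L = (1/P - 1) / accrual."""
    T = t_idx + steps_per_3m
    values = np.ones(T + 1)                    # unit payoff at maturity
    for t in range(T - 1, t_idx - 1, -1):      # roll back slice by slice
        disc = np.exp(-np.asarray(r[t]) * dt)  # one-period discounting
        values = disc * 0.5 * (values[:-1] + values[1:])
    return (1.0 / values - 1.0) / accrual      # simple 3M rate per node
```

A handy sanity check: with a flat short rate and monthly steps, every node should recover the same 3-month rate, which you can verify against the closed-form discount factor.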