The Algorithm

We present the Monte Carlo algorithm for computing option prices when the stock price and the volatility follow the stochastic processes
\begin{align}
dS &= (\phi S + \sigma SW)dt\\
dV &= (\lambda + \mu V + \xi V^\alpha Q)dt
\end{align}
where $\phi,\, \lambda,\, \mu$ and $\xi$ are constants, $V = \sigma^2$, and $W$ and $Q$ are white noise processes whose correlation $\rho$ satisfies $-1 \le \rho \le 1$. By the principle of risk-neutral valuation, $\phi$ cannot appear in the solution (since we must assume that the expected growth rate of every security is the risk-free interest rate) and can be replaced by $r$.

We know that the solution for the propagator is given by (4.57). Hence, the option price is given by

\begin{displaymath}
f(x, y, t) = \int_{-\infty}^\infty dx' \matel{x,
y}{e^{-\hat{H}\tau}}{x'}f(x', T)
\end{displaymath} (119)

where $f(x', T)$ is the payoff at maturity of the derivative and is $\max(e^{x'}-K, 0)$ for a European call option. The aim of the Monte Carlo algorithm is to calculate the propagator $\matel{x,
y}{e^{-\hat{H}\tau}}{x'}$. The integration over $f(x', T)$ can then be performed easily using elementary numerical techniques (we will see later that a simple technique such as Simpson's rule suffices, as the error due to the numerical integration is negligible compared to the Monte Carlo error in the propagator).

A Short, Quick Reminder of the Monte Carlo Method

The Monte Carlo algorithm to integrate
\begin{displaymath}
\int_A f(X) dX
\end{displaymath} (120)

where $X$ can be (and, in fact, usually is) a multi-dimensional variable and $A$ is a subset of the domain of $X$ requires us to split $f(X) = g(X) p(X)$ so that $\int_A p(X)dX = 1$ (in other words, so that $p(X)$ is a valid probability density function). The algorithm then states that an estimate for the integral is given by $\langle
g(X_i) \rangle = \frac{1}{N} \sum_{i=1}^N g(X_i)$ where the configurations $X_i$ are generated randomly according to the probability density function $p(X)$.
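As a concrete illustration (not taken from the thesis), the split $f(X) = g(X)p(X)$ can be sketched in a few lines of Python. Here the known integral $\int_0^\infty \cos(x)\, e^{-x}\, dx = 1/2$ is estimated by drawing configurations from the density $p(x) = e^{-x}$ (an Exp(1) variable) and averaging $g(x) = \cos(x)$:

```python
import math
import random

def mc_integrate(g, sampler, n):
    """Monte Carlo estimate of the integral of g(X) p(X) dX, where
    `sampler` draws X according to the probability density p."""
    return sum(g(sampler()) for _ in range(n)) / n

# Split cos(x) e^{-x} as g(x) = cos(x) times p(x) = e^{-x},
# the density of an Exp(1) random variable.
random.seed(42)
estimate = mc_integrate(math.cos, lambda: random.expovariate(1.0), 200_000)
```

With $2 \times 10^5$ configurations the estimate is within about $10^{-2}$ of the exact value $1/2$, consistent with the $N^{-1/2}$ error discussed below.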

The error of the Monte Carlo method goes as $\frac{1}{\sqrt{N}}$ as a consequence of the central limit theorem, as long as there is no correlation between the configurations produced. (Though in general this condition is difficult to satisfy, we shall see later that we can easily satisfy it in this case.) While this error may not look very impressive, it is often the best that can be managed when $X$ has a large number of dimensions. The present problem has a very large number of dimensions (in fact, the exact problem has an infinite number), and the Monte Carlo method is the most practical one available for it.
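The $\frac{1}{\sqrt{N}}$ behaviour is easy to verify empirically. The sketch below (purely illustrative, not part of the thesis's programs) measures the standard error of a toy estimator of $E[U^2] = 1/3$ for $U \sim \text{Uniform}(0,1)$ at two sample sizes; quadrupling $N$ should roughly halve the error:

```python
import random
import statistics

def mc_mean(n, rng):
    # Toy Monte Carlo estimator of E[U^2] = 1/3 for U ~ Uniform(0, 1).
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(0)
# Standard error of the estimator, measured over 400 independent runs each.
err_n = statistics.stdev(mc_mean(100, rng) for _ in range(400))
err_4n = statistics.stdev(mc_mean(400, rng) for _ in range(400))
ratio = err_n / err_4n  # should be close to sqrt(4) = 2
```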

Our Monte Carlo Method for this Problem

For this problem, we choose the following probability density function
\begin{displaymath}
p(Y) = \left( \prod_{i=1}^{N-1}
\frac{e^{y_i(1-\alpha)}}{\sqrt{2\pi\epsilon} \xi} \right) e^{S_0}
\end{displaymath} (121)

where $Y$ is the set of variables $y_i$ (and is hence $N$-dimensional) and $S_0$ is given in (4.59). Hence, we see that
\begin{displaymath}
g(Y) = \frac{e^{S_1}}{\sqrt{2\pi\epsilon (1-\rho^2) \sum_{i=1}^N
e^{y_i}}}
\end{displaymath} (122)

where $S_1$ is given in (4.50) since the integral we are performing is
\begin{displaymath}
\int DY \left( \prod_{i=1}^{N-1} e^{y_i(1-\alpha)} \right)
\frac{e^{S_0+S_1}}{\sqrt{2\pi \epsilon (1-\rho^2) \sum_{i=1}^N
e^{y_i}}}
\end{displaymath} (123)

where \begin{equation*}
DY = dy_0 \prod_{i=1}^{N-1} \frac{dy_i}{\sqrt{2\pi\epsilon}\xi}
\end{equation*}

We now have to produce configurations $Y$ with the probability distribution $p(Y)$. While $p(Y)$ looks rather complicated, it has a simple interpretation. It is the probability distribution for a discretised random walk performed by $y$. To see this, let us first use Ito's lemma to find the process followed by $y$. We find

\begin{displaymath}
dy = \left( \lambda e^{-y} + \mu - \frac{\xi^2 e^{2y(\alpha-1)}}{2}
\right)dt + \xi e^{y(\alpha-1)} Q dt
\end{displaymath} (124)

We can now discretise the process using Euler's method to obtain
\begin{displaymath}
\delta y_i = \left( \lambda e^{-y_i} + \mu - \frac{\xi^2
e^{2y_i(\alpha-1)}}{2}\right) \epsilon + \xi e^{y_i(\alpha-1)} Z
\sqrt{\epsilon}
\end{displaymath} (125)

where $\delta y_i = y_{i} - y_{i-1}$, $\epsilon$ is the time step and $Z$ is a standard normal variable. Since we are using $\tau = T-t$ as the time variable, the time step is actually $-\epsilon$. Hence, $\delta y_i$ is a normal random variable with mean $\left(-\lambda e^{-y_i}
- \mu + \xi^2e^{2y_i(\alpha-1)}/2\right)\epsilon$ and variance $\xi^2e^{2y_i(\alpha-1)}\epsilon$. Hence, its probability density function is given by
\begin{displaymath}
f_i = \frac{e^{y_i(1-\alpha)}}{\sqrt{2\pi\epsilon}\xi} \exp\left(
-\frac{\epsilon}{2\xi^2} \yexpri^2 \right)
\end{displaymath} (126)

Hence, the joint probability density function for the discretised process is given by
\begin{displaymath}
f = \prod_{i=1}^{N-1} f_i = \left( \prod_{i=1}^{N-1}
\frac{e^{y_i(1-\alpha)}}{\sqrt{2\pi\epsilon}\xi} \right) e^{S_0}
\end{displaymath} (127)

which is the same as (5.5).
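The backward Euler path generation described here can be sketched as follows (a minimal sketch, assuming the discretisation (5.8) with the drift reversed because the time step is $-\epsilon$; the parameter values used in the example are the illustrative ones quoted later in the text):

```python
import math
import random

def generate_path(y_start, n_steps, eps, lam, mu, xi, alpha, rng):
    """Generate a volatility path y_N, ..., y_0 backwards in time with
    Euler steps of size eps.  Since tau = T - t runs backwards, each
    increment delta y_i has mean -drift*eps and standard deviation
    xi * exp(y*(alpha-1)) * sqrt(eps)."""
    path = [y_start]
    y = y_start
    for _ in range(n_steps):
        drift = lam * math.exp(-y) + mu \
            - 0.5 * xi**2 * math.exp(2 * y * (alpha - 1))
        vol = xi * math.exp(y * (alpha - 1))
        y = y - drift * eps + vol * math.sqrt(eps) * rng.gauss(0.0, 1.0)
        path.append(y)
    return path  # path[0] = y_N (initial ln V), path[-1] = y_0

rng = random.Random(1)
# 128 time steps over a 90-day horizon, starting from ln V_0 = ln 0.0625.
path = generate_path(math.log(0.0625), 128, (90 / 365) / 128,
                     0.0, 0.0, 0.1, 0.5, rng)
```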

The above result can also be derived by considering (4.6) as Brownian motion on a one-dimensional Riemannian manifold with metric tensor $\frac{e^{2y(\alpha-1)}}{\xi^2}$.

Hence, in this simulation, we will use Euler's method to find the volatility paths since these paths are generated with the requisite probability distribution.

Hence, the algorithm to find a Monte Carlo estimate of the propagator $p = \matel{x, y}{e^{-\hat{H}\tau}}{x'}$ is as follows

  1. p := 0 (Initialization)
  2. For i := 1 to N
  3. Generate a path $Y$ for $y$ using (5.8)
  4. $p := p+g(Y)/N$ (where $g(Y)$ is defined in (5.6))
  5. End For
(This algorithm is an adaptation from Baaquie and Kwek[29], which treated the special case $\alpha = 1$.) The paths must be generated backwards, starting from $y_N$ (the initial value of $\ln V$) and ending at $y_0$. Since the equations are time symmetric (after, of course, reversing the drift terms), this presents no problem. This would have to be repeated for every $x'$ that we wish to integrate over. However, during implementation it was found more advantageous to generate the paths only once, storing the important terms $t_1 = \sum_{i=1}^N e^{y_i}$ and $t_2 =
\sum_{i=1}^N e^{y_i(3/2-\alpha)} \yexpri$, which are sufficient to determine $S_1$ once $x'$ is given ($S_1 = -\frac{1}{2\epsilon
(1-\rho^2) t_1}\left( x-x' + (r-q)t -
\epsilon \left( \frac{t_1}{2}+t_2 \right) \right)^2$). That $S_1$ can be computed from this limited information is fortunate, as storing all the paths explicitly would require a very large amount of memory (10MB for 10,000 configurations, as compared to 160kB when storing only the essential combinations of terms). The alternative of generating fresh paths for each value of $x'$ (as in the naïve algorithm above) is unacceptable due to the very long run time it requires.

The propagator must finally be integrated over the final wave function to obtain the option price. The accuracy of this numerical quadrature depends on the spacing $h$ between successive values of $x'$. This means that we have to find the propagator for several values of $x'$ to obtain reasonable accuracy which is computationally very expensive. We found it better to find the propagator using the above Monte Carlo method for only about 100 equally spaced values of $x'$ over the range of the quadrature and using cubic splines to interpolate it at the other quadrature points. This produces excellent results (as we shall see later) as the propagator seems to be an extremely smooth function of $x'$.

Hence, the revised algorithm is of the form

  1. For i := 1 to N
  2. Generate a path $Y$ for $y$ using (5.8)
  3. Store $t_1$ and $t_2$ for the path
  4. End For
  5. For $x'$ := beginning of range to end of range
  6. Find the Monte Carlo estimate for the propagator at large intervals of $x'$ using $t_1$ and $t_2$ from the paths.
  7. End For
  8. For $x'$ := beginning of range to end of range
  9. Find the propagator at small intervals of $x'$ using cubic spline interpolation over the values of the propagator found previously and integrate over the final wave function (payoff)
  10. End For
  11. Return option price
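The coarse-grid, spline, and quadrature stages of the revised algorithm can be sketched as follows. This is only a structural illustration: a hypothetical Gaussian stands in for the Monte Carlo propagator, and the grid sizes (about 100 coarse points, a much finer quadrature grid) mirror the choices described in the text:

```python
import numpy as np
from scipy.integrate import simpson
from scipy.interpolate import CubicSpline

K, x = 100.0, np.log(100.0)  # strike and initial log stock price

# Coarse grid of ~100 x' values where the (stand-in) propagator is
# evaluated; in the real algorithm each value is a Monte Carlo estimate.
coarse = np.linspace(x - 2.0, x + 2.0, 101)
prop_coarse = np.exp(-(coarse - x) ** 2 / 0.02) / np.sqrt(0.02 * np.pi)

# Cubic spline interpolation onto the fine quadrature grid.
fine = np.linspace(coarse[0], coarse[-1], 2001)
prop_fine = CubicSpline(coarse, prop_coarse)(fine)

# Integrate the propagator against the European call payoff
# max(e^{x'} - K, 0) using Simpson's rule.
payoff = np.maximum(np.exp(fine) - K, 0.0)
price = simpson(prop_fine * payoff, x=fine)
```

Because the propagator is smooth, the spline reproduces it accurately between coarse points, and the Simpson quadrature error is negligible compared to the Monte Carlo error, as the text argues.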
The numerical quadrature was performed using Simpson's rule. While Simpson's rule is a relatively low-order method, it was deemed sufficient as the quadrature error is negligible compared to the Monte Carlo error in the propagator. For the special case of the European call option, it is possible to integrate analytically over the splines as the payoff is piecewise linear. However, this was not done because, firstly, the quadrature error was negligible compared to the Monte Carlo error and, secondly (and more importantly), we wanted the program to remain general enough to price any derivative on the underlying security with a single payoff date.

For the special case of payoffs which can be expressed as piecewise polynomial functions, the integration can be carried out analytically and we can then simply generate the paths and take the expectation over the generalised Black-Scholes prices. While this method has the advantages of being elegant, easy to program and completely free of integration error, it has the disadvantage that the integration must be carried out before the program can be written, making the program specific to the given payoff. We implemented this approach for call options (the code is given in appendix C) but found no significant improvement in run time (it runs about 1% faster).

The astute reader will note that there is one problem with the above algorithm: it is unstable as $\epsilon \tendsto 0$, since in this case $\frac{\delta y}{\epsilon} =
O\left(\sqrt{\frac{1}{\epsilon}}\right)$. In our simulations this was not a problem as the step size was always large enough to avoid it. If it does become a problem in a simulation, it can easily be handled, as the propagator can always be written as a functional of the paths as given in (4.64). We can then generate paths for $V$ or $y$ using (4.6) and store the functionals (we now need five terms: $t_1 =
\omega$, which is the same as above, $t_2 = \theta = \sum_{i=1}^N
e^{y_i(1/2-\alpha)}$, $t_3 = \eta = \sum_{i=1}^N
e^{y_i(3/2-\alpha)}$, $t_4 = \zeta = \sum_{i=1}^N e^{y_i(\alpha -
1/2)}$ and $t_5 = e^{y_0(3/2-\alpha)}$). We can then proceed to find the propagator as above (of course, now writing $S_1$ as a function of $t_i,\, i = 1,\dots, 5$) and integrate over the final payoff. For the special case of the European call option, this method can be made slightly more efficient by averaging over the generalised Black-Scholes prices (4.65). We did not use this method since computing the propagator from five terms takes longer and the memory requirements are higher.

A Comparison of Our Algorithm with Standard Monte Carlo Techniques

The normal way of doing Monte Carlo simulations to find option prices with stochastic volatility involves directly simulating the process
\begin{align}
dS &= rS dt + \sqrt{V} S W dt\\
dV &= (\lambda + \mu V)dt + \xi V^\alpha Q dt
\end{align}
by discretising it using the Euler method to
\begin{align}
\Delta V_i &= (\lambda + \mu V_i)\Delta t + \xi V_i^\alpha \epsilon_1
\sqrt{\Delta t}\\
\Delta S_i &= rS_i \Delta t + \sqrt{V_i} S_i \epsilon_2 \sqrt{\Delta
t}
\end{align}
where the standard normally distributed random variables $\epsilon_1$ and $\epsilon_2$ with correlation $\rho$ are generated by first generating two uncorrelated standard normally distributed random variables $\delta_1$ and $\delta_2$ and using $\epsilon_1 = \delta_1$ and $\epsilon_2 = \rho \delta_1 + \sqrt{1-\rho^2} \delta_2$. The final values of $S$ are stored and the option price with strike price $K$ is estimated by considering $E[\max(S-K, 0)]$. This is the algorithm used along with the control variate method in Johnson and Shanno[31] (we henceforth call the above algorithm the standard algorithm).
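The standard algorithm can be sketched directly from the discretised equations above (a minimal sketch; the `max(V, 0)` guard against negative discretised variance is a practical assumption of ours, not part of the scheme described in the text):

```python
import math
import random

def standard_mc_price(S0, V0, r, lam, mu, xi, alpha, rho,
                      K, T, n_steps, n_paths, rng):
    """The 'standard' algorithm: Euler-simulate (S, V) jointly with
    correlated normals eps1, eps2, then average the discounted
    European call payoff max(S_T - K, 0)."""
    dt = T / n_steps
    sq = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        S, V = S0, V0
        for _ in range(n_steps):
            # Two uncorrelated standard normals -> correlation rho.
            d1, d2 = rng.gauss(0, 1), rng.gauss(0, 1)
            e1 = d1
            e2 = rho * d1 + math.sqrt(1 - rho**2) * d2
            Vp = max(V, 0.0)  # guard against a negative discretised V
            S += r * S * dt + math.sqrt(Vp) * S * e2 * sq
            V += (lam + mu * V) * dt + xi * Vp**alpha * e1 * sq
        total += max(S - K, 0.0)
    return math.exp(-r * T) * total / n_paths

# At-the-money 90-day call with the illustrative parameters from the text.
rng = random.Random(7)
price = standard_mc_price(100.0, 0.0625, 0.0, 0.0, 0.0, 0.1, 0.5, 0.5,
                          100.0, 90 / 365, 32, 2000, rng)
```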

Before comparing the efficiency of our algorithm with the standard one, we look at the possible sources of error and how the error from each source scales with run time for both. The major source of error for both algorithms is the Monte Carlo error, which goes as $N^{-1/2}$ and hence inversely as the square root of the run time. Another source of error common to both algorithms is the discretization of the process for $y$ or $V$. The error in $y$ is of the order of $h^{1/2}$ where $h$ is the time step used. However, the effect of this error on the option price is virtually impossible to estimate, and we verify empirically that the number of time steps is adequate by comparing two simulations with different numbers of time steps. The standard algorithm has a further error due to the discretization of the $S$ process, again of order $h^{1/2}$. The effect of this error on the option price is of the same order, as the payoff of the option is piecewise linear in $S$; hence this error also goes inversely as the square root of the run time.

Our algorithm has additional errors due to the finiteness of the integration over $x$, the interpolation, and the quadrature. The error due to the finiteness of the quadrature domain can be made completely negligible with minimal effort as the propagator goes as $e^{-d^2}$ where $d = x-x'$; hence, for large enough $d$, this error goes as $e^{-t^2}$ where $t$ is the run time, which is truly negligible. The interpolation error is difficult to estimate, but we show empirically that it is very small for interpolation over 100 points since the propagator is very smooth. The quadrature error goes as $h^4$ (Simpson's rule); since most of the computer time is spent on the Monte Carlo simulation, we can make this error very small with only a small increase in run time.
Hence, one of the main advantages of our algorithm is that it has a significantly lower error for the same number of configurations, since it has no error due to the $S$ simulation: these degrees of freedom have been integrated out.

When generating a single set of option prices with the same error tolerance, our algorithm is about 30 times faster. However, our algorithm has an important further advantage in that $t_1$ and $t_2$ are independent of $\rho$. Hence, when we generate sets of data with all parameters except $\rho$ fixed, we only need to recompute the propagator from the stored terms and integrate over the final payoff. This effectively results in an increase in efficiency by a factor of six to seven when we calculate the prices for 10 different values of $\rho$. Hence, when generating data for several values of $\rho$, our algorithm is about two hundred times faster.

For 50,000 configurations and 128 time steps, the time taken by our program on a Silicon Graphics $\text{Indigo}^2$ with a MIPS R4400 processor running at 250MHz is 96.8s, and the standard error for an at-the-money 90-day option when $S_0 = 100$, $V_0 = 0.0625$, $\alpha =
0.5$, $\xi = 0.1$ and $\rho = 0.5$ is \$0.006. For a similar simulation using exactly the same parameters and initial values, the standard algorithm takes 91.0s and has a standard error of \$0.038 for the at-the-money option. Thus, for a similar run time, the standard error using our algorithm is about a factor of 6 smaller. Hence, to achieve similar accuracy, the standard algorithm takes about 36 times longer. (The program involving the generalised Black-Scholes prices runs in 95s for the same values of the parameters.)

The control variate method employed in Johnson and Shanno[31] reduces the variance of the standard algorithm by a factor of about 20, making it approximately that many times faster. In that case, our algorithm is only slightly better when calculating one set of option prices, but is still significantly faster when simulating several sets for different $\rho$.

Checks on the Accuracy of the Program

It is well known that almost all non-trivial programs have bugs when first developed. Our program was no exception, and a fair amount of time was spent rectifying programming and other mistakes. We carried out several tests, documented here, which demonstrate that the programs as they stand are accurate. All the sources of error were considered and checked to ensure that they were under control.

Figure 5.1: Test of the accuracy of our program for zero correlation. The graph on the left is the comparison of the results produced using our algorithm and the ``straightforward'' solution using exactly the same paths.
\begin{figure}\begin{tabular}{cc}
\epsfig{file = ../test1.eps, width = 7.5cm}&
\epsfig{file = ../test2.eps, width = 7.5cm}\\
\end{tabular}\end{figure}

One straightforward test that we applied was to check that our program gave the same results for zero correlation as the straightforward solution presented in chapter 5. Since the volatility paths must also be generated for the straightforward solution, we used the same random number seed, time steps and number of configurations. This provides a rigorous test that the intermediate errors in our program (due to the finiteness of the integration domain and the numerical quadrature) are well under control. We present two of these tests, one with the same random number seed and one without, in figure 5.1. We see that most of the error is due to the Monte Carlo simulation rather than the integration over the final payoff. Considering that the Monte Carlo error is significantly higher for non-zero $\rho$, this shows that the interpolation and quadrature errors are very small.

For the non-zero correlation case, we have also compared our simulation results with those in Heston[20] and Johnson and Shanno[31] and have always found agreement between our results and theirs within two standard errors (the 95% confidence interval). We have also compared the results of our program with the results from the ``standard algorithm'' presented in the previous chapter. Hence, we conclude that our algorithm is essentially correct.


Marakani Srikant 2000-08-15