Friday 21 December 2018

Forward Initial Margin and multiple layers of AD

In the previous blog, I presented the application of AD to the question of Initial Margin (or Capital) attribution between desks in risk-weight based measures. In this installment, I incorporate this feature into a Monte Carlo forward IM computation mechanism. The Monte Carlo forward IM is one of the approaches used to compute the Margin Value Adjustment (MVA). The full MVA also requires the introduction of the cost of funding the IM and of discounting; the funding will be the focus of the next installment.

The steps to obtain the forward IM in a Monte Carlo approach for interest rates in risk-weight based measures are the following:
  1. Calibrate a multi-curve framework and a dynamic model
  2. Evolve the curves to a sample of future dates using random scenarios
  3. For each date and scenario, compute the sensitivities (market quote deltas or bucketed PV01) of the portfolio
  4. Compute the IM for each counterparty based on the sensitivities and apply the sub-portfolio attribution (see the previous blog)
The next steps, to obtain the MVA, will be dealt with in the next blog. A code sketch of the loop corresponding to steps 2 to 4 is given below.
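
As a rough illustration, here is a minimal sketch of that loop. All names are illustrative assumptions; the curve evolution, the sensitivity computation and the IM formula are passed in as functions and are not the actual code behind the figures below.

```java
import java.util.function.BiFunction;
import java.util.function.ToDoubleFunction;

/**
 * Minimal sketch of the forward IM Monte Carlo loop (steps 2 to 4 above).
 * Illustrative only; not the implementation used for the figures below.
 */
public final class ForwardImLoop {

  /**
   * @param scenarios     model factors per path and forward date: [path][date][factor] (step 2)
   * @param sensitivities (date index, factors) -> market quote deltas of the aged portfolio (step 3)
   * @param initialMargin market quote deltas -> IM of the risk-weight based methodology (step 4)
   * @return the forward IM per path and forward date
   */
  public static double[][] forwardIm(
      double[][][] scenarios,
      BiFunction<Integer, double[], double[]> sensitivities,
      ToDoubleFunction<double[]> initialMargin) {

    int nbPaths = scenarios.length;
    int nbDates = scenarios[0].length;
    double[][] im = new double[nbPaths][nbDates];
    for (int p = 0; p < nbPaths; p++) {
      for (int d = 0; d < nbDates; d++) {
        double[] marketQuoteDeltas = sensitivities.apply(d, scenarios[p][d]);
        im[p][d] = initialMargin.applyAsDouble(marketQuoteDeltas);
      }
    }
    return im;
  }
}
```

Most of the work is hidden in the sensitivities function: pricing the aged trades on the evolved curves and transforming parameter sensitivities into market quote sensitivities, as discussed in the sensitivities and performance sections.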

Calibration


The calibration of a multi-curve framework from market quotes is a standard procedure; I refer to Chapter 5 of my book on the multi-curve framework for the details. Note that Algorithmic Differentiation (AD) is already important at this stage. The calibration is often done using a root-finding algorithm of the Newton type, which requires the Jacobian of the function mapping the curve parameters to the market quotes; this is computed efficiently with AD. A multi-curve dynamic model is also required and similarly needs to be calibrated to the market.
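
To make the role of AD at this stage concrete, here is a minimal sketch of a Newton-type calibration where the Jacobian of the model quotes is supplied externally, typically by AD. The names (modelQuotes, jacobianAd) and the naive dense solver are illustrative assumptions, not the calibration code actually used here.

```java
import java.util.function.Function;

/** Sketch of Newton-type curve calibration: find parameters p with modelQuotes(p) = marketQuotes. */
public final class NewtonCalibration {

  public static double[] calibrate(
      Function<double[], double[]> modelQuotes,   // quotes implied by the curve parameters p
      Function<double[], double[][]> jacobianAd,  // d(modelQuotes)/dp, typically computed by AD
      double[] marketQuotes,
      double[] pStart,
      double tolerance,
      int maxIterations) {

    double[] p = pStart.clone();
    for (int it = 0; it < maxIterations; it++) {
      double[] residual = modelQuotes.apply(p);
      double norm = 0.0;
      for (int i = 0; i < residual.length; i++) {
        residual[i] -= marketQuotes[i];
        norm = Math.max(norm, Math.abs(residual[i]));
      }
      if (norm < tolerance) {
        return p;
      }
      double[] step = solve(jacobianAd.apply(p), residual); // Newton step: J * step = residual
      for (int i = 0; i < p.length; i++) {
        p[i] -= step[i];
      }
    }
    throw new IllegalStateException("calibration did not converge");
  }

  // Naive Gaussian elimination, enough for a well-conditioned calibration Jacobian;
  // a library decomposition would be used in practice.
  private static double[] solve(double[][] a, double[] b) {
    int n = b.length;
    double[][] m = new double[n][];
    double[] x = b.clone();
    for (int i = 0; i < n; i++) {
      m[i] = a[i].clone();
    }
    for (int k = 0; k < n; k++) {
      for (int i = k + 1; i < n; i++) {
        double factor = m[i][k] / m[k][k];
        for (int j = k; j < n; j++) {
          m[i][j] -= factor * m[k][j];
        }
        x[i] -= factor * x[k];
      }
    }
    for (int i = n - 1; i >= 0; i--) {
      for (int j = i + 1; j < n; j++) {
        x[i] -= m[i][j] * x[j];
      }
      x[i] /= m[i][i];
    }
    return x;
  }
}
```

The AD part is hidden in the jacobianAd call; the point made in the performance section is that this Jacobian is much cheaper to obtain with AD than with finite differences.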

For this blog, I'm using a hybrid multi-curve model, as described in the recent working paper Hybrid Model: A Dynamic Multi-Curve Framework. This is a relatively simple model that can be calibrated to the term structure of volatilities and includes a stochastic basis between LIBOR rates and OIS rates. This feature will be important when discussing the cost of funding in the next installment.

Evolution


To evolve the curves, we use very standard techniques. The model describes the curves (OIS discount factors and LIBOR processes) at a future date in an explicit way based on Gaussian distributions, so it is easy to obtain the values of the discount factors and LIBOR processes at those forward dates.
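
To fix ideas, here is a sketch of that evolution step for the OIS discount factors, using the textbook one-factor Gaussian (Hull-White) reconstruction formula rather than the two-factor hybrid model of the working paper. Under the t-forward measure of the simulation date t, the evolved discount factor is P(t,T) = P(0,T)/P(0,t) * exp(-B(t,T) x - 0.5 B(t,T)^2 phi(t)), with x Gaussian of mean 0 and variance phi(t). The class and method names are illustrative assumptions.

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

/**
 * Sketch of the curve evolution under a one-factor Gaussian (Hull-White) model.
 * Illustrative only; the hybrid model of the working paper has two factors and
 * a stochastic LIBOR/OIS basis, but the mechanics are the same.
 */
public final class GaussianCurveEvolution {

  private final double meanReversion;                     // a
  private final double volatility;                        // sigma
  private final DoubleUnaryOperator discountFactorToday;  // T -> P(0,T), from the calibrated curve

  public GaussianCurveEvolution(double meanReversion, double volatility,
      DoubleUnaryOperator discountFactorToday) {
    this.meanReversion = meanReversion;
    this.volatility = volatility;
    this.discountFactorToday = discountFactorToday;
  }

  /**
   * Samples the Gaussian factor marginal at time t. For a full path consistent across
   * forward dates, the factor would be sampled sequentially from its transition
   * distribution instead; only the marginal at a single date is shown here.
   */
  public double sampleFactor(double t, Random random) {
    return Math.sqrt(variance(t)) * random.nextGaussian();
  }

  /** Evolved discount factor P(t,T) at time t for a given factor value x. */
  public double discountFactor(double t, double T, double x) {
    double b = (1.0 - Math.exp(-meanReversion * (T - t))) / meanReversion; // B(t,T)
    return discountFactorToday.applyAsDouble(T) / discountFactorToday.applyAsDouble(t)
        * Math.exp(-b * x - 0.5 * b * b * variance(t));
  }

  private double variance(double t) { // phi(t)
    return volatility * volatility
        * (1.0 - Math.exp(-2.0 * meanReversion * t)) / (2.0 * meanReversion);
  }
}
```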

Sensitivities


The next step, obtaining the sensitivities of the portfolio with respect to the market quotes in the forward scenarios, is probably less standard in derivative pricing. Most of derivative valuation is based on values and cash flows, not on risk measures. The technical requirements are not very different: we have model-evolved curves and we want to compute a result that depends on those curves. But there is an extra catch: what we need for the risk-weight based measure is the sensitivity with respect to the market quotes selected by the methodology, not with respect to arbitrary model parameters, and those market quotes are not provided directly by the model. Fortunately, even if this is not something that we do explicitly in many places, it is something we do implicitly, and with a variation of AD techniques it can be implemented efficiently. The required results can be obtained by a mixture of Chapters 5 and 6 ("Derivatives to non-inputs and non-derivatives to inputs" and "Calibration") of my Algorithmic Differentiation in Finance Explained book.
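
In symbols, and as a loose restatement of those chapters in my own notation: if p denotes the curve parameters, q the market quotes selected by the methodology and f the function giving the quotes of the calibration instruments from the parameters (so that calibration solves f(p) = q), then the market quote sensitivities of a value V follow from the implicit function theorem:

```latex
% V(p): value on the evolved curves; f(p) = q: quotes of the calibration instruments
\frac{\partial V}{\partial q}
   = \frac{\partial V}{\partial p}\,\Big(\frac{\partial f}{\partial p}\Big)^{-1}
\qquad\Longleftrightarrow\qquad
\Big(\frac{\partial f}{\partial p}\Big)^{\!\top} \nabla_{q} V = \nabla_{p} V .
```

The matrix df/dp is the market quote Jacobian discussed in the performance section below; it depends only on the curves, so it can be computed once per date and scenario and reused for all trades.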

Attribution


Once the sensitivities are computed for each trade, at each date and for each scenario, the measure and its attribution by sub-portfolio are obtained by simply applying the techniques described in the previous blog.
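
As a reminder of the mechanics, here is a minimal sketch of the Euler attribution for a generic risk-weight based measure of the form IM(s) = sqrt(s' C s), with C built from risk weights and correlations. This is a stand-in for the measure of the previous blog, not its exact formula; the attribution logic, each sub-portfolio's sensitivities multiplied by the gradient of the measure at the total portfolio, is the point being illustrated.

```java
/** Sketch of the Euler attribution of a risk-weight based IM between sub-portfolios. */
public final class EulerImAttribution {

  /**
   * @param subPortfolioDeltas market quote deltas per sub-portfolio: [subPortfolio][riskFactor]
   * @param covariance         C, correlations scaled by risk weights: [riskFactor][riskFactor]
   * @return the attribution per sub-portfolio; the values sum to the total IM
   */
  public static double[] attribute(double[][] subPortfolioDeltas, double[][] covariance) {
    int nbSub = subPortfolioDeltas.length;
    int nbFactors = covariance.length;
    // Total sensitivities of the counterparty portfolio.
    double[] total = new double[nbFactors];
    for (double[] deltas : subPortfolioDeltas) {
      for (int k = 0; k < nbFactors; k++) {
        total[k] += deltas[k];
      }
    }
    // C * total and IM = sqrt(total' C total).
    double[] cTotal = new double[nbFactors];
    double im2 = 0.0;
    for (int k = 0; k < nbFactors; k++) {
      for (int l = 0; l < nbFactors; l++) {
        cTotal[k] += covariance[k][l] * total[l];
      }
      im2 += total[k] * cTotal[k];
    }
    double im = Math.sqrt(im2);
    // Euler attribution: s_i' * grad IM, with grad IM = C * total / IM.
    double[] attribution = new double[nbSub];
    for (int i = 0; i < nbSub; i++) {
      for (int k = 0; k < nbFactors; k++) {
        attribution[i] += subPortfolioDeltas[i][k] * cTotal[k] / im;
      }
    }
    return attribution;
  }
}
```

Because the measure is homogeneous of degree one in the sensitivities, the attributions add up exactly to the total IM, while offsets between sub-portfolios can produce negative contributions, as visible in the figures below.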

Figures


A picture is worth a thousand words. So let's put the above ideas in pictures.

First we take only one Monte Carlo scenario and look at the attribution. I have selected a small portfolio with one counterparty containing 30 swaps split between 6 sub-portfolios. The attribution is done on the different sub-portfolios.

The total IM is represented in red. The attribution is done using the Euler method described in the previous blog. With this attribution method, the offsets between positions are taken into account; this explains why some desks (Desk 2 and Desk 6) have negative attributions.


Figure 1: Forward IM attribution between desks.


What we can also see is that the relation between the different attributions changes through time. Today, Desk 1 is the largest contributor, while through time Desk 2 becomes the largest and is even the only meaningful contributor after 8 years. This emphasises that an MVA attribution based only on today's spot IM attribution would not provide a fair representation of the contributions; the attribution along the path is really required.

Once we have done the attribution on one path, we can look at how the forward IM behaves along the different paths. In this case, we have kept the IM methodology unchanged through the life of the portfolios. In practice, the model parameters are reviewed on a regular basis (at least annually in the case of SIMM), so we should introduce changes of methodology along the paths. This is not done here and may be the subject of another blog at a later stage.

An example with 7 paths is displayed in the graph below. We use only a small number of paths to avoid overloading the picture. The performance analysis will be done with more paths.


Figure 2: Forward IM along different paths

The least we can say is that the graph is underwhelming. This can be explained easily: our portfolio contains only vanilla swaps, the present values of which are almost linear in the underlying market quotes. As the IM methodology is based on sensitivities (first-order derivatives), the IM numbers do not change significantly from one path to another. This does not mean that multiple paths are unnecessary for MVA, as we will discuss in the next blog.

Obviously a financial institution will have more than one counterparty. The next figure reproduces the forward computation with three different counterparties.


Figure 3: Forward IM for different counterparties


Performance


What is the performance of such an implementation and where are the bottlenecks?

We ran the above approach on a portfolio of 90 swaps split between 3 counterparties and 6 sub-portfolios. The horizon is 11 years with semi-annual dates and 101 paths. The total computation time (1) was 18 s. The split is:
  • Calibration in 340 ms.
  • Loaded trades in 88 ms.
  • Path random variables in 11 ms.
  • Paths fixings in 348 ms.
  • IMs in 17430 ms.

The first line is the original calibration of the curves from market data stored in CSV files. The second line is the loading of the portfolio from a CSV file. The model is a two-factor model based on Gaussian distributions; generating the underlying random variables took 11 ms. As the trades we want to value age through the different dates, for each path we need to generate a full time series of fixings consistent with the model used, not only on the path dates but on all intermediary dates, as a swap can have a fixing at any date. As we have OIS in the portfolio, in practice every single date will be required. That is also relatively quick (348 ms).

As expected, the bulk of the time is spent computing the sensitivities and combining them into the IM. One of the time-consuming tasks is computing the market quote Jacobian matrices required to obtain the sensitivities to market quotes, even if the model does not provide the market quotes directly. In our example we use two curves (OIS and LIBOR) with 12 nodes each. Computing the Jacobian is similar to computing the parameter sensitivities of 24 swaps, each with respect to the 24 nodes, and inverting a matrix. The inversion itself is almost irrelevant in terms of computation time, so we are left with the parameter sensitivities. In our implementation, this is done by AD and takes around 6 times the cost of a single PV, while finite differences would take 25 PV times: a gain of a factor 4 for this task.
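
Once the Jacobian is available, mapping the parameter sensitivities of a trade to market quote sensitivities is a single matrix application, as in the formula of the sensitivities section. A sketch, assuming the inverse transposed Jacobian has been pre-computed once per date and scenario (for instance with a solver like the one sketched in the calibration section); the names are illustrative:

```java
/** Sketch of the per-trade transformation from parameter deltas to market quote deltas. */
public final class MarketQuoteDeltas {

  /** quoteDeltas = inverseJacobianTransposed * parameterDeltas, one call per trade. */
  public static double[] toMarketQuotes(double[][] inverseJacobianTransposed, double[] parameterDeltas) {
    int n = parameterDeltas.length;
    double[] quoteDeltas = new double[n];
    for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
        quoteDeltas[i] += inverseJacobianTransposed[i][j] * parameterDeltas[j];
      }
    }
    return quoteDeltas;
  }
}
```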

Then there are the sensitivities of the 90 swaps in the portfolio. The computation time was around twice the Jacobian computation time. The swaps in the portfolio are not all long term, so the ratio with the Jacobian is of the right order. Like for the previous element, the gain here is probably a factor 4 thanks to AD. This emphasises that, for computational efficiency reasons, it is better to run the simulation for all counterparties in one run: one of the time-consuming tasks, the Jacobian computation, is common to all counterparties. Note that the representation of the swaps here is the full representation, with all the conventions, holidays and idiosyncratic details.

We come finally to the object of the previous blog, the attribution. The computation of the IM itself, the marginal contribution of each exposure and the attribution to the 6 sub-portfolios took around 10 times less computation time than the Jacobian. The use of AD here has probably brought a gain of a factor 2 or 3, but this is almost inconsequential as the IM computation time from the sensitivities is dwarfed by the computation time of the sensitivities themselves.

Conclusions


On the performance side, the computation of the forward IM using a risk-weight based methodology through a Monte Carlo approach is feasible in a reasonable time. The AD implementation brings real benefits; the more curves are involved, the more benefit it brings. The measure computation from the sensitivities is itself relatively fast, and improvements to that computation are almost invisible in the final computation time.

On the business side, doing the attribution at each forward date is important to attribute the MVA correctly. A simple attribution based on the spot IM would provide unreliable results.


In forthcoming blogs we will look at the cost of funding, the change of the IM methodology parameters through time and the computation of marginal MVA.



(1) Time measured on the author's laptop (MacBook Pro 13", 3.1 GHz Intel Core i5). Personal Java code on a single thread.