Thursday, 17 January 2019

LIBOR fallback: a recognised expertise

Over the coming years, one of the main issues in interest rate trading and risk management will be the emergence of new benchmarks and the potential IBOR fallbacks.

In the past year, we have looked at those issues from a theoretical and from a practical point of view. Some of the theoretical issues are detailed in a note that Marc published recently called "A Quant Perspective on IBOR Fallback Consultation Results" (https://ssrn.com/abstract=3226183). On the practical side, we have implemented different fallback options, curve calibration mechanisms and convexity adjustments to analyze the impacts on large portfolios. Descriptions of the tools are available in a series of previous blogs.

Our expertise has been recognized by the market, as indicated by the invitations to most of the quantitative finance conferences in Europe over the coming months. Marc has been invited as guest speaker or expert panelist at the following events:
  • The future of LIBOR (CFA Society Denmark, Copenhagen, Denmark) - 24 January 2019
  • CASS Financial Engineering seminar (CASS Business School, London, UK) - 6 February 2019
  • Quant Summit (organized by Risk.Net, London, UK) - 6-7 March 2019
  • QuantMinds International (Vienna, Austria) - 14-16 May 2019
  • Risk Live (organized by Risk.Net, London, UK) - 27 June 2019
We are also presenting in-house workshops on similar subjects (see our training page on LIBOR) at different financial institutions in Europe.

Don't hesitate to contact us if you want to discuss potential in-house training or need expertise on this subject for your projects.

Marc's presentation at CASS Financial Engineering workshop

Marc Henrard will present a seminar at the CASS Financial Engineering workshops. The presentation will take place on Wednesday 6 February 2019 at 6:10 pm.

Marc's talk is titled
A quant perspective on LIBOR fallback.

Talk's abstract:
With the increased expectation of some IBORs being discontinued and the increasing regulatory requirements related to benchmarks, a more robust fallback provision for benchmark-linked derivatives is becoming paramount for the interest rate market. Several options for such a fallback have been proposed and ISDA held a consultation on some of them. The result of the ISDA consultation was to select the "compounding setting in arrears" option. We analyse the proposed option in detail and present an alternative option supported by different working groups. The presentation focuses on the quantitative finance impacts for derivatives. In our opinion, the option selected by the consultation fails the basic achievability criterion in many cases. Even when achievable, the option can lead to significant value transfer and risk management complexities. We also explain to what extent the fallback may transform some vanilla instruments into exotics.

Thursday, 10 January 2019

Updated Quant perspective on LIBOR fallback

The note on IBOR fallback in our series Market Infrastructure Analysis has been updated with the results of the ISDA consultation. The note, titled

A quant perspective on IBOR fallback consultation results,

is available on SSRN: http://ssrn.com/abstract=3308766

Abstract

With the increased expectation of some IBORs being discontinued and the increasing regulatory requirements related to benchmarks, a more robust fallback provision for benchmark-linked derivatives is becoming paramount for the interest rate market. Several options for such a fallback have been proposed and ISDA held a consultation on some of them. The result of the ISDA consultation was to privilege the "compounding setting in arrears" option. This note, which can be viewed as version 2.0 of a previous note, presents the different options briefly and analyses the privileged option in detail. It also presents an alternative option supported by different working groups. The note's focus is on the quantitative finance impacts for derivatives. In our opinion, the option selected by the consultation fails the basic achievability criterion in many cases. Even when achievable, the option can lead to significant value transfer and risk management complexities.

Tuesday, 1 January 2019

MVA and cost of funding

In the previous blog on Margin Value Adjustment (MVA), we showed that, for linear products, the forward Initial Margin (IM) along the different paths of a Monte Carlo simulation may not be very different. This was visible in Figure 2 of the previous blog.

The IM level is only one half of the MVA computation, even if it is often the more computationally intensive half. The other half is the cost of funding the IM. For illustration purposes, we use in this blog a cost of funding equal to the spread between LIBOR and OIS applied over a quarterly period. Obviously each institution needs to add some idiosyncratic spread on top of that, but for illustration purposes this simplified approach is enough.
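
To make the mechanics concrete, here is a minimal sketch of that computation on a single path, with purely invented IM levels and spreads; a full MVA would also discount each quarterly cost and average over the Monte Carlo paths.

/**
 * Minimal sketch of the funding-cost half of the MVA on one path.
 * The IM levels and LIBOR/OIS spreads are invented for illustration;
 * a full MVA would discount each cost and average over paths.
 */
public class ImFundingCost {

  public static void main(String[] args) {
    double[] forwardIm = {10.0e6, 10.2e6, 9.8e6, 10.1e6};       // IM at each quarter start
    double[] liborOisSpread = {0.0025, 0.0030, 0.0028, 0.0032}; // spread over each quarter
    double accrualFactor = 0.25;                                // simplified quarterly accrual
    double cost = 0.0;
    for (int q = 0; q < forwardIm.length; q++) {
      cost += forwardIm[q] * liborOisSpread[q] * accrualFactor; // cost accrued over quarter q
    }
    System.out.printf("Undiscounted funding cost on this path: %.0f%n", cost);
  }
}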

To generate the paths on which we computed the IM, we used an interest rate model. Even if our portfolio contains only interest rate swaps linked to a unique IBOR rate, it is important to generate the paths with a model that takes into account the stochastic spread between IBOR and OIS in a realistic way. The spread is used in the valuation/IM measurement, but more importantly in our case it is also used in the cost of funding computation. This is why the model used should match the current spread, take into account the spread dynamics (volatility) and have a realistic co-dependence (correlation) between the IBOR and OIS curves.

For this blog, we have used a relatively simple model that fulfills those requirements: a hybrid model for the dynamic multi-curve framework, as described in a recent Model Development document. The model includes the spread stochasticity and the correlation between rate level and spread. For the examples we provide below, the model has been calibrated to USD cap/floor prices for the IBOR rate dynamics and to the historical behavior of spreads and correlations.

We first repeat the underwhelming graph which represents the IM level of our portfolio (with a small number of paths).




Figure 1: An example of forward IM paths for a swap portfolio.


For the same paths, we have computed the quarterly cost of funding as the USD-LIBOR-3M/OIS-3M spread applied over the quarter to the IM. The costs are reproduced in the graph below. The colors for each path are the same in both graphs.



Figure 2: An example of IM cost for a LIBOR-OIS related funding cost.

As the graph clearly shows, the spread has more impact on the MVA than the precise IM level. In modelling terms, it is very important to have a realistic multi-curve model with a stochastic spread, to which a good understanding of the institution-specific idiosyncratic funding spread should be added.




Friday, 21 December 2018

Forward Initial Margin and multiple layers of AD

In the previous blog, I presented the application of AD to the question of Initial Margin (or Capital) attribution between desks in risk-weight based measures. In this instalment, I incorporate this feature into a Monte Carlo forward IM computation mechanism. Monte Carlo forward IM is one of the approaches to compute Margin Value Adjustment (MVA). The full MVA also requires the introduction of the cost of funding the IM and of discounting; the funding will be the focus of the next instalment.

The steps to obtain the forward IM in a Monte Carlo approach for interest rates in risk-weight based measures are the following:
  1. Calibrate a multi-curve framework and a dynamic model
  2. Evolve the curves to a sample of future dates using random scenarios
  3. For each date and scenario, compute the sensitivities (market quote deltas or bucketed PV01) of the portfolio
  4. Compute the IM for each counterparty based on the sensitivities and apply the sub-portfolio attribution (see the previous blog)
The next steps, required to obtain the MVA, will be dealt with in the next blog.
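
Before going through the steps one by one, here is a stylized skeleton of that loop; everything inside it (the single Gaussian risk factor, the two bucketed sensitivities, the risk weights) is an invented stand-in, and only the structure of the date/scenario loop is the point.

import java.util.Random;

/**
 * Skeleton of the Monte Carlo forward IM loop (steps 1-4 above).
 * All inputs are toy stand-ins: one Gaussian risk factor, two invented
 * bucketed sensitivities and stylized risk weights without correlation.
 */
public class ForwardImSkeleton {

  public static void main(String[] args) {
    Random rng = new Random(0);      // step 1 (calibration) is assumed already done
    int nbPaths = 7;
    double[] dates = new double[22]; // semi-annual dates over 11 years
    for (int d = 0; d < dates.length; d++) {
      dates[d] = 0.5 * (d + 1);
    }
    double[][] im = new double[nbPaths][dates.length];
    for (int p = 0; p < nbPaths; p++) {
      for (int d = 0; d < dates.length; d++) {
        // Step 2: evolve the curves; here a single toy Gaussian factor.
        double x = Math.sqrt(dates[d]) * rng.nextGaussian();
        // Step 3: bucketed sensitivities of the portfolio in that scenario (stubbed).
        double[] delta = {1.0e6 * (1.0 + 0.1 * x), -0.5e6 * (1.0 - 0.1 * x)};
        // Step 4: risk-weight based IM from the sensitivities (stylized, no correlation).
        im[p][d] = Math.hypot(0.011 * delta[0], 0.013 * delta[1]);
      }
    }
    System.out.printf("IM(path 0, first date): %.0f%n", im[0][0]);
  }
}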

Calibration


The calibration of a multi-curve framework from market quotes is a standard procedure; I refer to my book on the multi-curve framework, Chapter 5, for the details. Note that Algorithmic Differentiation (AD) is already important at this stage. The calibration procedure is often done using a root-finding algorithm of the Newton type, which requires the computation of the gradient of the market quote functions; this is done efficiently with AD. A multi-curve dynamic model is also required and similarly needs to be calibrated to the market.
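
As a toy illustration of that root-finding step, the sketch below calibrates a single zero rate to one invented deposit-style quote with Newton iterations; the gradient is written analytically here, where AD would provide it in a real multi-curve calibration.

/**
 * Toy sketch of calibration as Newton root-finding: solve for the zero
 * rate z such that a one-period deposit-style instrument reprices to an
 * invented market quote. The gradient is analytic here; in a multi-curve
 * calibration it would be computed by AD.
 */
public class NewtonCalibration {

  public static void main(String[] args) {
    double marketQuote = 0.02; // invented 1Y simple rate
    double t = 1.0;
    double z = 0.0;            // initial guess for the zero rate
    for (int i = 0; i < 10; i++) {
      double df = Math.exp(-z * t);
      double residual = (1.0 / df - 1.0) / t - marketQuote; // model quote minus market quote
      double gradient = Math.exp(z * t);                    // d(residual)/dz
      z -= residual / gradient;                             // Newton step
      if (Math.abs(residual) < 1.0e-14) {
        break;
      }
    }
    System.out.printf("Calibrated zero rate: %.6f%n", z); // ln(1.02), approximately 0.019803
  }
}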

For this blog, I'm using a hybrid multi-curve model, as described in the recent working paper Hybrid Model: A Dynamic Multi-Curve Framework. This is a relatively simple model that can be calibrated to the term structure of volatilities and includes a stochastic basis between LIBOR rates and OIS rates. This feature will be important when discussing the cost of funding in the next instalment.

Evolution


To evolve the curves, we use very standard techniques. The model describes the curves (OIS discount factors and LIBOR processes) at a future date in an explicit way based on Gaussian distributions. It is easy to obtain the values of the discount factors and LIBOR processes at those forward dates.
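
To give the flavour of such an explicit evolution, here is a one-factor Gaussian (Hull-White) sketch with invented parameters and a flat initial curve; the hybrid model used in the blog is richer, but the mechanics are the same: forward-date discount factors are an explicit function of Gaussian state variables.

import java.util.Random;

/**
 * Sketch of explicit curve evolution under a one-factor Gaussian
 * (Hull-White) model. Parameters and the flat initial curve are
 * invented for illustration; the hybrid model used in the blog is
 * two-factor with a stochastic LIBOR/OIS basis.
 */
public class GaussianCurveEvolution {

  static final double A = 0.03;     // mean reversion (assumed)
  static final double SIGMA = 0.01; // volatility (assumed)

  /** Initial discount factor P(0,t); flat 2% curve for illustration. */
  static double dsc0(double t) {
    return Math.exp(-0.02 * t);
  }

  static double b(double t, double u) {
    return (1.0 - Math.exp(-A * (u - t))) / A;
  }

  /** Discount factor P(t,u) in the scenario with Gaussian state x. */
  static double dsc(double t, double u, double x) {
    double phi = SIGMA * SIGMA * (1.0 - Math.exp(-2.0 * A * t)) / (2.0 * A); // Var[x(t)]
    return dsc0(u) / dsc0(t) * Math.exp(-b(t, u) * x - 0.5 * b(t, u) * b(t, u) * phi);
  }

  public static void main(String[] args) {
    Random rng = new Random(42);
    double t = 5.0; // forward date
    double stdDev = Math.sqrt(SIGMA * SIGMA * (1.0 - Math.exp(-2.0 * A * t)) / (2.0 * A));
    for (int path = 0; path < 3; path++) {
      double x = stdDev * rng.nextGaussian(); // x(t) ~ N(0, Var[x(t)])
      System.out.printf("path %d: P(5,10) = %.6f%n", path, dsc(t, 10.0, x));
    }
  }
}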

Sensitivities


The next step, obtaining the sensitivities of the portfolio with respect to the market quotes in each forward scenario, is probably less standard in derivative pricing. Most derivative valuation is based on values and cash flows, not on risk measures. The technical requirements are not very different: we have model-evolved curves and we want to compute a result that depends on those curves. But there is an extra catch: what we need for the risk-weight based measure is the sensitivity with respect to the market quotes selected by the methodology, not with respect to arbitrary model parameters. Those market quotes are not provided directly by the model. Fortunately, even if this is not something that we do explicitly in many places, it is something we do implicitly, and with a variation of AD techniques it can be implemented efficiently. The details can be obtained from a mixture of Chapters 5 and 6 ("Derivatives to non-inputs and non-derivatives to inputs" and "Calibration") of my book Algorithmic Differentiation in Finance Explained.
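
The chain rule behind that implicit computation is dV/dq = dV/dp (dq/dp)^{-1}: the portfolio's sensitivities to the curve parameters are combined with the inverse of the Jacobian of the market quotes with respect to those parameters. A sketch on an invented 2x2 example:

/**
 * Sketch of the transformation from parameter sensitivities to market
 * quote sensitivities through the calibration Jacobian:
 * dV/dq = dV/dp * (dq/dp)^{-1}. All the 2x2 numbers are invented.
 */
public class MarketQuoteSensitivity {

  public static void main(String[] args) {
    // dV/dp: sensitivities of the portfolio value to the curve parameters.
    double[] dVdp = {1500.0, -300.0};
    // dq/dp: Jacobian of the market quotes w.r.t. the curve parameters,
    // in practice obtained by AD through the calibration instruments.
    double[][] dqdp = {{0.98, 0.02}, {0.05, 0.95}};
    // Explicit 2x2 inverse; a linear solver would be used in general.
    double det = dqdp[0][0] * dqdp[1][1] - dqdp[0][1] * dqdp[1][0];
    double[][] inv = {
        {dqdp[1][1] / det, -dqdp[0][1] / det},
        {-dqdp[1][0] / det, dqdp[0][0] / det}};
    double[] dVdq = new double[2];
    for (int j = 0; j < 2; j++) {
      for (int i = 0; i < 2; i++) {
        dVdq[j] += dVdp[i] * inv[i][j]; // row vector times inverse Jacobian
      }
    }
    System.out.printf("dV/dq = [%.2f, %.2f]%n", dVdq[0], dVdq[1]);
  }
}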

Attribution


Once the sensitivities are computed for each trade, at each date and for each scenario, the measure and its attribution by sub-portfolio are obtained simply by applying the techniques described in the previous blog.

Figures


A picture is worth a thousand words. So let's put the above ideas in pictures.

First we take only one Monte Carlo scenario and look at the attribution. I have selected a small portfolio with one counterparty containing 30 swaps split between 6 sub-portfolios. The attribution is done on the different sub-portfolios.

The total IM is represented in red. The attribution is done using the Euler method described in the previous blog. With this attribution method, the offsets between positions are taken into account. This explains why some desks (Desk 2 and Desk 6) have negative attributions.


Figure 1: Forward IM attribution between desks.


What we can also see is that the relation between the different attributions changes through time. Today, Desk 1 is the largest contributor, while over time Desk 2 becomes the largest and even the only meaningful contributor after 8 years. This emphasises that an MVA attribution based only on today's spot IM attribution would not provide a fair representation of the contributions. The attribution along the path is really required.

Once we have done the attribution on one path, we can look at how the forward IM behaves along the different paths. In this case, we have kept the IM methodology unchanged through the life of the portfolio. In practice, the model parameters are reviewed on a regular basis (at least annually in the case of SIMM), so changes of methodology should be introduced along the paths. This is not done here and may be the subject of another blog at a later stage.

An example with 7 paths is proposed in the graph below. We use only a small number of paths to avoid overloading the picture. The performance analysis will be done with more paths.


Figure 2: Forward IM along different paths

The least we can say is that the graph is underwhelming. This can be explained easily: our portfolio contains only vanilla swaps, whose present values are almost linear in the underlying market quotes. As the IM methodology is sensitivity (first-order derivative) based, the IM numbers do not change significantly from one path to another. This does not mean that multiple paths are unnecessary for MVA, as we will discuss in the next blog.

Obviously a financial institution will have more than one counterparty. The next figure reproduces the example of the forward computation with three different counterparties.


Figure 3: Forward IM for different counterparties


Performance


What is the performance of such an implementation and where are the bottlenecks?

We ran the above approach on a portfolio of 90 swaps split between 3 counterparties and 6 sub-portfolios. The horizon is 11 years with semi-annual dates and 101 paths. The total computation time (1) was 18s. The split is:
  • Calibration: 340 ms.
  • Trade loading: 88 ms.
  • Path random variables: 11 ms.
  • Path fixings: 348 ms.
  • IMs: 17,430 ms.

The first line is the original calibration of the curves from market data stored in CSV files. The second line is the loading of the portfolio from a CSV file. The model is a two-factor model based on Gaussian distributions; generating the underlying random variables took 11 ms. As the trades we want to value age through the different dates, for each path we need to generate a full time series of fixings consistent with the model used, not only on the path dates but on all intermediary dates, as a swap can have a fixing on any date. As we have OIS in the portfolio, in practice every single date will be required. That is also relatively quick (348 ms).

As expected, the bulk of the time is spent computing the sensitivities and combining them into the IM. One of the time-consuming tasks is computing the market quote Jacobian matrices required to obtain the sensitivities to market quotes, even if the model does not provide the market quotes directly. In our example we use two curves (OIS and LIBOR) with 12 nodes each. Computing the Jacobian is similar to computing the parameter sensitivities of 24 swaps, each with respect to 24 nodes, and inverting a matrix. The inversion itself is almost irrelevant in terms of computation time. We are left with the parameter sensitivities. In our implementation, this is done by AD and takes around 6 PV times, while finite difference would take 25 PV times: a gain of a factor 4 for this task.

Then there are the sensitivities of the 90 swaps in the portfolio. The computation time was around twice the Jacobian computation time. The swaps in the portfolio are not all long term, so the ratio to the Jacobian is of the right size. As for the previous element, the gain here is probably a factor 4 thanks to AD. This emphasises that, for computational efficiency reasons, it is better to run the simulation for all counterparties in one run: one of the time-consuming tasks, the Jacobian computation, is common to all counterparties. Note that the representation of the swaps here is the full representation, with all the conventions, holidays and idiosyncratic details.

We come finally to the object of the previous blog, which was attribution. The computation of the IM itself, the marginal IM of each exposure and the attribution to the 6 sub-portfolios took around 10 times less computation time than the Jacobian. The use of AD here has probably brought a gain of a factor 2 or 3, but this is almost inconsequential, as the IM computation from the sensitivities is dwarfed by the computation of the sensitivities themselves.

Conclusions


On the performance side, the computation of forward IM using a risk-weight based methodology through a Monte Carlo approach is feasible in reasonable time. The AD implementation brings real benefits: the more curves are involved, the more benefit it brings. The measure computation from the sensitivities is itself relatively fast, and improvements to that computation are almost invisible in the final computation time.

On the business side, doing the attribution at each forward date is important to attribute the MVA correctly. A simple attribution based on the spot IM would provide unreliable results.


In forthcoming blogs we will look at the cost of funding, the change of the IM methodology parameters through time and the computation of marginal MVA.



(1) Times computed on the author's laptop (13" MacBook Pro, 3.1 GHz Intel Core i5). Personal Java code on a single thread.

Wednesday, 12 December 2018

Initial margin and double AD

The use of Algorithmic Differentiation (AD) in finance has become more popular in the last 5 to 10 years. AD can be described as "the art of calculating the differentiation of functions with a computer". An introduction to AD in finance can be found in my book of the same title.

The efficient computation of derivatives has traditionally been used in finance to compute the "greeks" associated with financial instruments, in particular the deltas or bucketed PV01.

Recent regulations have pushed in the direction of more computation of the cost of capital for market risk (FRTB) and of Initial Margin for uncleared trades (Uncleared Margin Regulation - UMR). The method used by most financial institutions already subject to the UMR is the ISDA-proposed SIMM™ approach. The approach is very similar to the FRTB capital computation, with some small twists. The base idea of both is to compute a VaR-like number based on conventional risk weights and correlations. This is equivalent to a delta-normal VaR computation in the RiskMetrics style, but with a variance-covariance matrix in a stylized format with prescribed values. I will use the generic term risk-weight based measure for those capital or IM methodologies.
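
A toy version of such a measure is sketched below; the sensitivities, risk weights and correlations are invented, not actual SIMM or FRTB values.

/**
 * Toy risk-weight based measure in the delta-normal style:
 * m(s) = sqrt( sum_{i,j} w_i s_i rho_{ij} w_j s_j ).
 * Weights and correlations are invented, not SIMM/FRTB values.
 */
public class RiskWeightMeasure {

  static double measure(double[] s, double[] w, double[][] rho) {
    double sum = 0.0;
    for (int i = 0; i < s.length; i++) {
      for (int j = 0; j < s.length; j++) {
        sum += w[i] * s[i] * rho[i][j] * w[j] * s[j];
      }
    }
    return Math.sqrt(sum);
  }

  public static void main(String[] args) {
    double[] s = {1.0e6, -0.5e6, 2.0e6}; // bucketed PV01-type sensitivities
    double[] w = {0.011, 0.013, 0.015};  // prescribed risk weights
    double[][] rho = {{1.0, 0.7, 0.5}, {0.7, 1.0, 0.7}, {0.5, 0.7, 1.0}};
    System.out.printf("IM-like measure: %.0f%n", measure(s, w, rho));
  }
}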

The "delta" part of those methodologies relies on the computation of PV01. This is where AD has traditionally been used in finance, and it is the first layer of AD related to risk-weight based measure methodologies. As this is relatively standard, I will not focus on this aspect in this blog.

Marginal measure


A second topic for which Algorithmic Differentiation can bring significant improvements is that of marginal risk measures and measure attribution. The marginal measure is the increase in the measure coming from adding a small sensitivity (or trade) to the existing portfolio. It is the derivative of the measure with respect to an increase in the sensitivity/exposure. This marginal measure can be computed at the single-sensitivity level, at the trade level, or at the level of any combination of trades. In the rest of the blog, I will consider the marginal measure at the most atomic level of our problem, the level of a single sensitivity. Obviously, if the marginal measure is available at the lowest level, the marginals can be combined to obtain the marginals at any level above it. From a computational perspective, the lowest level of marginals is the most expensive; if we can solve it cheaply, then we can solve any other combination cheaply as well.
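
For a measure of the form m(s) = sqrt(s' C s) with C[k][j] = w[k] rho[k][j] w[j], the full vector of marginals is the gradient (C s)/m(s), which is exactly what one adjoint AD sweep produces at roughly constant cost; a sketch reusing the invented numbers from above:

/**
 * Sketch of the marginal measure: for m(s) = sqrt(s' C s) with
 * C[k][j] = w[k] * rho[k][j] * w[j], the gradient is (C s) / m(s).
 * One adjoint AD sweep produces this full vector in a single pass.
 * Numbers are the same invented ones as in the measure sketch above.
 */
public class MarginalMeasure {

  static double measure(double[] s, double[] w, double[][] rho) {
    double sum = 0.0;
    for (int i = 0; i < s.length; i++) {
      for (int j = 0; j < s.length; j++) {
        sum += w[i] * s[i] * rho[i][j] * w[j] * s[j];
      }
    }
    return Math.sqrt(sum);
  }

  /** Marginal measure of each exposure: the gradient of the measure. */
  static double[] marginals(double[] s, double[] w, double[][] rho) {
    double m = measure(s, w, rho);
    double[] g = new double[s.length];
    for (int k = 0; k < s.length; k++) {
      for (int j = 0; j < s.length; j++) {
        g[k] += w[k] * rho[k][j] * w[j] * s[j];
      }
      g[k] /= m;
    }
    return g;
  }

  public static void main(String[] args) {
    double[] s = {1.0e6, -0.5e6, 2.0e6};
    double[] w = {0.011, 0.013, 0.015};
    double[][] rho = {{1.0, 0.7, 0.5}, {0.7, 1.0, 0.7}, {0.5, 0.7, 1.0}};
    double[] g = marginals(s, w, rho);
    System.out.printf("marginals: %.5f %.5f %.5f%n", g[0], g[1], g[2]);
  }
}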

Euler attribution


The marginal measure is also closely linked to one standard method of attribution, the method called "Euler attribution".

In general, a measure (capital or IM) attribution between sub-portfolios is a way to divide the total measure of a portfolio additively between the different sub-portfolios.

The Euler attribution is based on Euler's homogeneous function theorem. The theorem provides an equality for positively homogeneous functions. The standard approaches to capital, IM and VaR are in most cases positively homogeneous; this is the case for FRTB, SIMM (below the concentration risk threshold) and delta-normal VaR.

What are we trying to do with attribution? We start with a portfolio made of sub-portfolios. We have k sub-portfolios denoted P_i and the total portfolio is P:
P = \sum_{i=1}^{k} P_i = \sum_{i=1}^{k} 1 \cdot P_i

We want to split the measure of the total portfolio in an additive way between the different sub-portfolios. We cannot directly use the measure of each sub-portfolio, as the measure itself is not additive.

The following equality, called Euler's homogeneous function formula, is satisfied for positively homogeneous functions:
f(x) = \sum_{i=1}^{k} x_i \, D_i f(x)

We have a function f which represents the measure \mu applied to combinations of the sub-portfolios:
f(x) = f((x_i)_{i=1,\dots,k}) = \mu\Big( \sum_{i=1}^{k} x_i P_i \Big)

The measure applied to the total portfolio is
\mu(P) = f(1, 1, \dots, 1)

Euler's theorem suggests an attribution based on
\mu(P) = \sum_{i=1}^{k} 1 \cdot D_i f(1, 1, \dots, 1)

One of the reasons this attribution is used is that it takes into account the offsets between sub-portfolios.

If you have the derivatives of the function f with respect to each individual sensitivity in the sub-portfolios, obtaining D_i f(1, 1, \dots, 1) is simply a matter of adding the numbers for the sensitivities in each sub-portfolio.
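
Putting the pieces together on invented numbers: aggregate the sub-portfolio sensitivities, compute the measure and its gradient once, and give each sub-portfolio the gradient-weighted sum of its own sensitivities. Because the measure is positively homogeneous of degree 1, the attributions sum exactly to the total.

/**
 * Sketch of Euler attribution for a positively homogeneous measure:
 * each sub-portfolio receives the gradient-weighted sum of its own
 * sensitivities and the attributions add up to the total measure.
 * All numbers are invented for illustration.
 */
public class EulerAttribution {

  static double measure(double[] s, double[] w, double[][] rho) {
    double sum = 0.0;
    for (int i = 0; i < s.length; i++) {
      for (int j = 0; j < s.length; j++) {
        sum += w[i] * s[i] * rho[i][j] * w[j] * s[j];
      }
    }
    return Math.sqrt(sum);
  }

  static double[] marginals(double[] s, double[] w, double[][] rho) {
    double m = measure(s, w, rho);
    double[] g = new double[s.length];
    for (int k = 0; k < s.length; k++) {
      for (int j = 0; j < s.length; j++) {
        g[k] += w[k] * rho[k][j] * w[j] * s[j];
      }
      g[k] /= m;
    }
    return g;
  }

  public static void main(String[] args) {
    double[][] subSens = { // bucketed sensitivities of each sub-portfolio
        {1.0e6, -0.5e6, 2.0e6},
        {-0.4e6, 0.8e6, -1.0e6}};
    double[] w = {0.011, 0.013, 0.015};
    double[][] rho = {{1.0, 0.7, 0.5}, {0.7, 1.0, 0.7}, {0.5, 0.7, 1.0}};
    double[] total = new double[w.length];
    for (double[] sub : subSens) {
      for (int b = 0; b < total.length; b++) {
        total[b] += sub[b];
      }
    }
    double m = measure(total, w, rho);
    double[] g = marginals(total, w, rho); // one gradient serves all attributions
    double check = 0.0;
    for (int p = 0; p < subSens.length; p++) {
      double attr = 0.0;
      for (int b = 0; b < total.length; b++) {
        attr += g[b] * subSens[p][b]; // Euler attribution of sub-portfolio p
      }
      check += attr;
      System.out.printf("Desk %d attribution: %.0f%n", p + 1, attr);
    }
    System.out.printf("Sum of attributions %.0f = total measure %.0f%n", check, m);
  }
}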

Performance example 


What is the performance of this method combined with AD in practice? For this test I have used a simple portfolio with 20 sub-portfolios of 500 exposures each. This is a total of 10,000 exposures. The measure selected is an IM computed using the SIMM methodology.

If we compute(1) a single IM for the portfolio (10,000 exposures), the computation time is 3.4 ms. If we were to compute the marginal IM of each exposure by finite difference, it would multiply the computation time by 10,000 (34,000 ms). If we were to compute the marginal IM for each sub-portfolio by finite difference it would multiply the computation time by 21 (714 ms).

What do we obtain with Algorithmic Differentiation? For the above portfolio, the time required for the measure, all 10,000 marginal IMs and the 20 sub-portfolio attributions is 10.3 ms. Obtaining all those 10,000+ figures multiplies the computation time only by 3. This is in line with the theory (on the good side of the range) and more than 3,000 times faster than finite difference!

Savings from full marginal IM: 3,000 times shorter computation time
Savings from full IM attribution: 7 times shorter computation time

Conclusion


Using AD at two levels for risk-weight and correlation based risk measures significantly improves the computation time for marginal measures and attribution.

In a forthcoming blog, we will combine that with other uses of AD in MVA computations. We will add a third layer of AD. But that will probably be after the Christmas period.


(1) We have run all the computations described 100 times in a loop and the figures reported are the averages per IM computation. If we run them only once, the times are too small to measure reliably. All reported times were measured on the author's laptop running personal Java code.



Material similar to that described in this blog was presented at the WBS xVA conference in March 2017 and at a Thalesians seminar in April 2017, the seminar that led The Wall Street Journal to use my picture (incorrectly, in my opinion) in the article "The Quants Run Wall Street Now".

Saturday, 8 December 2018

Copenhagen Risk conference and workshop - 23-24 January 2019

Marc Henrard will present a seminar at the conference

CFA Society Denmark Risk Conference

which will take place on Wednesday 23 January 2019. The agenda of the conference can be found on the organizer's web site:




On the next day, Thursday 24 January 2019, he will present the workshop

The future of LIBOR: Quantitative perspective on benchmarks, overnight, fallback and regulation.

The agenda of the workshop and registration details can be found on the organizer's web site:


Marc will be in Copenhagen from 22 to 24 January. Don't hesitate to reach out if you want to meet during that time.


Wednesday, 5 December 2018

Event

Marc Henrard will attend the conference

Annual Forecast Event

which will take place at The Hotel Brussels on Monday 10 December 2018. The agenda of the conference can be found on the organizer's web site:


Don't hesitate to reach out to Marc at the conference.

Thursday, 29 November 2018

Course "The future of LIBOR: Quantitative perspective on benchmarks, overnight, fallback and regulation"


Following requests from several clients, we have developed a training/workshop around the new benchmarks and the LIBOR fallback. A typical agenda for the course is presented below.

  • Cash-collateral discounting. 
    • The standard collateral results and their exact application. 
    • What is hidden behind OIS discounting (and when it cannot be used)?
    • Impact of new benchmarks on valuation
  • EU Benchmark regulation
  • The "alternative" benchmarks:
    • Progress in different jurisdictions
    • SOFR, reformed SONIA, ESTER, SARON, TONAR.
    • Secured v unsecured choice.
    • What about term rates?
    • Curve calibration
    • SOFR and EFFR: two overnight rates in one currency!
  • Status in different currencies. Cleared OTC products, liquidity. The different consultations in progress and what to expect from them.
  • Fallback options
    • ISDA consultation
    • The different options for the "adjusted rate"
    • The different options for the "adjustment spread"
    • Quantitative impacts: convexity adjustments and risk
    • Clearing house adoption
  • Risk management of transition.
    • Risk impacts
    • Potential impacts on systems
    • What would a risk solution look like?
    • Multi-curve: double or quit?
    • Interest rate modelling
  • New products associated to new benchmarks
    • Futures on overnight benchmarks
    • Deliverable swap futures
Detailed lecture notes for participants.

The training is usually proposed as a one-day program.



Don't fallback, step forward!

Contact us for our LIBOR fallback training and quant solutions.



Other course proposals are available on our Training Page.