The efficient computation of derivatives has traditionally been used in finance to compute the "greeks" associated with financial instruments, in particular the deltas or bucketed PV01.
Recent regulations have pushed in the direction of more computation of cost of capital for market risk (FRTB) and of Initial Margin for uncleared trades (Uncleared Margin Regulation - UMR). The method used by most financial institutions already subject to the UMR is the ISDA-proposed SIMM approach. The approach is very similar to the FRTB capital computation, with some small twists. The base idea of both is to compute a VaR-like number based on conventional risk weights and correlations. This is equivalent to a delta-normal VaR computation in the RiskMetrics style, but with a variance-covariance matrix in a stylized format with prescribed values. I will use the generic term risk-weight based measure for those capital or IM methodologies.
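To fix ideas, here is the stylized form such a risk-weight based measure takes at the level of a single bucket (a simplification that ignores the bucket aggregation, add-ons and concentration thresholds of the actual SIMM and FRTB rules): the risk-weighted sensitivities s_i = RW_i \delta_i are aggregated with prescribed correlations \rho_{ij},

\mu(s) = \sqrt{\sum_{i,j} \rho_{ij}\, s_i\, s_j} = \sqrt{s^{\mathsf{T}} C\, s}, \qquad C_{ij} = \rho_{ij}

where the \delta_i are the deltas (PV01) and the RW_i the prescribed risk weights.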
The "delta" part of those methodologies is relying of the computation of PV01. This is where AD has been traditionally used in finance. This is the first layer of AD related to risk-weight based measure methodologies. As this is relatively standard, I will not focus on this aspect in this blog.
Marginal measure
A second topic for which Algorithmic Differentiation can bring significant improvements is marginal risk measures and measure attribution. The marginal measure is the increase in the measure coming from adding a small sensitivity (or trade) to the existing portfolio; it is the derivative of the measure with respect to an increase in that sensitivity or exposure. The marginal measure can be computed at the single-sensitivity level, at the trade level, or for any combination of trades. In the rest of this blog, I will consider the marginal measure at the most atomic level of our problem, that of a single sensitivity. Obviously, if the marginal measure is available at the lowest level, the marginals can be combined to obtain the marginals at any level above it. From a computational perspective, the lowest level of marginals is the most expensive; if we can compute it cheaply, then we can compute any other combination cheaply as well.
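For the stylized measure sketched above, the marginal at the single-sensitivity level is simply the gradient, which is available in closed form and is exactly what the adjoint (reverse) mode of AD produces in a single backward sweep:

\frac{\partial \mu}{\partial s_i}(s) = \frac{(C\,s)_i}{\sqrt{s^{\mathsf{T}} C\, s}} = \frac{(C\,s)_i}{\mu(s)}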
Euler attribution
The marginal measure is also closely linked to one standard attribution method, called "Euler attribution".
In general, an attribution of a measure (capital or IM) between sub-portfolios is a way to divide the total measure of a portfolio additively between the different sub-portfolios.
The Euler attribution is based on Euler's homogeneous function theorem. The theorem provides an equality satisfied by positively homogeneous functions. The standard approaches to capital, IM and VaR are in most cases positively homogeneous. This is the case for FRTB, SIMM (below the concentration risk threshold) and Delta-Normal VaR.
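As a reminder, a function f is positively homogeneous (of degree 1) when f(\lambda x) = \lambda f(x) for every \lambda > 0. The stylized measure above is easily checked to have this property:

\mu(\lambda s) = \sqrt{\lambda^2\, s^{\mathsf{T}} C\, s} = \lambda\, \mu(s), \qquad \lambda > 0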
What are we trying to do with attribution? We start with a portfolio made of sub-portfolios. We have k sub-portfolios denoted P_i, and the total portfolio is P:
P = \sum_{i=1}^{k} P_i = \sum_{i=1}^{k} 1 \cdot P_i
We want to split the measure for the total portfolio in an additive way between the different sub-portfolios. We cannot directly use the measure of each sub-portfolio, as the measure itself is not additive.
The following equality, known as Euler's homogeneous function formula, is satisfied by positively homogeneous functions:
f(x) = \sum_{i=1}^{k} x_i\, \partial_i f(x)
We introduce a function f which represents the measure μ applied to weighted combinations of the sub-portfolios:
f(x) = f\big((x_i)_{i=1,\dots,k}\big) = \mu\left(\sum_{i=1}^{k} x_i\, P_i\right)
The measure applied to the total portfolio is
\mu(P) = f(1, 1, \dots, 1)
Euler's theorem suggests an attribution based on
\mu(P) = \sum_{i=1}^{k} 1 \cdot \partial_i f(1, 1, \dots, 1)
One of the reasons this attribution is used is that it takes into account the offsets between sub-portfolios.
If you have the derivatives of the measure with respect to each individual sensitivity, obtaining \partial_i f(1, 1, \dots, 1) is, by the chain rule, simply a matter of summing the products of each sensitivity in sub-portfolio P_i with its marginal measure; the sub-portfolio attribution is just the sum of the atomic attributions of the sensitivities it contains.
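To make the mechanics concrete, below is a minimal Java sketch (illustrative only, not my production code; the class and method names are assumptions) that computes the stylized measure, all the marginal measures through a hand-written adjoint, and the Euler attribution by sub-portfolio:

```java
/**
 * Minimal illustrative sketch (not production code): a stylized risk-weight
 * based measure mu(s) = sqrt(s^T C s), its adjoint (reverse mode AD) gradient
 * giving all marginal measures in one sweep, and the Euler attribution by
 * sub-portfolio. Class and method names are assumptions.
 */
public final class RiskWeightMeasure {

  private final double[][] c; // prescribed correlations: C_ij = rho_ij (s holds risk-weighted sensitivities)

  public RiskWeightMeasure(double[][] c) {
    this.c = c;
  }

  /** The measure itself: mu(s) = sqrt(s^T C s). */
  public double measure(double[] s) {
    double q = 0.0;
    for (int i = 0; i < s.length; i++) {
      for (int j = 0; j < s.length; j++) {
        q += s[i] * c[i][j] * s[j];
      }
    }
    return Math.sqrt(q);
  }

  /**
   * Adjoint version: returns the measure and fills 'marginal' with all the
   * marginal measures d mu / d s_i = (C s)_i / mu, at roughly the cost of
   * one extra evaluation of the measure.
   */
  public double measureAdjoint(double[] s, double[] marginal) {
    double mu = measure(s);
    for (int i = 0; i < s.length; i++) {
      double cs = 0.0;
      for (int j = 0; j < s.length; j++) {
        cs += c[i][j] * s[j];
      }
      marginal[i] = cs / mu;
    }
    return mu;
  }

  /**
   * Euler attribution: the contribution of each sub-portfolio is the sum of
   * s_i * (d mu / d s_i) over the sensitivities it contains; the contributions
   * add up to the total measure.
   */
  public static double[] eulerAttribution(
      double[] s, double[] marginal, int[] subPortfolioOf, int nbSubPortfolios) {
    double[] attribution = new double[nbSubPortfolios];
    for (int i = 0; i < s.length; i++) {
      attribution[subPortfolioOf[i]] += s[i] * marginal[i];
    }
    return attribution;
  }
}
```

In this stylized form the adjoint sweep costs roughly one extra evaluation of the measure, and the attribution is a single pass over the marginals, which is consistent with the small multiplier observed in the performance example below.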
Performance example
What is the performance in practice of this method combined with AD? For this I have used a simple portfolio with 20 sub-portfolios of 500 exposures each, a total of 10,000 exposures. The measure selected is an IM computed using the SIMM methodology.
If we compute(1) a single IM for the portfolio (10,000 exposures), the computation time is 3.4 ms. If we were to compute the marginal IM of each exposure by finite difference, it would multiply the computation time by 10,000 (34,000 ms). If we were to compute the marginal IM for each sub-portfolio by finite difference, it would multiply the computation time by 21 (71.4 ms).
What do we obtain by Algorithmic Differentiation? For the above portfolio, the time required for the measure, all 10,000 marginal IMs and the 20 sub-portfolio attributions is 10.3 ms. Obtaining all those 10,000+ figures multiplies the computation time only by 3. This is in line with the theory (on the good side of the range) and more than 3,000 times faster than finite difference!
Savings from full marginal IM: 3,000 times shorter computation time
Savings from full IM attribution: 7 times shorter computation time
Conclusion
Using AD at two levels for risk-weight and correlation based risk measures significantly improves the computation time for marginal measures and attribution.
In a forthcoming blog, we will combine that with other uses of AD in MVA computations. We will add a third layer of AD. But that will probably be after the Christmas period.
(1) All the computations described were run 100 times in a loop and the figures reported are the averages per IM computation; run only once, the times are too small to be measured reliably. All times were measured on the author's laptop running personal Java code.
Material similar to the content of this blog was presented at the WBS xVA conference in March 2017 and at a Thalesians seminar in April 2017, the seminar that led The Wall Street Journal to use my picture (incorrectly, in my opinion) in the article "The Quants Run Wall Street Now".