Estimators in the case of augmented GARCH processes: Asymptotic theory 


Predictive power for the risk using the SQP

We now focus on the main goal, namely testing the SQP's ability to assess the estimated future risk correctly – what we call predictive power. Let us briefly point out why this differs from classical backtesting, which designates a statistical procedure that compares realizations with forecasts. First, recall that, for the VaR, some of the backtesting methods used by European banks and the Basel committee are based on the so-called violation process, counting how many times the estimated/forecast VaR_n(a) has been violated by realized losses during the following year; those VaR 'violations' should form a sequence of iid Bernoulli variables with probability (1 − a) (see e.g. [40] for a simple binomial test, [85] for a multinomial one). Here, instead, we want to assess the risk, and thus the quality of the estimation. While the violation ratio in [43] gives a measure of the quality of the estimation (based on the violation process), here we do not question at all whether the violation process is in line with the acceptable number of exceptions.
Taking the risk management point of view, we consider the estimated VaR directly as a capital amount (without referring to the confidence interval associated with the estimate). This is why, as pointed out earlier, we propose to interpret this estimate (a statistical number) as an observation of an SQP sample path. We then judge the relevance of the obtained VaR by testing directly whether the capital computed last year (via a VaR estimate) corresponds to that needed this year (the realized future value of the VaR). Note that this does not mean denying the uncertainty: to cope with it, as well as to show financial strength, financial institutions usually introduce a buffer on top of the risk-adjusted capital (its evaluation depends on the institution's own assessment rather than on regulation).
This alternative way of looking at risk (in addition to the violation process) subsequently leads to a new way of assessing the quality of the risk estimation. Moreover, this method of risk quantification can be applied to any risk measure besides the VaR (for instance, the Expected Shortfall).
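To contrast the two views concretely, the following minimal sketch computes both a classical violation count and an SQP-style realized-to-estimated VaR ratio. The Gaussian returns, the historical-simulation VaR estimator, and the 250-day year are illustrative assumptions, not the thesis's actual data or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.99                                     # VaR confidence threshold
returns = rng.normal(0.0, 0.01, size=500)    # hypothetical daily returns, two years

# Historical VaR estimate on the first year (250 business days)
var_est = -np.quantile(returns[:250], 1 - a)

# Classical backtest: count violations during the following year;
# under a correct model they behave like iid Bernoulli(1 - a) draws.
losses_next = -returns[250:500]
violations = (losses_next > var_est).sum()
print(violations, "violations; expected about", 250 * (1 - a))

# SQP-style view: compare the capital (VaR estimated last year)
# with the realized VaR of the following year via their ratio.
var_realized = -np.quantile(returns[250:500], 1 - a)
print("realized-to-estimated VaR ratio:", round(var_realized / var_est, 2))
```

A ratio near 1 indicates that last year's capital matched this year's realized risk, regardless of how many individual violations occurred.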

Pro-cyclicality results on real data

As mentioned in the introduction, pro-cyclicality of risk measurements implies that, when the market is quiet, the future risk is under-estimated, while, during a crisis, the future risk is over-estimated. We address this problem by first defining an indicator of market states, then evaluating the performance of the risk measure conditioned on this indicator.

Realized volatility as an indicator of market states

To better understand the dynamic behavior of the ratios, we condition them on past volatility. This is inspired by the seminal empirical study on Foreign Exchange Rates in [41], where the authors conditioned the returns on past volatility. On the one hand, they showed that after a period of high volatility, the returns tend to be negatively correlated. This means that the volatility should diminish in the future, with a distribution more concentrated around its mean; thus the quantiles at given thresholds should be smaller than the current ones, implying that the future risk would be overestimated when evaluated on this high-volatility period. On the other hand, they showed that in times of low volatility, consecutive returns tend to be positively auto-correlated; thus the volatility should increase in the long term, shifting the future quantiles to the right, meaning that our estimation would underestimate the future risk.
To be able to condition on volatility, we define an empirically measured annualized volatility, v_{k,T}(t), taken over a rolling sample of size T year(s), corresponding to n business days, up to but not including a time t. By abuse of notation, we refer with t − j to a point in time j days before time t (for any integer-valued j), which gives us: v_{k,T}(t) = √n · v_{k,n}(t − 1).
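The rolling annualized volatility above can be sketched as follows. The dispersion measure v_{k,n} is assumed here to be the k-th absolute centred sample moment (so k = 1 gives the sample MAD and k = 2 the sample standard deviation), consistent with the MAD choice mentioned below; the function names and the simulated returns are illustrative.

```python
import numpy as np

def v_kn(x, k):
    """k-th absolute centred sample moment of the window:
    k = 1 is the sample MAD (mean absolute deviation),
    k = 2 is the (population) sample standard deviation."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - x.mean()) ** k) ** (1.0 / k)

def annualized_vol(returns, t, n, k=1):
    """v_{k,T}(t) = sqrt(n) * v_{k,n}(t - 1): dispersion of the n
    daily returns strictly before day t, annualized by sqrt(n)."""
    window = returns[t - n:t]          # up to but not including time t
    return np.sqrt(n) * v_kn(window, k)

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=500)    # hypothetical daily returns
print(annualized_vol(r, t=300, n=250, k=1))
```

With 1% daily Gaussian noise and n = 250, the MAD-based value lands around 0.1, i.e. roughly 10% annualized volatility.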

SQP predictive power as a function of volatility

In the case of the S&P 500, we represent in the same graph the time series of both the volatility v_{k,T}(t) and the SQP ratios, separately for each of the thresholds 95% and 99%, to compare their behavior over time. We illustrate these dynamics in Figure 2.3, considering e.g. the case of a 1-year sample and k = 1 for the volatility (MAD). One can clearly observe in these graphs the strongly opposing behavior of the SQP ratios (dark) and the volatility (light), as well as the pronounced dynamics of both series.
The realized SQP ratios can deviate from 1 on either the upper or the lower side. The negative dependence between the realized SQP ratios and the volatility is obvious in both graphs. This feature is important for our study because it means that when the market volatility in year t is high, the realized SQP ratio in year t is quite low. Recalling the definition of the ratio, this situation means that the risks have been over-evaluated by the SQP calculated in year t (in comparison to the realized risk in year t + 1). Conversely, when the volatility is low in year t, the realized SQP ratio in this year is often much higher than 1, which means that the risks for the year t + 1 have been under-evaluated by the SQP calculated in year t.
Another way of visualizing the pro-cyclicality is to plot the various realized SQP ratios as a function of volatility. We illustrate the obtained behavior in Figure 2.4, choosing the same parameters as in Figure 2.3. Such figures highlight better the existing negative dependence between these two quantities.
The plots in the first row of Figure 2.4 show that the (negative) dependence is nonlinear; hence we consider the log-ratios instead (see the plots in the second row), which reveal a stronger dependence than the raw ratios: the more volatility there is in year t, the lower the ratios R_{a,T}(t). When the log-ratios are negative, the losses in year t + 1 have been overestimated by the measures calculated in year t. In the original scale, we can better quantify the magnitude of the effect: looking at the case a = 99%, with respect to the overestimation, we observe ratios below 0.5 for high volatilities (above twice the average of 12%, with MAD), i.e. the risk computed at the height of the crisis is more than twice the size of the risk measured a year later. On the other hand, the underestimation can be substantial, since the realized SQP ratios can be even larger than 3; in other words, the risk next year is more than 3 times the risk measured during the current year for this sample. We also observe that the overestimation is very systematic for high volatility, whereas the underestimation at low volatility is not: in the latter case we also see values below 1, while we do not see any value above 1 when the volatility is high (above 20%, close to twice the average over the whole sample). Although we present here only the S&P 500, very similar behaviors are exhibited by all the other indices; see e.g. Appendix A.1.
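The negative dependence between year-t volatility and the log-ratio can be probed numerically along the same lines. The sketch below is a toy experiment, not the thesis's analysis: it uses synthetic returns with a slowly oscillating volatility to mimic market cycles, a historical VaR per year, and the correlation between volatility in year t and the log-ratio of year-(t+1) to year-t VaR.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 250                                # business days per year
years = 20

# Hypothetical returns whose volatility cycles slowly over time
sigma = 0.01 * (1 + 0.8 * np.sin(np.linspace(0, 8 * np.pi, years * n)))
r = rng.normal(0.0, sigma)

a = 0.99
var_year = np.array(
    [-np.quantile(r[y * n:(y + 1) * n], 1 - a) for y in range(years)]
)
vol_year = np.array(
    [np.sqrt(n) * np.mean(np.abs(r[y * n:(y + 1) * n])) for y in range(years)]
)

# Log-ratio of next year's realized VaR to this year's estimated VaR;
# pro-cyclicality shows up as a negative correlation with vol_year.
log_ratio = np.log(var_year[1:] / var_year[:-1])
corr = np.corrcoef(vol_year[:-1], log_ratio)[0, 1]
print("corr(volatility in year t, log-ratio):", round(corr, 2))
```

For a mean-reverting volatility process, high volatility in year t tends to precede a lower VaR in year t + 1, which pushes this correlation below zero, in line with the dependence seen in Figure 2.4.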


Table of contents :

1 Introduction 
1.1 Evolution of the thesis
1.2 Structure and content of the thesis
2 Quantifying pro-cyclicality: An empirical study 
2.1 Introduction
2.2 Quantifying the predictive power of risk measures
2.2.1 Sample Quantile Process
2.2.2 Predictive power for the risk using the SQP
2.3 Pro-cyclicality results on real data
2.3.1 Realized volatility as an indicator of market states
2.3.2 SQP predictive power as a function of volatility
2.4 Pro-cyclicality results on different models
2.4.1 IID model
2.4.2 GARCH(1, 1) model
2.4.3 Other influences
2.5 Conclusion
3 Estimators in the IID case: Asymptotic theory 
3.1 Introduction
3.2 Sample quantile and r-th absolute centred sample moment
3.2.1 Auxiliary results
3.2.2 Proof of Theorem 3.1
Using Bahadur’s method
Using Taylor’s method
3.2.3 Extensions
3.3 Sample quantile and MedianAD
3.4 Location-scale quantile and measures of dispersion
3.4.1 Location-scale quantile and r-th absolute centred sample moment
3.4.2 Location-scale quantile and MedianAD
3.5 Discussion
3.6 Examples/ finite sample performance
3.6.1 The impact of the choice of the quantile estimator
3.6.2 The effect of sample size in estimation
3.7 Conclusion
4 Estimators in the case of augmented GARCH processes: Asymptotic theory 
4.1 Introduction
4.2 The bivariate FCLT
4.2.1 Proof of Theorem 4.3
4.3 Examples
4.4 Conclusion
5 Pro-cyclicality: Connecting empirics and theory 
5.1 Introduction
5.2 Pro-cyclicality in IID models
5.2.1 CLT’s between risk and dispersion measure estimators
5.2.2 Results on pro-cyclicality
5.3 Pro-cyclicality in augmented GARCH(p,q) models
5.3.1 FCLT’s between risk and dispersion measure estimators
5.3.2 Results on pro-cyclicality
5.4 Application
5.4.1 Comparing pro-cyclicality in IID models
5.4.2 Pro-cyclicality analysis on real data (reprise)
5.5 Conclusion
6 Conclusion 
6.1 Summary
6.2 Further research perspectives
Bibliography

