Book 1. Market Risk
FRM Part 2
MR 6. Validating BHCs VaR Models

Presented by: Sudhanshu
Module 1. VaR Model Validation
Topic 1. VaR Model Validation
Topic 2. Conceptual Soundness of VaR Models
Topic 3. Sensitivity Analysis for VaR
Topic 4. Confidence Intervals for VaR: Key Challenges
Topic 5. Benchmarking VaR Models
Topic 1. VaR Model Validation
- The 1996 Market Risk Amendment to the Basel Accord established VaR as the risk metric for determining banks' regulatory capital for market risk.
- VaR measures the downside risk of a portfolio and has three components: (1) the loss size (calculated by the model), (2) a specified probability c of a loss greater than or equal to that size, and (3) the time frame for measurement.
- Under the Basel framework, the probability of loss was set at c = 1%, measured over a daily time frame.
- Exceedances (or exceptions) are actual losses that exceed the VaR threshold.
- A bank's regulatory capital is adjusted higher when it has a higher number of exceedances.
- Traditional VaR limitation: historical data ignores dynamic volatility in portfolio returns. Solution: GARCH VaR models capture recent changes in volatility.
- Univariate approaches to VaR estimation assume a distribution for portfolio returns and use the lowest c% cutoff to calculate VaR.
- In contrast, multivariate approaches use a separate distribution for each portfolio component and require volatility estimates for each component's return distribution as well as the correlations between components.
- As the number of portfolio positions increases, a multivariate VaR estimation model can quickly become difficult to manage.
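The univariate approach above can be sketched in a few lines: take the lowest c% cutoff of the portfolio return history. This is a minimal illustration, not a production model; the function name and the simulated return series are assumptions for the example.

```python
import numpy as np

def univariate_var(returns, c=0.01):
    """Univariate VaR: the lowest c% cutoff of portfolio returns.

    returns : array of historical (or pseudo-historical) portfolio returns
    c       : tail probability (the Basel framework set c = 1%, daily)
    VaR is reported as a positive loss number.
    """
    return -np.quantile(returns, c)

# Toy example: 1,000 simulated daily returns (illustrative only)
rng = np.random.default_rng(42)
rets = rng.normal(0.0, 0.01, 1000)
print(f"1-day 99% VaR: {univariate_var(rets):.4%}")
```

With c = 1%, an exceedance on roughly 1% of days is expected if the model is well calibrated; materially more exceedances raise regulatory capital.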
-
Topic 2. Conceptual Soundness of VaR Models
VaR Model Validation Requirements
- Assess the soundness of the methodology
- Evaluate the quality of the input data
- Verify the validity of model assumptions
- Ensure alignment with the bank's specific risk management objectives

Historical VaR Limitations
- Studies show that historical VaR models underperform basic GARCH VaR models estimated on historical P&L
- This calls into question the utility of historical VaR estimates

Input Data Issues: Pseudo-Historical Returns
- Historical portfolio returns ignore changes in portfolio composition over time
- Solution: use pseudo-historical returns (historical returns recalculated using the current portfolio composition)
- This is critical when portfolio composition changes significantly over time
- Example: a derivatives overlay reduces risk; historical returns would show no change in VaR, while pseudo-historical returns would show a lower VaR
- Challenge: large trading desks may have missing data (insufficient history or unavailable valuations)
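The pseudo-historical idea can be sketched as follows: apply today's portfolio weights to the historical returns of each component, so the return history reflects the current composition. The weights and return figures below are hypothetical, for illustration only.

```python
import numpy as np

def pseudo_historical_returns(component_returns, current_weights):
    """Recompute portfolio returns as if today's composition had been
    held throughout history.

    component_returns : (T, N) array of historical returns per component
    current_weights   : (N,) array of today's portfolio weights
    """
    return component_returns @ current_weights

# Illustrative: two assets; the portfolio recently added an overlay that
# shorts asset 2 (all weights are hypothetical).
hist = np.array([[ 0.01, -0.02],
                 [-0.03,  0.01],
                 [ 0.02,  0.00]])
old_w = np.array([1.0,  0.0])   # composition during the history
new_w = np.array([0.7, -0.2])   # composition today (with overlay)
print("historical-composition returns:", pseudo_historical_returns(hist, old_w))
print("pseudo-historical returns:     ", pseudo_historical_returns(hist, new_w))
```

Feeding the pseudo-historical series (rather than the raw historical series) into the VaR calculation lets the estimate reflect the risk-reducing overlay.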
Conceptual Soundness Tests
- The model must reflect risk changes resulting from changes in portfolio positions (the critical test)
- Quantitative tests: sensitivity analysis and confidence intervals
Practice Questions: Q1
Q1. Which of the following items is crucial when evaluating the conceptual soundness of a VaR model?
A. The portfolio return distribution should be normal.
B. Inputs need to be adjusted to determine the change in VaR.
C. The model should use actual historical returns to calculate VaR.
D. The model should be designed to meet specific risk management objectives.
Practice Questions: Q1 Answer
Explanation: D is correct.
Conceptually sound models are designed to meet the specific risk management objectives of the bank. Understanding the intended use of the VaR model (e.g., regulatory capital calculation, internal risk assessment) is essential for evaluating its appropriateness. Portfolio return distributions tend to be nonnormal, so the VaR model should not assume a normal distribution. Adjusting key inputs is part of sensitivity analysis and is not used to evaluate the conceptual soundness of the model. Historical returns would not be a good input for calculating VaR when the portfolio composition is dynamic.
Topic 3. Sensitivity Analysis for VaR
Sensitivity analysis is a quantitative test used to examine the validity of VaR estimates and the conceptual soundness of a VaR model. It involves the following steps:

Step 1: Identify Key Inputs
- Determine the key inputs and assumptions (e.g., portfolio positions)
- Validate assumptions and identify simplifications made when generating the pseudo history
- Note: omissions and simplifications can materially affect the VaR output
- Regulators increasingly require tracking data proxies and monitoring excluded risks

Step 2: Adjust Inputs and Recalculate VaR
- Change one input at a time (e.g., adjust a position weight by a given percentage)
- Recalculate the VaR threshold
- Marginal VaR: the change in portfolio VaR from a small change in a position's weight
- Estimated as the slope coefficient in a regression of changes in portfolio value on changes in the component's value

Step 3: Analyze Results
- Compare the new VaR estimates to the original
- Evaluate the sensitivity to each input change
- VaR sensitivity depends on: (1) the change in the position's share of the portfolio, and (2) the sensitivity of the position's value to the portfolio's value
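The regression in Step 2 can be sketched with a least-squares fit: regress changes in portfolio value on changes in a component's value and read off the slope. The data below is synthetic and the function name is an assumption for the example.

```python
import numpy as np

def marginal_slope(d_component, d_portfolio):
    """Slope coefficient from regressing changes in portfolio value on
    changes in a component's value (with an intercept), via least squares."""
    X = np.column_stack([np.ones_like(d_component), d_component])
    beta, *_ = np.linalg.lstsq(X, d_portfolio, rcond=None)
    return beta[1]

# Synthetic data: portfolio value changes driven by the component plus noise,
# with a true sensitivity of 0.6 (illustrative only)
rng = np.random.default_rng(0)
dc = rng.normal(0.0, 1.0, 500)
dp = 0.6 * dc + rng.normal(0.0, 0.1, 500)
print(f"estimated slope: {marginal_slope(dc, dp):.3f}")  # close to 0.6
```

A larger slope means portfolio value (and hence VaR) is more sensitive to that component, flagging it as a priority in the sensitivity analysis.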
-
Topic 3. Sensitivity Analysis for VaR
Handling Data Omissions
- Replace scarce data with proxies (though proxy volatility and correlations may differ)
- Reverse engineering: estimate a component's value change conditional on the portfolio's value change using historical relationships (this improves with larger datasets)
- Test the assumption that omitted positions and risk factors are immaterial
- Prioritize model updates for the most important risk factors

Benefits
- Risk assessment: identifies the most influential factors, enabling better risk management
- Model validation: tests robustness against changes in assumptions, enhancing credibility
- Regulatory compliance: demonstrates an understanding of model limitations and risks
- Improved decision-making: informs position adjustments and risk mitigation strategies
Practice Questions: Q2
Q2. Which of the following actions is least likely a benefit of sensitivity analysis?
A. Model validation.
B. Regulatory compliance.
C. Improved decision-making.
D. Reducing regulatory capital.
Practice Questions: Q2 Answer
Explanation: D is correct.
The benefits of sensitivity analysis include model validation, risk assessment, regulatory compliance, and improved decision-making. Regulatory capital depends on the level of VaR, which in turn depends on the risk exposures of the portfolio and not on sensitivity analysis, which validates the VaR model.
Topic 4. Confidence Intervals for VaR: Key Challenges
Data Quality and Availability
- Data quality: incomplete or erroneous historical data leads to unreliable estimates
- Nonnormal returns: fat tails and skewness complicate standard statistical methods

Model Assumptions
- Distribution choice: selecting an incorrect distribution causes significant estimation errors

Alternative Approaches
- Use the variance of pseudo-historical returns (rather than a single quantile point)
- Nonparametric methods: order statistics (ordered returns) or the bootstrap (sampling with replacement)
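The bootstrap approach can be sketched as: resample the return history with replacement, recompute the VaR estimate on each resample, and take a percentile interval of those estimates. This is a minimal illustration; the confidence level, sample sizes, and function name are assumptions for the example.

```python
import numpy as np

def bootstrap_var_ci(returns, c=0.01, n_boot=2000, ci=0.90, seed=1):
    """Percentile-bootstrap confidence interval for historical-simulation VaR.

    Resample returns with replacement, recompute the c-quantile VaR on each
    resample, and report the central `ci` interval of those estimates.
    """
    rng = np.random.default_rng(seed)
    n = len(returns)
    var_estimates = np.array([
        -np.quantile(rng.choice(returns, size=n, replace=True), c)
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(var_estimates, [(1 - ci) / 2, (1 + ci) / 2])
    return lo, hi

# Illustrative: 1,000 simulated daily returns
rets = np.random.default_rng(7).normal(0.0, 0.01, 1000)
lo, hi = bootstrap_var_ci(rets)
print(f"90% CI for the 99% VaR: [{lo:.4%}, {hi:.4%}]")
```

Because the VaR estimate sits deep in the tail, such intervals are generally not symmetric around the point estimate, and they tighten as more data is used.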
Historical Findings
- Confidence intervals are asymmetric and tighten with more data
- GARCH VaR produces tighter intervals than historical simulation VaR
- Stress periods generate wider intervals (less precision) for historical simulation
Practice Questions: Q3
Q3. Which of the following findings is incorrect regarding the empirical analysis of VaR confidence intervals?
A. Confidence intervals are not symmetric.
B. Larger datasets lead to tighter confidence intervals.
C. Order statistics produces tighter confidence intervals for VaR compared to bootstrap techniques.
D. GARCH VaR tends to produce much tighter confidence intervals compared to historical simulation VaR.
Practice Questions: Q3 Answer
Explanation: C is correct.
Confidence intervals are not symmetric. Using more data leads to tighter confidence intervals. GARCH VaR also tends to produce tighter confidence intervals compared to historical simulation VaR. Regarding order statistics and bootstrap techniques, neither approach consistently produces a tighter confidence interval than the other.
Topic 5. Benchmarking VaR Models
Current Practice and Limitations
- Benchmarking compares a bank's VaR model to another model (typically during transition periods)
- Banks rarely have formal benchmarking mechanisms because building additional models is resource-intensive
- Common approach: plot the output of the two models side by side (this reveals little beyond which model is more conservative)

Statistical Testing Issues
- Errors are not independently distributed because trading portfolios change frequently
- Most banks lack a second model to compare against
- Alternative: backtest positional VaR against actual P&L outcomes (or against a P&L GARCH VaR)

Empirical Findings: Positional VaR vs. P&L GARCH VaR
- Positional VaR is more conservative (because exceedances are linked to regulatory capital)
- Positional VaR rarely beats P&L GARCH VaR on accuracy, due to this conservativeness
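The backtest mentioned above reduces to counting exceedances: days on which the actual loss was larger than that day's VaR. A minimal sketch, with hypothetical P&L and a constant VaR level assumed for the example:

```python
import numpy as np

def count_exceedances(pnl, var_series):
    """Count days on which the actual loss exceeded that day's VaR.

    pnl        : daily P&L (negative values are losses)
    var_series : daily VaR estimates, reported as positive loss numbers
    """
    return int(np.sum(pnl < -var_series))

# Illustrative: a constant 2.3% VaR against 1,000 days of simulated P&L;
# at c = 1%, roughly 1% of days should be exceedances if the model is
# well calibrated, and materially more would raise regulatory capital.
rng = np.random.default_rng(3)
pnl = rng.normal(0.0, 0.01, 1000)
var_series = np.full(1000, 0.023)
print(count_exceedances(pnl, var_series), "exceedances out of 1000 days")
```

A more conservative (higher) VaR produces fewer exceedances, which is why positional VaR tends toward conservativeness at the cost of accuracy.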
Practice Questions: Q4
Q4. Which of the following statements regarding benchmarking VaR models is most accurate?
A. Benchmarking VaR models is not used for validating their performance.
B. Banks routinely benchmark their VaR model against several competing models.
C. In the statistical backtesting of VaR models, the errors are independently, but not identically, distributed.
D. Benchmarking is usually conducted for only a short time period during a bank’s transition to a new model.
Practice Questions: Q4 Answer
Explanation: D is correct.
Benchmarking is usually done for a short time period when the bank is planning on transitioning to a new model. Benchmarking VaR models is crucial for validating their performance and ensuring that they provide accurate risk assessments. Because trading portfolios change frequently, the errors in formal statistical backtesting used to conduct benchmark tests are not independently and identically distributed, especially for regression-based results.
In practice, banks rarely conduct benchmarking on an ongoing basis because of the time and resources needed to develop another VaR model to benchmark against.