Click to Download Workbook: Single Asset VaR and CVaR
Value at Risk (VaR) refers to the dollar (or percentage) amount that could be lost over a given day at a chosen confidence level. It is a widely used technique for ascertaining the risk of a given asset, and there are multiple methods for calculating it. This workbook dynamically calculates Historical Value at Risk and Conditional Value at Risk for a single asset across different time frames.
The first and most popular is historical VaR. A historical simulation is a nonparametric approach: rather than fitting a distribution, it relies directly on historical data. We take the prices of a given stock or portfolio of stocks over a given time frame, then calculate daily (or weekly or monthly) returns for the asset. From these returns we can build a distribution of historical returns using the Histogram tool within the Analysis ToolPak in Excel. The approach has inherent flaws, however, which we will come back to below.
Our input range will be our returns; we could use dollar amounts as well. In either scenario, the data needs to be sorted lowest to highest in order to determine our Value at Risk. Don't worry about the bin range for now; just know that you can create and refer to a range of bins you set yourself, in which case the returns will be distributed across those user-defined bins. Here I selected Cumulative Percentage since we are working with returns. Our output will look something like this:
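The prices-to-returns-to-bins pipeline above can be sketched outside Excel as well. Here is a minimal Python sketch; the price series and bin edges are made-up illustrative values, not data from the workbook:

```python
# Sketch of the Excel steps: prices -> daily returns -> cumulative histogram.
# The price series below is hypothetical, purely for illustration.
prices = [100.0, 101.5, 99.8, 102.3, 101.0, 103.6, 102.9]

# Daily simple returns, the same quantity the workbook computes per period.
returns = [prices[i] / prices[i - 1] - 1 for i in range(1, len(prices))]

# User-defined bin edges, analogous to the ToolPak's optional bin range.
bins = [-0.02, -0.01, 0.0, 0.01, 0.02, 0.03]

# Count how many returns fall at or below each bin edge, mirroring the
# "Cumulative Percentage" option in the Histogram tool.
cumulative_counts = [sum(r <= b for r in returns) for b in bins]
cumulative_pct = [c / len(returns) for c in cumulative_counts]
```

Sorting the returns and reading off the lowest values is what the VaR step below then relies on.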
We can now generate a graph representing this histogram output. A line chart works if desired, but a clustered column (bar) chart is the typical standard. The left side represents losses while the right side represents gains. We can examine this distribution and set a confidence level in order to calculate Value at Risk. The industry norm is typically 95%, with 99% and 99.99% as popular alternatives. What we are saying is that we have 95% confidence that our losses will not be worse than this level. We can represent this graphically and with the PERCENTILE function within Excel. The arguments for the PERCENTILE function are your range of returns (or dollar amounts) and the significance level, in this case 5% (that is, 1 minus the confidence level).
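The percentile step can be sketched in Python as well. This is a minimal sketch with hypothetical returns; the interpolation rule mimics Excel's PERCENTILE.INC, which is what the workbook's PERCENTILE call uses:

```python
# Historical VaR sketch: the 5th percentile of observed returns,
# analogous to Excel's PERCENTILE(range, 5%). Returns are hypothetical.
returns = [-0.032, -0.021, -0.015, -0.008, -0.002,
           0.001, 0.004, 0.009, 0.013, 0.018,
           0.022, 0.027, -0.011, 0.006, -0.005,
           0.003, 0.010, -0.001, 0.015, 0.008]

def historical_var(rets, confidence=0.95):
    """Empirical percentile with linear interpolation (PERCENTILE.INC style)."""
    s = sorted(rets)
    # PERCENTILE.INC rank: k = p * (n - 1), then interpolate between neighbors.
    k = (1 - confidence) * (len(s) - 1)
    lo, frac = int(k), k - int(k)
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

var_95 = historical_var(returns)  # negative: the 95% loss threshold
```

Raising the confidence level to 99% pushes the threshold further into the left tail, i.e., a larger estimated loss.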
The red line and outputted number represent our 95% confidence Value at Risk for Apple over the given time frame. Again, different confidence levels yield different answers. There are inherent flaws in the historical technique used here. The first is that this is merely a threshold on a range of potential losses: Historical VaR ignores all returns worse than the given confidence level, so we may get a skewed approximation of risk. With this said, we can also examine the distribution and conclude that it isn't entirely normal, even though the approach is commonly interpreted under an assumption of normality. The issue is that historical price data and returns tend to be quite messy. To illustrate the flaws in the Historical VaR approach, you can use the Descriptive Statistics tool within the Analysis ToolPak. Note the Skewness and Kurtosis numbers outputted; these will essentially tell you whether your distribution is normal. Skewness is a measure of symmetry within a data range, while kurtosis measures whether the data is heavy-tailed or light-tailed relative to the peak of the distribution. You should be looking for skewness around 0; the further away from 0, the less symmetric the distribution will be. Kurtosis should be somewhere around 3 if we are looking for normality within a distribution (note that Excel actually reports excess kurtosis, which is this value minus 3, so there the benchmark is 0). Being further away from this number could indicate a heavy-tailed or light-tailed distribution.
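The two moment checks can be computed directly from the standardized moments. A minimal Python sketch, using a deliberately symmetric, hypothetical return series so the expected results are easy to see:

```python
import statistics

# Normality check sketch: sample skewness and kurtosis, the same two
# quantities the Descriptive Statistics tool reports. Hypothetical,
# symmetric returns chosen so skewness should come out near 0:
returns = [-0.02, -0.01, 0.0, 0.01, 0.02]

mean = statistics.fmean(returns)
sd = statistics.pstdev(returns)  # population sd for the moment ratios
n = len(returns)

# Third and fourth standardized moments. Skewness near 0 means symmetry;
# kurtosis near 3 means normal-like tails.
skewness = sum(((r - mean) / sd) ** 3 for r in returns) / n
kurtosis = sum(((r - mean) / sd) ** 4 for r in returns) / n
# This symmetric series gives skewness ~ 0, but its kurtosis of 1.7 (< 3)
# flags light tails, so it would still fail a normality check.
```

A real return series typically fails on both counts at once: negative skew and kurtosis well above 3 (fat tails).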
In this workbook, we also calculate VaR not with the PERCENTILE formula but with the NORMINV formula, given the mean and standard deviation from the Descriptive Statistics output. This is another common method of calculating VaR; because it fits a normal distribution to the data, it is often called the parametric (or variance-covariance) approach rather than a purely historical one.
Notice the output differs from our prior calculation with the PERCENTILE function. The two formulas would only match if the data set were perfectly normal: NORMINV finds the 5th percentile of a perfectly normal distribution with a given mean and standard deviation, whereas PERCENTILE gives the 5th percentile of the actual data set regardless of normality. NORMINV was used in this model to illustrate the flaws of the traditional VaR approach compared to the CVaR approach.
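The divergence between the two formulas is easy to reproduce. A sketch with hypothetical, deliberately left-skewed returns, using Python's NormalDist.inv_cdf as the stand-in for NORMINV:

```python
from statistics import NormalDist, fmean, stdev

# Comparison sketch: parametric 5th percentile (NORMINV with the sample
# mean and sd) vs the empirical 5th percentile (PERCENTILE-style).
# Hypothetical returns with a fat left tail (one large loss):
returns = [-0.060, -0.030, -0.012, -0.008, -0.004,
           0.001, 0.003, 0.005, 0.007, 0.009,
           0.010, 0.011, 0.012, 0.013, 0.014,
           0.015, 0.016, 0.017, 0.018, 0.020]

mu, sigma = fmean(returns), stdev(returns)

# NORMINV(5%, mean, sd): the 5th percentile of the fitted normal curve.
norminv_var = NormalDist(mu, sigma).inv_cdf(0.05)

# PERCENTILE.INC-style 5th percentile of the raw data.
s = sorted(returns)
k = 0.05 * (len(s) - 1)
lo, frac = int(k), k - int(k)
percentile_var = s[lo] + frac * (s[lo + 1] - s[lo])
```

With this left-skewed series the empirical percentile sits below the normal-curve estimate: the fitted normal smooths away the fat left tail, which is exactly the flaw being illustrated.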
Conditional Value at Risk (CVaR), or Expected Shortfall, can be a more accurate measure of risk. Rather than using the entire data range, CVaR focuses on a small subset of it: the losses beyond the VaR threshold. It gives us an average expected loss in the tail and therefore takes more contingencies into account. To calculate Conditional Value at Risk, we first need the number of observations in our data set that fall in the tail, because we need to find the point corresponding with our CVaR confidence level of 95% (we could have done the same for Historical VaR). All we need to do is subtract the confidence level from 1 and multiply it by the number of observations in our data set. From there, we compute the 95% CVaR by taking 1 divided by that tail count and multiplying it by the sum of the worst returns up to the corresponding point, i.e., averaging the tail losses. This number will fluctuate with the selection of different time frames. Here is a picture to better explain.
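The two steps above, counting the tail observations and averaging them, can be sketched in a few lines. The returns are hypothetical:

```python
# CVaR sketch following the steps above: tail count = (1 - confidence) * n,
# then CVaR = average of the worst tail_count returns. Hypothetical data:
returns = [-0.045, -0.028, -0.019, -0.012, -0.007,
           -0.003, 0.001, 0.002, 0.004, 0.006,
           0.007, 0.009, 0.010, 0.011, 0.012,
           0.013, 0.014, 0.016, 0.018, 0.021]

def cvar(rets, confidence=0.95):
    """Average of the worst (1 - confidence) share of observed returns."""
    # round() guards against floating-point noise in (1 - confidence) * n;
    # max(1, ...) ensures at least one tail observation.
    tail_count = max(1, round((1 - confidence) * len(rets)))
    worst = sorted(rets)[:tail_count]
    return sum(worst) / tail_count

cvar_95 = cvar(returns)        # 5% of 20 obs -> average of the worst 1
cvar_90 = cvar(returns, 0.90)  # 10% of 20 obs -> average of the worst 2
```

Note how the tail count, and with it the CVaR figure, moves as the confidence level (or, in the workbook, the time frame) changes.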
CVaR accounts for all returns in the tail, which makes it a better representation in some ways. Note the difference in output between the CVaR and traditional approaches: the historical approach actually underestimated the loss compared to the CVaR approach.
Now that you understand the methodology, we can automatically calculate Historical VaR within Excel. The workbook is linked at the top of the post. Simply input a ticker symbol, then select a confidence level and a time frame. You can also set the number of observations as an alternative to selecting a time frame.
The output consists of several Historical VaR calculations at popular confidence levels, and CVaR is calculated and displayed at different confidence levels as well. The two approaches are then compared at the user-selected confidence level. In short, CVaR is typically the more accurate approximation of Value at Risk. The third and final approach to VaR, the Monte Carlo approach, will be discussed in a separate post.