by Idriss Tsafack

GARCH models with R programming: a practical example with TESLA stock

Hey there! Hope you are doing great! In this post I will show how to use GARCH models with R programming.

In my previous blog post titled "ARMA models with R: the ultimate practical guide with Bitcoin data", I talked about ARMA models and how to estimate them. One of the assumptions of the ARMA model is that the error terms are either strongly or weakly stationary.

The problem is that, in real life, this assumption is not always satisfied. Indeed, when looking at financial data such as stock market data (AAPL, TSLA, GOOGL), currency data (EUR/USD, GBP/USD), indices data (S&P 500, DAX 30, US30, NASDAQ 100, etc.) or cryptocurrency data, the error term usually displays a sort of stochastic variation of its volatility over time. This means that imposing the stationarity assumption leads to a misspecified model estimation and therefore to a bad forecast.

GARCH models are the ones usually used to capture such heteroskedasticity of the error terms and the stochastic change of their volatility. In this post, I will describe a simplified version of the GARCH model, show how to estimate such a model setting, how to interpret the results and how to find the optimal specification.

GARCH Model Setting

GARCH stands for Generalized Autoregressive Conditional Heteroskedasticity. GARCH models are commonly used to estimate the volatility of returns for stocks, currencies, indices and cryptocurrencies. Professional traders use this tool to price assets and detect which asset will potentially provide the best return in their portfolio. They can also use it to adjust their portfolio allocation and risk management.

There exists a large variety of GARCH models: Standard GARCH (SGARCH), Nonlinear GARCH (NGARCH), Nonlinear Asymmetric GARCH (NAGARCH), Integrated GARCH (IGARCH), Exponential GARCH (EGARCH), GARCH in Mean (GARCH-M), Quadratic GARCH (QGARCH), Glosten-Jagannathan-Runkle GARCH (GJR-GARCH), Threshold GARCH (TGARCH), Family GARCH (FGARCH), Continuous-time GARCH (COGARCH), Zero-drift GARCH (ZD-GARCH), etc. I will present only two of these variants: the standard GARCH and the GJR-GARCH models.

The standard GARCH Model

To set up the GARCH model, we first need to know how the ARCH model is specified. So let us consider the error term e[t], i.e. the residual from the demeaned return. This error term is decomposed into two main terms, a stochastic term z[t] and a time-dependent standard deviation s[t], such that:

R[t] = mu + e[t]

e[t] = s[t]*z[t].

R[t] is the variable representing the time series of the returns of the stock considered, mu is the mean and e[t] is the error term. The variable z[t] is assumed to be a strong white noise process. If we consider that q is the number of lags for the volatility modelling (ARCH(q)), then we have

s[t]^2 = w + a[1]*e[t-1]^2 + a[2]*e[t-2]^2 + ... + a[q]*e[t-q]^2.

Therefore, an ARCH(q) model means that the time-dependent volatility depends on the first q lag squared values of the error term.

Then, based on the ARCH(q) model, we can define the model setting of the GARCH. Indeed, the GARCH model is considered when the conditional variance s[t]^2 is assumed to follow an ARMA process. In that situation, we obtain the GARCH(p,q) model, with p the number of lags of the s[t]^2 terms and q the number of lags of the ARCH terms e[t]^2.

Therefore, the main difference between the GARCH and ARCH models is that the GARCH model also considers the volatility of the previous periods, while the ARCH model does not. This is truly important, as in financial markets we usually observe mean-reverting patterns in the instruments, and this mean reversion can in some cases happen within a certain average range, meaning that the volatility of the previous periods should be considered.

Then, a GARCH(1,1) model is such that

s[t]^2 = w + a[1]*e[t-1]^2 + b[1]*s[t-1]^2,

and the ARCH(1) model is nothing else than the GARCH(0,1) model.

The particularity of the standard GARCH model is that the conditional error term is assumed to follow a normal distribution. This is not always the case for all types of data: financial data usually appear more skewed. Therefore, we should also check whether the residuals follow that pattern. The GARCH model with a skewed Student's t-distribution (sstd) is usually considered as an alternative to the normal distribution, in order to check whether we obtain a better model fit.

The GJR-GARCH model

This variant of the GARCH model was developed in 1993 by Glosten, Jagannathan and Runkle. The main idea of the model is to consider a quadratic GARCH specification in which z[t] is i.i.d. and negative shocks of the error term are given a specific, additional impact on volatility.

Then, the volatility is modeled as follows:

s[t]^2 = w + (a[1] + g[1]*I[t-1])*e[t-1]^2 + b[1]*s[t-1]^2,

where I[t-1] = 1 if e[t-1] < 0 and I[t-1] = 0 otherwise, so that negative shocks receive the extra weight g[1].

In this case too, we can consider either the normal or the Student distribution for the error term. For more details about the algebra related to this model, check the book by Hamilton (1994) or the one by Gourieroux and Jasiak (2001) titled "Financial Econometrics: Problems, Models and Methods".

Model Estimation

The estimation of the GARCH model is very simple. Indeed, considering a GARCH(p,q) model, we have 4 steps:

  1. Estimate the AR(q) model for the returns and get the residuals e[t].

  2. Construct the time series of the squared residuals, e[t]^2.

  3. Compute and plot the autocorrelation of the squared residuals e[t]^2.

  4. Estimate the ARMA(p,q) model for the volatility s[t] of the residuals based on one of the specified model settings.

Real data Applications

In this section we will run an application of the GARCH model. For this purpose, we will use the TESLA stock.

The first step of this operation is to load the important packages related to the topic, that are: "quantmod" for financial data scraping, "rugarch" for GARCH model specification and estimation, "xts" for time series manipulation and "PerformanceAnalytics" to analyze the performance of our model settings. Here is the related code.
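The loading step might look like the following sketch (only the four packages named above are assumed):

```r
# Load the required packages
library(quantmod)              # financial data scraping
library(xts)                   # time series manipulation
library(PerformanceAnalytics)  # performance and volatility analysis
library(rugarch)               # GARCH specification and estimation
```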

Once we have loaded the different packages, we can get the symbol of TESLA, called "TSLA" and check the first 6 rows of the data frame. To get the TSLA data, we use the function getSymbols(). Here is the code.
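A possible sketch of this step (the object name df and the exact date range are assumptions based on the text below):

```r
# Download TSLA daily prices from Yahoo Finance
getSymbols("TSLA", src = "yahoo", from = "2010-01-01", to = "2020-12-31")
df <- TSLA   # keep a working copy of the scraped data

# Preview the first six rows
head(df)
```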

head(df) is used to preview the first six rows of the data frame. We have scraped the data from January 2010 to December 2020. The data frame displays, for each day, the open, high, low and close prices, the volume and the adjusted close price of the stock.

Now we can display the timeseries of TSLA stock and its related volume. For this we use the code chartSeries(TSLA) (see the previous displayed code).

On the figure, the first subplot displays the Open, High, Low and Close prices, all condensed into a candlestick for each day, while the other subplot presents the volume. We can also display the graph for only one month in order to see clearly what is happening each day. Here is the code.
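This zoom might be obtained with chartSeries()'s subset argument (a sketch, assuming the month shown below):

```r
# Zoom in on a single month (December 2020)
chartSeries(TSLA, subset = "2020-12")
```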

We obtain the following graph for the month of December 2020.

Each candle represents summary statistics of the price for one day. A red candle means that we had a negative return, i.e. a decrease of the price, while a green candle means that the price increased over the considered day.

The next step is to calculate the daily returns of the price and display them. For the return calculation, we use the function CalculateReturns(). Here is the related code.
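A minimal sketch, assuming the returns are computed on the adjusted close column and stored in a variable named ret:

```r
# Daily returns from the adjusted close price
ret <- CalculateReturns(TSLA$TSLA.Adjusted)
ret <- ret[-1]   # drop the first observation, which is NA

# Display the time series of returns
chartSeries(ret)
```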

We use again the function chartSeries() in order to display the time series of the returns. Here is the graph of the returns.

As we can see, the time series of returns is zero-mean and displays very high volatility on some random days, meaning that the standard stationarity assumption won't work here.

Now we can display the histogram of the returns and try to see whether the normal distribution could be used for the conditional error term.
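One way to produce such a histogram with PerformanceAnalytics (a sketch; ret is the assumed name of the returns series):

```r
# Histogram of returns with kernel density and fitted normal curve
chart.Histogram(ret,
                methods = c("add.density", "add.normal"),
                colorset = c("blue", "green", "red"))
```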

As we can see, the histogram of the returns seems to be more skewed than the normal distribution, meaning that assuming a normal distribution for the returns is not a good choice. The Student distribution tends to be better adapted to this data. We will see whether that is confirmed by the model estimation.

The next step is to calculate the annualized volatility and the rolling-window volatility of the returns. This can be done at the daily, monthly, quarterly frequency, etc. Here is the code for the monthly frequency, with width = 22 trading days (use 252 for the yearly frequency).
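This step might be sketched as follows with PerformanceAnalytics (ret is the assumed returns series):

```r
# Annualized volatility over the whole sample (252 trading days per year)
sd(ret) * sqrt(252)

# Rolling one-month (22 trading days) annualized volatility
chart.RollingPerformance(R = ret,
                         width = 22,
                         FUN = "sd.annualized",
                         scale = 252,
                         main = "TSLA rolling monthly volatility")
```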

Here is the related graph

Based on this graph, we can see that there are months with very high volatility and months with very low volatility, suggesting a stochastic model for the conditional volatility.

Now we can run the GARCH model. We start with the standard GARCH model, where we consider that the conditional error term follows a normal distribution. We use the function ugarchspec() for the model specification and ugarchfit() for the model fitting. For the standard GARCH model, we specify a constant-mean ARMA model, which means that armaOrder = c(0,0). We consider the GARCH(1,1) model, and the distribution of the conditional error term is the normal distribution.
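A sketch of this specification with rugarch (the object names spec_norm and fit_norm are assumptions):

```r
# Standard GARCH(1,1), constant mean, normal conditional errors
spec_norm <- ugarchspec(mean.model = list(armaOrder = c(0, 0)),
                        variance.model = list(model = "sGARCH",
                                              garchOrder = c(1, 1)),
                        distribution.model = "norm")
fit_norm <- ugarchfit(data = ret, spec = spec_norm)
fit_norm   # print the estimation results
```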

Here are the results of the estimation of the standard GARCH(1,1) model.

The next part of the results

The first table of the first part of the estimation (see table named "Optimal Parameters") shows the optimal estimated parameters and their significance.

It shows that the constant parameter omega (the parameter w in the model setting) tends to be non-significant, meaning that the constant seems not to be useful in this model setting.

The second table presents the information criteria (see table named "Information criteria"). It displays the Akaike (AIC), Bayes (BIC), Hannan-Quinn and Shibata criteria for the model estimation. The lower these values, the better the model is in terms of fitting.

The next table presents the Ljung-Box test for serial correlation of the error terms. The null hypothesis is that there is no serial correlation. The decision rule is simple: if the p-value is lower than 5%, the null hypothesis is rejected. Here the p-value is higher than 5%, meaning that there is not enough evidence to reject the null hypothesis; hence there is no serial correlation in the error terms.

Another interesting table to check is the last one (see table named "Adjusted Pearson Goodness of Fit"), concerning the goodness of fit of the errors. Indeed, it is useful to check whether the error term follows the normal distribution. The null hypothesis is that the conditional error term follows a normal distribution; if the p-value is lower than 5%, the null hypothesis is rejected. As we can see, the normal distribution is by far rejected (the p-value is close to zero).

Here are the other plots showing the performance of the model, similar to the results presented in the tables. At the bottom left we can see the QQ-plot (see the graph at the intersection of the third row and first column): it shows that the residuals are not perfectly aligned with the straight line, meaning that the residuals do not follow the normal distribution. This result is also confirmed by the plot of the residuals' kernel density against the normal distribution (see the graph at the intersection of the second row and fourth column).

At the bottom right we can observe the news impact curve of the volatility: the impact on volatility is positive whether the news itself is positive or negative.

Furthermore, we have the graph of the autocorrelation function of the residuals (first row and fourth column) and other related plots. I also invite you to check all those plots for your analysis.

The GARCH model with Skewed student distribution

As we saw in the previous estimation, with real data the residuals usually do not fit the normal distribution. Now we consider that the residuals could be more skewed and assume that they follow a skewed Student's t-distribution. The only change in the model specification is the error distribution: distribution.model = "sstd".

Here is the code
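A sketch of this variant (same assumed object-naming convention as before):

```r
# GARCH(1,1) with skewed Student's t conditional errors
spec_sstd <- ugarchspec(mean.model = list(armaOrder = c(0, 0)),
                        variance.model = list(model = "sGARCH",
                                              garchOrder = c(1, 1)),
                        distribution.model = "sstd")
fit_sstd <- ugarchfit(data = ret, spec = spec_sstd)
fit_sstd
```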

And we obtain the results

and the part (2) of the results

Now, we can see in the last table that the p-values are higher than 0.05, meaning that the skewed Student distribution is a good fit for the error term. Also, the AIC, BIC and Hannan-Quinn values are lower than those obtained from the previous setting (the normal distribution case).

We obtain also the following additional plots.

Now, we can see that the QQ-plot shows a distribution more aligned with the straight line, and the return distribution follows the Student distribution. The kernel density of the returns fits the Student distribution almost perfectly.

The GJR-GARCH model estimation

The GJR-GARCH model is estimated by replacing the term "sGARCH" with "gjrGARCH" in the model specification passed to the function ugarchspec(). Here is the code.
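A sketch of this change (object names assumed, keeping the skewed Student's t errors from the previous step):

```r
# GJR-GARCH(1,1) with skewed Student's t conditional errors
spec_gjr <- ugarchspec(mean.model = list(armaOrder = c(0, 0)),
                       variance.model = list(model = "gjrGARCH",
                                             garchOrder = c(1, 1)),
                       distribution.model = "sstd")
fit_gjr <- ugarchfit(data = ret, spec = spec_gjr)
fit_gjr
```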

Here are the results

Part (2)

This setting displays a non-significant drift for the volatility and a non-significant gamma for the GJR-GARCH. Here are the other plots.

The Optimal GARCH model setting for the TESLA stock

After analyzing the different models, we observed that the GJR-GARCH(0,1) model, or GJR-ARCH(1) model, seems to work best for the TESLA stock.

Here is the code and the results.
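A sketch of this final specification; note that in ugarchspec() the garchOrder is given as c(ARCH lags, GARCH lags), so the GJR-ARCH(1) here is assumed to correspond to garchOrder = c(1, 0):

```r
# GJR-ARCH(1): one ARCH lag, no GARCH lag, skewed Student's t errors
spec_final <- ugarchspec(mean.model = list(armaOrder = c(0, 0)),
                         variance.model = list(model = "gjrGARCH",
                                               garchOrder = c(1, 0)),
                         distribution.model = "sstd")
fit_final <- ugarchfit(data = ret, spec = spec_final)
fit_final
```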

Based on this model setting, we can see that all the parameters of the model are statistically significant. Indeed, their p-values are lower than 5% (see table named "Optimal Parameters" below).

Also, the Akaike (AIC), Bayes (BIC), Hannan-Quinn and Shibata criteria are lower than those observed for the other model settings (see table named "Information Criteria").

When testing for the presence of serial correlation in the residuals, we can see that the p-value is greater than 5% for the different settings considered, meaning that there is no serial correlation in the residuals (see table named "Weighted Ljung-Box Test on Standardized Residuals").

Furthermore, the global test of the ARCH model shows that the ARCH model is globally significant as its global p-value is close to zero (see table named "Weighted ARCH LM Tests").

For the goodness of fit of the residuals to the considered skewed Student distribution, we can see that the p-value is greater than 5%, meaning that there is not enough evidence to reject the hypothesis that the residuals fit that distribution well (see table named "Adjusted Pearson Goodness-of-Fit Test").

To inspect those statistics visually, we can also rely on the different plots below. For example, to check the goodness of fit of the residuals, we can observe the QQ-plot (see the graph located at the intersection of the third row and first column) or the plot of the residuals' distribution (see the graph located at the intersection of the second row and fourth column).

When we run the forecast of the volatility for the next 20 days, we can observe that, based on this model, we expect the volatility of TESLA to potentially increase over the next 5 days and remain at the same level for the remaining days, as shown in the graph below.
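The forecast step might look like this sketch (fit object name assumed; in rugarch's plot method for forecasts, which = 3 selects the sigma, i.e. volatility, forecast):

```r
# Forecast the conditional volatility over the next 20 trading days
fc <- ugarchforecast(fit_final, n.ahead = 20)
fc
plot(fc, which = 3)   # sigma (volatility) forecast
```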

There is still a lot to learn about GARCH models. You can check the book by Hamilton (1994) or the one by Gourieroux and Jasiak (2001) titled "Financial Econometrics: Problems, Models and Methods".

References :

1. Bollerslev, T., Chou, R. Y., and Kroner, K. F. (1992) ARCH modelling in finance. Journal of Econometrics, 52, 5–59.

2. Bollerslev, T., Engle, R. F., and Nelson, D. B. (1994) ARCH models, In Handbook of Econometrics, Vol IV, Engle, R.F., and McFadden, D.L., Elsevier, Amsterdam

3. Gourieroux, C. and Jasiak, J. (2001) Financial Econometrics: Problems, Models and Methods, Princeton University Press, Princeton, NJ.

4. Hamilton, J. D. (1994) Time Series Analysis, Princeton University Press, Princeton, NJ.

5. Heston, S., and Nandi, S. (2000) A closed form GARCH option pricing model. The Review of Financial Studies, 13, 585–625.

6. Rossi, P. E. (1996) Modelling Stock Market Volatility, Academic Press, San Diego.

7. Tsay, R. S. (2005) Analysis of Financial Time Series, 2nd ed., Wiley, New York.

So that's all for this post. There are many other features that we could explore, but we prefer to keep them for another post. I hope you liked it. If you did, please share it with your friends and your machine learning and data science community.

See you on the next post.
