In the last article, we saw one important and useful extension to the ARMA models: the Autoregressive Integrated Moving Average, or ARIMA, models, which formalize and integrate the differencing step into the model. This time, we will see yet another very useful extension: a seasonal component, with the SARIMA models. But before we jump into the main topic, let’s recall the formulation of the ARMA(p,q) models in summation and operator form.
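As a quick recap (written here with alphas for the autoregressive coefficients and thetas for the moving-average coefficients, matching the convention used earlier in this series; the article's own display may differ slightly), the ARMA(p,q) equation in summation form and in operator form is:

$$X_t - \alpha_1 X_{t-1} - \dots - \alpha_p X_{t-p} = Z_t + \theta_1 Z_{t-1} + \dots + \theta_q Z_{t-q}, \qquad \{Z_t\} \sim WN(0, \sigma^2)$$

$$\alpha(B)X_t = \theta(B)Z_t, \qquad \alpha(z) = 1 - \alpha_1 z - \dots - \alpha_p z^p, \qquad \theta(z) = 1 + \theta_1 z + \dots + \theta_q z^q$$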
In the last section, we discussed model selection for ARMA(p,q) models using the AIC, AICc, and BIC, metrics based on the likelihood and the number of parameters that provide a measure for comparing models fitted to the same data. In this article, we will revisit the ideas of differencing and seasonality that we previously studied, and see how they can be integrated into the ARMA model. Let’s start by reviewing some essential concepts from the Differencing section.
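For reference, the standard definitions of these criteria, with L-hat the maximized likelihood, k the number of estimated parameters, and n the number of observations, are:

$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}$$

Lower values indicate a better trade-off between goodness of fit and model complexity.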
In the last section, we learned about Gaussian time series, a powerful and flexible assumption when it comes to ARMA(p,q) parameter estimation. In this article, we will see how we can select the “best” model among a number of fitted models.
In the previous section, we saw how the Gaussian assumption allows us to write down and maximize the likelihood of an ARMA(p,q) process to obtain parameter estimates for the thetas and alphas. That is, for any model, we are looking to find parameters such that…
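Written generically (the exact display the article continues with may differ), the maximum-likelihood criterion is:

$$(\hat{\alpha}_1, \dots, \hat{\alpha}_p, \hat{\theta}_1, \dots, \hat{\theta}_q, \hat{\sigma}^2) = \arg\max_{\alpha,\, \theta,\, \sigma^2} L(\alpha_1, \dots, \alpha_p, \theta_1, \dots, \theta_q, \sigma^2 \mid x_1, \dots, x_n)$$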
In the last two articles, we saw a number of methods to independently estimate AR(p) and MA(q) coefficients, namely the Yule-Walker method, Burg’s Algorithm, and the Innovations Algorithm, as well as the Hannan-Rissanen Algorithm, which jointly estimates ARMA(p,q) coefficients by making use of AR(p) and MA(q) coefficients initialized with the previous algorithms. We also mentioned that these methods, as sophisticated as they are, tend to perform quite poorly when it comes to dealing with real datasets, since it is easy to misspecify the true model. Therefore, we would like to introduce yet another assumption: normality of the observations. …
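To make the recap concrete, here is a minimal numpy sketch of the two-step Hannan-Rissanen idea: fit a long AR model by least squares to approximate the unobserved noise, then regress the series on its own lags and the lagged residuals. The function name, the long-AR order heuristic, and the least-squares shortcut for step one are illustrative choices of mine, not taken from the article.

```python
import numpy as np

def hannan_rissanen(x, p, q, m=None):
    """Sketch of the two-step Hannan-Rissanen ARMA(p, q) estimator.

    Assumes p >= 1 and q >= 1. Illustrative only: a production routine
    would add stability checks and possibly iterate the second step.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = max(p, q) + 5  # heuristic order for the long AR approximation

    # Step 1: long AR(m) fitted by ordinary least squares.
    Y = x[m:]
    Z = np.column_stack([x[m - j: n - j] for j in range(1, m + 1)])
    ar_long = np.linalg.lstsq(Z, Y, rcond=None)[0]
    resid = np.zeros(n)
    resid[m:] = Y - Z @ ar_long  # proxy for the unobserved white noise

    # Step 2: regress x_t on p lags of x and q lags of the residuals.
    start = m + q
    Y2 = x[start:]
    X_lags = np.column_stack([x[start - j: n - j] for j in range(1, p + 1)])
    E_lags = np.column_stack([resid[start - j: n - j] for j in range(1, q + 1)])
    beta = np.linalg.lstsq(np.hstack([X_lags, E_lags]), Y2, rcond=None)[0]
    return beta[:p], beta[p:]  # (alpha_hat, theta_hat)
```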
In the last article, we learned about two algorithms to estimate the AR(p) process coefficients: the Yule-Walker equations method and Burg’s algorithm. In this article, we will now see a very simple way to determine the MA(q) process coefficients, and a first approach to jointly estimating the ARMA(p,q) coefficients. Let’s see how this works:
As you may guess by the title, the way to estimate the MA(q) coefficients is… the Innovations Algorithm we saw before. Recall that the MA(q) process can be written as
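(The teaser cuts off here; the standard form is $X_t = Z_t + \theta_1 Z_{t-1} + \dots + \theta_q Z_{t-q}$.) As a rough illustration of the estimation idea, here is a numpy sketch that runs the innovations recursion on the sample ACVF and reads off the MA(q) estimates; the function name and the choice of the recursion depth m are my own assumptions, not the article's.

```python
import numpy as np

def innovations_ma(x, q, m=None):
    """Sketch of the innovations-algorithm estimator for MA(q) coefficients.

    Runs the innovations recursion on the sample autocovariances and returns
    (theta_{m,1}, ..., theta_{m,q}) and v_m for a suitably large m > q.
    """
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    if m is None:
        m = min(n - 1, q + 20)  # heuristic recursion depth, m > q

    # Sample ACVF gamma_hat(0), ..., gamma_hat(m).
    gamma = np.array([np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n
                      for h in range(m + 1)])

    v = np.zeros(m + 1)
    theta = np.zeros((m + 1, m + 1))  # theta[k, j] holds theta_{k, j}
    v[0] = gamma[0]
    for k in range(1, m + 1):
        for j in range(k):
            s = sum(theta[j, j - i] * theta[k, k - i] * v[i] for i in range(j))
            theta[k, k - j] = (gamma[k - j] - s) / v[j]
        v[k] = gamma[0] - sum(theta[k, k - i] ** 2 * v[i] for i in range(k))

    return theta[m, 1:q + 1], v[m]  # (theta_hat, sigma2_hat)
```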
In the last article, we discussed the extension of the Innovations algorithm to the more general ARMA(p,q) process, which allowed us to make predictions for an arbitrary number of timesteps into the future. However, we still haven’t seen how to estimate the actual ARMA(p,q) model coefficients. In this article, we will see two algorithms for estimating AR(p) coefficients, and in the next article, we will see how to estimate MA(q) coefficients and start taking a look at jointly estimating ARMA(p,q) coefficients. Let’s jump right into it!
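As a preview of the first of these, a bare-bones Yule-Walker estimator fits in a few lines of numpy: build the Toeplitz matrix of sample autocovariances and solve the Yule-Walker equations. This is only a sketch under my own naming, not the article's code.

```python
import numpy as np

def yule_walker(x, p):
    """Sketch of the Yule-Walker estimates for an AR(p) model.

    Solves Gamma_p * alpha = gamma_p, where Gamma_p is the Toeplitz matrix
    of sample autocovariances, and returns (alpha_hat, sigma2_hat).
    """
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    g = np.array([np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n
                  for h in range(p + 1)])
    Gamma = np.array([[g[abs(i - j)] for j in range(p)] for i in range(p)])
    alpha = np.linalg.solve(Gamma, g[1:])
    sigma2 = g[0] - alpha @ g[1:]
    return alpha, sigma2
```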
In real-world problems, the ACVF is the easiest quantity to estimate from the sample data…
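The usual estimator is the sample ACVF, $\hat\gamma(h) = \frac{1}{n}\sum_{t=1}^{n-h}(x_{t+h} - \bar{x})(x_t - \bar{x})$, which translates directly into code (a small sketch with illustrative names):

```python
import numpy as np

def sample_acvf(x, h):
    """Sample autocovariance at lag h >= 0."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    return np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n

def sample_acf(x, h):
    """Sample autocorrelation at lag h: gamma_hat(h) / gamma_hat(0)."""
    return sample_acvf(x, h) / sample_acvf(x, 0)
```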
We have come a long way from first exploring the idea of models with far too little or too much dependence, to the structured ARMA(p,q) models that aim to strike a balance by taking into account not only the dependence between observations, but also the dependence between their random noise terms at different timesteps. In the “Prediction II: Forecasting” section, we studied the best linear predictor along with two algorithms to help us find the BLP coefficients and make predictions: the Durbin-Levinson algorithm and the Innovations algorithm. In this article, we will see how to extend these ideas to produce predictions for ARMA(p,q) models. Before starting…
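For a concrete refresher on the first of those two algorithms, here is a compact numpy sketch of the Durbin-Levinson recursion; the interface (autocovariances in, one-step BLP coefficients and mean squared errors out) is an illustrative choice of mine.

```python
import numpy as np

def durbin_levinson(gamma, n):
    """Sketch of the Durbin-Levinson recursion.

    gamma : array of autocovariances gamma(0), ..., gamma(n)
    Returns the one-step best-linear-predictor coefficients phi_{n,1..n}
    and the mean squared prediction errors v_0, ..., v_n.
    """
    phi = np.zeros((n + 1, n + 1))  # phi[k, j] holds phi_{k, j}
    v = np.zeros(n + 1)
    v[0] = gamma[0]
    for k in range(1, n + 1):
        num = gamma[k] - sum(phi[k - 1, j] * gamma[k - j] for j in range(1, k))
        phi[k, k] = num / v[k - 1]
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        v[k] = v[k - 1] * (1.0 - phi[k, k] ** 2)
    return phi[n, 1:n + 1], v
```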
In the last article, we discussed the stationarity, causality, and invertibility properties of ARMA(p,q) processes, along with the conditions required to ensure them and how to verify them. In this article, we will see how these properties, in particular stationarity and causality, greatly simplify our task of finding the ACVF, ACF, and PACF.
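The key simplification is the standard result that a causal ARMA(p,q) process admits the MA(∞) representation $X_t = \sum_{j \ge 0} \psi_j Z_{t-j}$, so the ACVF and ACF reduce to sums over the psi-weights:

$$\gamma(h) = \sigma^2 \sum_{j=0}^{\infty} \psi_j \psi_{j+|h|}, \qquad \rho(h) = \frac{\gamma(h)}{\gamma(0)}$$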
Recall from this article that a linear process is nothing more than a stationary time series that has the representation
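In standard notation (the article's own display may differ slightly), that representation is:

$$X_t = \sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}, \qquad \{Z_t\} \sim WN(0, \sigma^2), \qquad \sum_{j=-\infty}^{\infty} |\psi_j| < \infty$$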
In the last article, we saw that a general ARMA(p,q) process can be written, with the help of the autoregressive and moving-average operators, as
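In the usual operator notation (again using alphas for the AR coefficients and thetas for the MA coefficients), that is:

$$\alpha(B)X_t = \theta(B)Z_t, \qquad \alpha(z) = 1 - \alpha_1 z - \dots - \alpha_p z^p, \qquad \theta(z) = 1 + \theta_1 z + \dots + \theta_q z^q$$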
Perhaps one of the most famous and best-studied approaches to working with time series, still widely used today, is the family of ARMA(p,q) models and their derivatives. As you can guess, these essentially generalize the AR(1) and MA(1) processes that we have previously seen. Before we start, let’s introduce some useful operators that will allow us to simplify our notation.
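The central one is the backshift (lag) operator B (the article may introduce others as well); its standard definition, along with the difference operator built from it, is:

$$BX_t = X_{t-1}, \qquad B^j X_t = X_{t-j}, \qquad \nabla X_t = X_t - X_{t-1} = (1 - B)X_t$$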