NumPy, Pandas and Matplotlib basics

Image source: https://www.pexels.com/photo/business-charts-commerce-computer-265087/

So you are new to Python. Or perhaps you are already familiar with these libraries but want a quick refresher. Whatever the case may be, Python has without a doubt become one of the most popular programming languages today, as shown by the following graph from Stack Overflow Trends:


Okay, so you have come here because you want to learn Git/GitHub. So instead of writing a 10-line paragraph about it, let’s get straight to the point. I will keep updating this article in the future to make it as comprehensive yet concise as possible.


We have come pretty far in our analysis of univariate time series. So far, we have considered a time-based stochastic process X_{t} that depends on its own past values as well as on current and previous noise terms. However, what happens when, in addition, we would like to model our endogenous process as dependent on external factors, or exogenous variables? First, recall that the classical decomposition model states that
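In the usual notation, writing m_{t} for a slowly changing trend component, s_{t} for a seasonal component with known period, and Y_{t} for the stationary random noise component, this reads

X_{t} = m_{t} + s_{t} + Y_{t}.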


In the last article, we saw one important extension of the ARMA models: the Autoregressive Integrated Moving Average, or ARIMA, models, which formalize differencing and integrate it into the model. This time, we will see yet another very useful extension: the seasonal component, with the SARIMA models. But before we jump into the main topic, let’s recall the equation formulation of the ARMA(p,q) models in summation and operator forms.
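In summation form, with AR coefficients \alpha_{i}, MA coefficients \theta_{j}, and white noise Z_{t} \sim WN(0, \sigma^{2}) (standard symbols, matching the alphas and thetas used throughout this series), the model reads

X_{t} = \alpha_{1} X_{t-1} + \ldots + \alpha_{p} X_{t-p} + Z_{t} + \theta_{1} Z_{t-1} + \ldots + \theta_{q} Z_{t-q}.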

Autoregressive and Moving Average Operators
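In operator form, with B the backshift operator defined by B X_{t} = X_{t-1}, the same model can be written compactly as

\alpha(B) X_{t} = \theta(B) Z_{t},

where \alpha(B) = 1 - \alpha_{1} B - \ldots - \alpha_{p} B^{p} and \theta(B) = 1 + \theta_{1} B + \ldots + \theta_{q} B^{q} are the autoregressive and moving average operators, respectively.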


In the last section, we discussed model selection for ARMA(p,q) models using the AIC, AICc, and BIC, metrics based on the likelihood and the number of parameters that allow us to compare models fitted to the same data. In this article, we will revisit the ideas of differencing and seasonality that we previously studied, and see how these can be integrated into the ARMA model. Let’s start by reviewing some essential concepts from the Differencing section.

Differencing
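As a quick reminder, in backshift notation the first-order difference and the lag-d difference operators are defined as

\nabla X_{t} = X_{t} - X_{t-1} = (1 - B) X_{t}, \qquad \nabla_{d} X_{t} = X_{t} - X_{t-d} = (1 - B^{d}) X_{t}.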


AIC, AICc, and BIC metrics

In the last section, we learned about Gaussian Time Series, a powerful and flexible assumption when it comes to ARMA(p,q) parameter estimation. In this article, we will see how we can select the “best” model among a number of fitted models.
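For reference, these criteria are commonly defined as follows, where L denotes the maximized likelihood, k the number of estimated parameters, and n the number of observations (one common convention; the exact parameter count varies by author):

AIC = -2 \ln L + 2k
AICc = -2 \ln L + \frac{2kn}{n - k - 1}
BIC = -2 \ln L + k \ln n

In each case, lower values indicate a better trade-off between goodness of fit and model complexity.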

Maximum Likelihood Model Selection

In the previous section, we saw how Gaussian assumptions allow us to write down and maximize the likelihood of an ARMA(p,q) process to obtain parameter estimates for the thetas and alphas. That is, for any model, we are looking to find parameters such that
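In one common notation, with \alpha denoting the AR coefficients, \theta the MA coefficients, and \sigma^{2} the white noise variance, this can be sketched as

(\hat{\alpha}, \hat{\theta}, \hat{\sigma}^{2}) = \arg\max_{\alpha, \theta, \sigma^{2}} L(\alpha, \theta, \sigma^{2} \mid x_{1}, \ldots, x_{n}).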


The likelihood for a Gaussian Time Series

In the last two articles, we saw a number of methods to independently estimate AR(p) and MA(q) coefficients, namely the Yule-Walker method, Burg’s Algorithm, and the Innovations Algorithm, as well as the Hannan-Rissanen Algorithm, which jointly estimates ARMA(p,q) coefficients by making use of AR(p) and MA(q) coefficients initialized with the previous algorithms. We also mentioned that these methods, as sophisticated as they are, tend to perform quite poorly when it comes to dealing with real datasets, as it’s easy to misspecify the true model. Therefore, we would like to introduce yet another assumption: normality of the observations. …
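Recall that, under the usual causality assumption, an ARMA(p,q) process admits the MA(\infty) representation

X_{t} = \sum_{j=0}^{\infty} \psi_{j} Z_{t-j}, \qquad \psi_{0} = 1, \qquad \sum_{j=0}^{\infty} |\psi_{j}| < \infty,

where the coefficients \psi_{j} are determined by the AR and MA parameters.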


The ARMA(p,q) model implies that X_{t} can be expressed in the form above.

In the last article, we learned about two algorithms to estimate the AR(p) process coefficients: the Yule-Walker equations method and Burg’s algorithm. In this article, we will see a very simple way to determine the MA(q) process coefficients, and a first approach to estimating the ARMA(p,q) coefficients jointly. Let’s see how this works:

Estimation of MA(q) (Innovations)

As you may guess by the title, the way to estimate the MA(q) coefficients is… the Innovations Algorithm we saw before. Recall that the MA(q) process can be written as
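In the same notation as before, with MA coefficients \theta_{j} and white noise Z_{t} \sim WN(0, \sigma^{2}),

X_{t} = Z_{t} + \theta_{1} Z_{t-1} + \ldots + \theta_{q} Z_{t-q}.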


Burg’s Algorithm Estimation Formulas

In the last article, we discussed the extension of the Innovations algorithm to the more general ARMA(p,q) process, which allowed us to make predictions for an arbitrary number of timesteps into the future. However, we still haven’t seen how to estimate the actual ARMA(p,q) model coefficients. In this article, we will see two algorithms for estimating AR(p) coefficients, and in the next article, we will see how to estimate MA(q) coefficients and start looking into jointly estimating ARMA(p,q) coefficients. Let’s jump right into it!

Estimation of AR(p) :: Yule-Walker

In real-world problems, the ACVF is the easiest quantity to estimate from the sample data…
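Concretely, a sketch in standard notation, with \bar{x} the sample mean: the sample ACVF at lag h is

\hat{\gamma}(h) = \frac{1}{n} \sum_{t=1}^{n-h} (x_{t+h} - \bar{x})(x_{t} - \bar{x}), \qquad 0 \le h < n,

and the Yule-Walker estimates of the AR coefficients \alpha and the noise variance \sigma^{2} are obtained by solving

\hat{\Gamma}_{p} \hat{\alpha} = \hat{\gamma}_{p}, \qquad \hat{\sigma}^{2} = \hat{\gamma}(0) - \hat{\alpha}' \hat{\gamma}_{p},

where \hat{\Gamma}_{p} = [\hat{\gamma}(i - j)]_{i,j=1}^{p} and \hat{\gamma}_{p} = (\hat{\gamma}(1), \ldots, \hat{\gamma}(p))'.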


The recursive forecasting form of the Innovations algorithm.

We have come a long way, from first exploring the idea of models with too little or too much dependence, to the structured ARMA(p,q) models that aim to balance this by taking into account not only the dependence between observations, but also the dependence between their random noise terms at different timesteps. In the “Prediction II: Forecasting” section, we studied the best linear predictor along with two algorithms that help us find the BLP coefficients and make predictions: the Durbin-Levinson algorithm and the Innovations algorithm. In this article, we will see how to extend these ideas to produce predictions for ARMA(p,q) models. Before starting…
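For reference, the recursive one-step forecasting form of the Innovations algorithm mentioned above can be sketched, in standard notation, as

\hat{X}_{n+1} = \sum_{j=1}^{n} \theta_{nj} (X_{n+1-j} - \hat{X}_{n+1-j}), \qquad n \ge 1, \qquad \hat{X}_{1} = 0,

where the \theta_{nj} are the innovations coefficients (not the MA parameters), computed recursively from the autocovariance function along with the mean squared errors v_{n}.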

Hair Parra

Data Scientist & Data Engineer at Cisco, Canada. McGill University CS, Stats & Linguistics graduate. Polyglot.
