Autoregressive Moving Average Models

Oh, Hyunzi. (email: wisdom302@naver.com)
Korea University, Graduate School of Economics.


Main References

  • Kim, Dukpa. (2022). "Time Series Econometrics" (2022 Fall) ECON 512, Department of Economics, Korea University.
  • Hamilton, J. D. (1994). "Time Series Analysis". Princeton University Press.

Please refer to Difference Equations of Lag Operators if you have any difficulty in understanding the operations on lag polynomials.

ARMA Process and Stationarity

ARMA Process

Definition (ARMA process).

The process $\{Y_t\}$ is said to be an $ARMA(p,q)$ process if, for every $t$,
$$Y_t = \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q},$$
where $e_t \sim WN(0, \sigma^2)$. Specifically, we say that $\{Y_t\}$ is an $ARMA(p,q)$ process with mean $\mu$ if $\{Y_t - \mu\}$ is an $ARMA(p,q)$ process.

Definition 11 (Lag polynomial).

We denote $L$ as the lag operator, which is $L Y_t = Y_{t-1}$, and we simply denote $\phi(L)$ and $\theta(L)$ as the lag polynomials where
$$\phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p, \qquad \theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q.$$

Using the lag polynomials, we can let $e_t \sim WN(0, \sigma^2)$ and then rewrite the $ARMA(p,q)$ process as
$$\phi(L) Y_t = \theta(L) e_t.$$

Remark (AR and MA process).

Note that from the $ARMA(p,q)$ process, we have two special cases:

  • $ARMA(p,0)$ is referred to as $AR(p)$, which is $\phi(L) Y_t = e_t$.
  • $ARMA(0,q)$ is referred to as $MA(q)$, which is $Y_t = \theta(L) e_t$.
Remark (persistent process).

A process is called persistent if the present value depends heavily on its past values. For instance, in the $AR(1)$ process $Y_t = \phi Y_{t-1} + e_t$, the process is called persistent if $\phi$ is close to one.

Note that the $MA(q)$ process is always stationary by Wold Decomposition > Theorem 7 (stationary linear process). Also, if invertibility is assumed, then the $MA(q)$ process makes the analysis easier, as we have $\theta(L)^{-1} Y_t$ on the right-hand side, meaning that $e_t = \theta(L)^{-1} Y_t$, or $e_t = \theta(L)^{-1} (Y_t - \mu)$ if the mean of $Y_t$ is nonzero (this will be shown in a later part of this note). However, the $AR(p)$ process is not always stationary, and the explanation is given below.

Stationarity of AR(1) Process

Consider an $AR(1)$ process $Y_t = \phi Y_{t-1} + e_t$ where $e_t \sim WN(0, \sigma^2)$.

As we have discussed in Wold Decomposition > Stationary Linear Process, our main interest is in the conditions under which we can rewrite $Y_t$ as a linear process with square-summable coefficients.

Since we can recursively rewrite the process as
$$Y_t = \phi^t Y_0 + \sum_{j=0}^{t-1} \phi^j e_{t-j},$$
the process is not stationary if $\phi = 1$, as it violates Stationary Stochastic Processes > Definition 7 (weak stationarity). Taking $Y_0 = 0$ and $\phi = 1$, so that $Y_t = \sum_{s=1}^{t} e_s$:

  • check the mean stationarity: $E[Y_t] = 0$ for every $t$, so the mean is constant;
  • check the variance stationarity: $Var(Y_t) = t\sigma^2$, which depends on $t$;
  • check the covariance stationarity: $Cov(Y_t, Y_{t-k}) = (t-k)\sigma^2$, which depends on $t$ and not only on the lag $k$.

If the process is $|\phi| < 1$, then we have $(1 - \phi L) Y_t = e_t$ and
$$(1 - \phi L)^{-1} = \sum_{j=0}^{\infty} \phi^j L^j.$$
Thus by $\sum_{j=0}^{\infty} \phi^{2j} = \frac{1}{1 - \phi^2} < \infty$, we have
$$Y_t = \sum_{j=0}^{\infty} \phi^j e_{t-j},$$
where it is indeed stationary by Wold Decomposition > Theorem 8 (a.s. convergence of linear process).

Note that the condition of $|\phi| < 1$ is equivalent to the requirement that the root of the equation $\phi(z) = 1 - \phi z = 0$ is greater than unity in absolute value, since from $1 - \phi z = 0$ we can derive the root as
$$z = \frac{1}{\phi}, \qquad |z| = \frac{1}{|\phi|} > 1.$$

Now consider the case when $|\phi| > 1$, and rewrite the process as
$$Y_t = \phi^{-1} Y_{t+1} - \phi^{-1} e_{t+1},$$
which is now a forward process. Then similarly, we have
$$Y_t = -\sum_{j=1}^{\infty} \phi^{-j} e_{t+j},$$
which is also a stationary form, since $|\phi^{-1}| < 1$. However, we will exclude this case since it specifies that the current value of the variable depends on future innovations, which is not appealing for economic data. Thus we often restrict attention to the case when $|\phi| < 1$, and this condition is called Causality.
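As a small numerical illustration (my own sketch, not part of the original note; the function and parameter values are hypothetical), simulating many independent AR(1) paths shows the variance behavior derived above: with $|\phi| < 1$ the cross-sectional variance settles near $\sigma^2/(1-\phi^2)$, while with $\phi = 1$ it grows linearly in $t$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, n_paths, sigma=1.0):
    """Simulate n_paths independent AR(1) paths Y_t = phi*Y_{t-1} + e_t, Y_0 = 0."""
    e = rng.normal(0.0, sigma, size=(n_paths, n))
    y = np.zeros((n_paths, n))
    for t in range(1, n):
        y[:, t] = phi * y[:, t - 1] + e[:, t]
    return y

# |phi| < 1: the cross-sectional variance settles near sigma^2 / (1 - phi^2)
y_stat = simulate_ar1(0.5, 500, 2000)
print(np.var(y_stat[:, -1]), 1 / (1 - 0.5**2))    # both close to 1.33

# phi = 1 (random walk): Var(Y_t) = t * sigma^2 grows linearly in t
y_rw = simulate_ar1(1.0, 500, 2000)
print(np.var(y_rw[:, 250]), np.var(y_rw[:, -1]))  # roughly 250 vs 499
```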

Stationarity of ARMA Process

Now we look into more general conditions of stationarity for the $ARMA(p,q)$ model. As the $MA(q)$ process is always stationary for every $(\theta_1, \ldots, \theta_q)$, the stationarity of the general $ARMA(p,q)$ model defined as $\phi(L) Y_t = \theta(L) e_t$ only depends on the stationarity of the $AR(p)$ part. Thus, we can simply write $\phi(L) Y_t = u_t$ where $u_t = \theta(L) e_t$, and find the stationarity condition for the $AR(p)$ model.

Proposition (stationarity condition for AR process).

If $\{u_t\}$ is a stationary process with autocovariance function $\gamma_u(\cdot)$, and $\{\psi_j\}$ satisfies $\sum_{j=0}^{\infty} |\psi_j| < \infty$, then the process
$$Y_t = \sum_{j=0}^{\infty} \psi_j u_{t-j}$$
is stationary with autocovariance function
$$\gamma_Y(k) = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \psi_j \psi_i \, \gamma_u(k - j + i).$$

Proof. Then, for $Y_t = \sum_{j=0}^{\infty} \psi_j u_{t-j}$ we can directly apply Stationary Stochastic Processes > ^cea55f, which results in the stationarity of $\{Y_t\}$.

Furthermore, the autocovariance function can be derived from Stationary Stochastic Processes > Definition 6 (autocovariance function), since
$$\gamma_Y(k) = Cov(Y_{t+k}, Y_t) = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \psi_j \psi_i \, Cov(u_{t+k-j}, u_{t-i}) = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \psi_j \psi_i \, \gamma_u(k - j + i),$$
which completes the proof.
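As a numerical sanity check on the proposition (my own sketch; the function name is hypothetical): when the input is white noise, the double sum collapses to $\gamma_Y(k) = \sigma^2 \sum_j \psi_j \psi_{j+k}$, which can be compared against the closed-form AR(1) autocovariance $\gamma(k) = \sigma^2 \phi^k / (1 - \phi^2)$.

```python
import numpy as np

def linear_process_acvf(psi, sigma2, k):
    """gamma_Y(k) = sigma2 * sum_j psi_j * psi_{j+k} for Y_t = sum_j psi_j e_{t-j};
    this is the double-sum formula specialized to white-noise input."""
    psi = np.asarray(psi)
    if k == 0:
        return sigma2 * np.sum(psi**2)
    return sigma2 * np.sum(psi[:-k] * psi[k:])

# AR(1) with phi = 0.5 has psi_j = phi^j and gamma(k) = sigma2 * phi^k / (1 - phi^2)
phi, sigma2 = 0.5, 1.0
psi = phi ** np.arange(200)          # truncated; the tail is negligible
print(linear_process_acvf(psi, sigma2, 0), 1 / (1 - phi**2))        # ~1.3333
print(linear_process_acvf(psi, sigma2, 2), phi**2 / (1 - phi**2))   # ~0.3333
```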

Therefore, for the $ARMA(p,q)$ process $\phi(L) Y_t = \theta(L) e_t$, we can simply check whether all roots of $\phi(z) = 0$ lie outside the unit circle, for the stationarity.
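This root check is easy to automate (a sketch of my own, not from the note; the helper name is hypothetical). Note that `np.roots` expects coefficients ordered from the highest degree down, so the coefficients of $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$ must be reversed and negated first.

```python
import numpy as np

def ar_roots_outside_unit_circle(phi):
    """Check whether all roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p
    lie strictly outside the unit circle (the stationarity/causality condition)."""
    # np.roots wants highest-degree-first coefficients: -phi_p, ..., -phi_1, 1
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0)), roots

# AR(1) with phi = 0.5: single root z = 2, outside the unit circle -> stationary
print(ar_roots_outside_unit_circle([0.5]))
# AR(2) with phi_1 = 1.2, phi_2 = -0.8: complex roots, modulus ~1.118 -> stationary
print(ar_roots_outside_unit_circle([1.2, -0.8]))
# Random walk phi = 1: root z = 1 on the unit circle -> not stationary
print(ar_roots_outside_unit_circle([1.0]))
```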

Causality

Definition (causality of ARMA).

An $ARMA(p,q)$ process defined by the equation $\phi(L) Y_t = \theta(L) e_t$ is said to be causal if there is a sequence of constants $\{\psi_j\}$ such that $\sum_{j=0}^{\infty} |\psi_j| < \infty$ and
$$Y_t = \sum_{j=0}^{\infty} \psi_j e_{t-j}.$$

Unlike the previously stated theorem:
Theorem 7 (stationary linear process).

If $e_t \sim WN(0, \sigma^2)$ and if $\sum_{j=-\infty}^{\infty} \psi_j^2 < \infty$, then the series
$$Y_t = \mu + \sum_{j=-\infty}^{\infty} \psi_j e_{t-j}$$
converges in mean square and $\{Y_t\}$ is stationary.


to satisfy causality, the process must not depend on future shocks: the sum runs only over $j \geq 0$. In general, a causal $ARMA$ process is essentially what we denote as a stationary process.
Example (stationary AR(1) process).

Consider an $AR(1)$ process $Y_t = \phi Y_{t-1} + e_t$, where $\{e_t\}$ is a white noise process such that $E[e_t] = 0$ and $Var(e_t) = \sigma^2$. Now assume $|\phi| < 1$; then $\{Y_t\}$ is a causal process.

Proof. Note that we have $\psi_j = \phi^j$ and
$$\sum_{j=0}^{\infty} |\phi|^j = \frac{1}{1 - |\phi|} < \infty$$
by the assumption that $|\phi| < 1$. Then by Proposition 4 (stationarity condition for AR process), $\{Y_t\}$ is a stationary process.

Now we show that $\{Y_t\}$ is also causal. Remark that we have
$$Y_t = \sum_{j=0}^{\infty} \psi_j e_{t-j},$$
where $\psi_j = \phi^j$. Then, since $\sum_{j=0}^{\infty} |\psi_j| < \infty$, by Definition 5 (causality of ARMA), $\{Y_t\}$ is not only stationary but also causal.

Also, as we have shown in Stationarity of AR(1) Process, the condition $|\phi| < 1$ is equivalent to the root of $\phi(z) = 1 - \phi z = 0$ being greater than unity in absolute value. Incorporating the possibility of complex roots, this is often called no root inside the unit circle, meaning that every root has modulus strictly greater than one.

Theorem (no root inside unit circle for AR process).

Let $\{Y_t\}$ be an $ARMA(p,q)$ process for which the polynomials $\phi(\cdot)$ and $\theta(\cdot)$ have no common zeros. Then, $\{Y_t\}$ is causal if and only if $\phi(z) \neq 0$ for all $z \in \mathbb{C}$ such that $|z| \leq 1$.

Proof. We only prove the 'if' part. Assume that $\phi(z) \neq 0$ for all $z \in \mathbb{C}$ such that $|z| \leq 1$. Then, $1/\phi(z)$ has a power series expansion
$$\frac{1}{\phi(z)} = \sum_{j=0}^{\infty} \xi_j z^j =: \xi(z)$$
on $|z| \leq 1 + \epsilon$ for some $\epsilon > 0$. This implies that $\xi_j (1 + \epsilon/2)^j \to 0$ as $j \to \infty$, so there exists a constant $K > 0$ such that
$$|\xi_j| < K (1 + \epsilon/2)^{-j} \quad \text{for all } j.$$
Therefore, we have $\sum_{j=0}^{\infty} |\xi_j| < \infty$, and
$$Y_t = \xi(L) \theta(L) e_t =: \psi(L) e_t$$
is a well-defined causal process.
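The power-series expansion in the proof can also be computed numerically (my own sketch; the function name is hypothetical): the coefficients of $\psi(z) = \theta(z)/\phi(z)$ satisfy the recursion $\psi_j = \theta_j + \phi_1 \psi_{j-1} + \cdots + \phi_p \psi_{j-p}$, with $\theta_0 = 1$ and $\theta_j = 0$ for $j > q$.

```python
import numpy as np

def psi_weights(phi, theta, n):
    """First n coefficients of psi(z) = theta(z)/phi(z), where
    phi(z) = 1 - phi_1 z - ... - phi_p z^p and theta(z) = 1 + theta_1 z + ...
    Recursion: psi_j = theta_j + sum_k phi_k * psi_{j-k}, with theta_0 = 1."""
    psi = np.zeros(n)
    for j in range(n):
        th = 1.0 if j == 0 else (theta[j - 1] if j - 1 < len(theta) else 0.0)
        psi[j] = th + sum(phi[k] * psi[j - 1 - k]
                          for k in range(min(j, len(phi))))
    return psi

# AR(1) with phi = 0.6: psi_j = 0.6**j
print(psi_weights([0.6], [], 5))       # [1, 0.6, 0.36, 0.216, 0.1296]
# ARMA(1,1): psi_0 = 1 and psi_j = (phi + theta) * phi**(j-1) for j >= 1
print(psi_weights([0.5], [0.4], 5))    # [1, 0.9, 0.45, 0.225, 0.1125]
```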

Invertibility

Definition (invertibility of MA process).

An $ARMA(p,q)$ process defined by the equation $\phi(L) Y_t = \theta(L) e_t$ is said to be invertible if there is a sequence of constants $\{\pi_j\}$ such that $\sum_{j=0}^{\infty} |\pi_j| < \infty$ and
$$e_t = \sum_{j=0}^{\infty} \pi_j Y_{t-j}.$$

Theorem (no root inside the unit circle for MA process).

Let $\{Y_t\}$ be an $ARMA(p,q)$ process for which the polynomials $\phi(\cdot)$ and $\theta(\cdot)$ have no common zeros. Then $\{Y_t\}$ is invertible if and only if $\theta(z) \neq 0$ for all $z \in \mathbb{C}$ such that $|z| \leq 1$.

For an invertible process, an approximation by a finite-order $AR$ process is well defined. This fact has practical importance, since the estimation of $AR$ processes is often easier than that of $MA$ processes.

Example (invertible condition for MA(1) process).

Let the two $MA(1)$ processes be
$$Y_t = e_t + \theta e_{t-1}, \qquad e_t \sim WN(0, \sigma^2),$$
and
$$\tilde{Y}_t = \tilde{e}_t + \frac{1}{\theta} \tilde{e}_{t-1}, \qquad \tilde{e}_t \sim WN(0, \theta^2 \sigma^2).$$
Then only one of the two is invertible.

Proof. Calculating the autocovariance function for the first case $Y_t = e_t + \theta e_{t-1}$, we have
$$\gamma_Y(0) = (1 + \theta^2)\sigma^2, \qquad \gamma_Y(1) = \theta \sigma^2,$$
and thus for any $|k| \geq 2$, we have $\gamma_Y(k) = 0$.

For the second case $\tilde{Y}_t = \tilde{e}_t + \theta^{-1} \tilde{e}_{t-1}$, we have
$$\gamma_{\tilde{Y}}(0) = (1 + \theta^{-2})\theta^2\sigma^2 = (1 + \theta^2)\sigma^2, \qquad \gamma_{\tilde{Y}}(1) = \theta^{-1} \cdot \theta^2 \sigma^2 = \theta\sigma^2,$$
and similarly for all $|k| \geq 2$, $\gamma_{\tilde{Y}}(k) = 0$. However, a problem arises here, since we have the identical autocovariance function for the two different models and cannot differentiate between the two.

From the lag polynomial $\theta(z) = 1 + \theta z$ we have the root
$$z = -\frac{1}{\theta}.$$
Also, for $\tilde{\theta}(z) = 1 + \theta^{-1} z$ we have
$$z = -\theta.$$
Thus, by imposing the restriction that the root lies outside the unit circle, only one of the two satisfies invertibility. For example, if $|\theta| < 1$, then the first case is invertible, and we have
$$e_t = (1 + \theta L)^{-1} Y_t = \sum_{j=0}^{\infty} (-\theta)^j Y_{t-j},$$
which leads us to this representation by Difference Equations of Lag Operators > Proposition 3 (inverse of first order difference equation).
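The identification problem and the inversion above can be checked numerically (my own sketch; the function name and $\theta = 0.5$ are illustrative choices). The lag-1 autocorrelation of an $MA(1)$ with parameter $\theta$ is $\theta/(1+\theta^2)$, which is unchanged under $\theta \mapsto 1/\theta$, while only the $|\theta| < 1$ parametrization yields absolutely summable inversion weights $\pi_j = (-\theta)^j$.

```python
def ma1_acf1(theta):
    """Lag-1 autocorrelation of the MA(1) process Y_t = e_t + theta * e_{t-1}."""
    return theta / (1.0 + theta**2)

theta = 0.5
# The parametrizations theta and 1/theta give the same autocorrelation,
# so the acf alone cannot distinguish the two models:
print(ma1_acf1(theta), ma1_acf1(1 / theta))   # both 0.4

# For |theta| < 1 the process is invertible:
# e_t = sum_j (-theta)^j Y_{t-j}, with absolutely summable weights.
pi = [(-theta)**j for j in range(6)]
print(pi)   # [1, -0.5, 0.25, -0.125, 0.0625, -0.03125]
```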