State-Space Model

#econometrics #economics #bayesian #SSM

Oh, Hyunzi. (email: wisdom302@naver.com)
Korea University, Graduate School of Economics.


Main References

  • Kang, Kyuho. (2021). "Bayesian Econometrics, 2nd ed." (베이지안 계량경제학, 제2판). Pakyoungsa (박영사).
  • Kim, Seung Hyun. (2024). "Asset Pricing and Term Structure Models". WORK IN PROGRESS.
  • Peng, R. D. (2022). "Advanced Statistical Computing". Published with bookdown. https://bookdown.org/rdpeng/advstatcomp/ (visited on 5th May, 2024)

SS Model

Generalized SSM

  • Measurement equation: describes the relationship between the observed variable $y_t$ and the unobserved continuous state variable $\beta_t$.
  • Transition equation: describes the dynamics of the state variable $\beta_t$.

In matrix form,

  • Measurement equation: $y_t = H\beta_t + e_t$, where $e_t \sim N(0, R)$.
  • Transition equation: $\beta_t = \mu + F\beta_{t-1} + v_t$, where $v_t \sim N(0, Q)$ and $e_t$, $v_t$ are mutually independent.
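To make the matrix form concrete, the system above can be simulated directly. A minimal NumPy sketch; the parameter values and the function name `simulate_ssm` are illustrative, not from the references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example parameters:
# measurement: y_t = H b_t + e_t,  e_t ~ N(0, R)
# transition:  b_t = mu + F b_{t-1} + v_t,  v_t ~ N(0, Q)
H = np.array([[1.0, 0.0]])              # 1 observable, 2 states
F = np.array([[0.7, 0.2], [0.0, 0.5]])
mu = np.array([0.1, 0.0])
Q = np.diag([0.3, 0.1])
R = np.array([[0.2]])

def simulate_ssm(T, H, F, mu, Q, R, rng):
    """Simulate T periods of (y_t, b_t) from the linear Gaussian SSM."""
    k, n = F.shape[0], H.shape[0]
    b = np.zeros(k)                     # start the state at zero
    ys, bs = [], []
    for _ in range(T):
        b = mu + F @ b + rng.multivariate_normal(np.zeros(k), Q)
        y = H @ b + rng.multivariate_normal(np.zeros(n), R)
        bs.append(b); ys.append(y)
    return np.array(ys), np.array(bs)

y, b = simulate_ssm(200, H, F, mu, Q, R, rng)
```

The simulated pair `(y, b)` is convenient for testing a filter implementation, since the latent states are known.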

Unobserved Component (UC) Model

Assuming $y_t$ is log real GDP, we decompose $y_t$ into the trend $\tau_t$, driven by the AS shock ($v_t$), and the cycle $c_t$, driven by the AD shock ($\epsilon_t$), where the AS shock has a permanent effect on $y_t$ and the AD shock has only a temporary effect.

  • Measurement equation: $y_t = \tau_t + c_t$.
  • Transition equation: $\tau_t = \mu + \tau_{t-1} + v_t$ and $c_t = \phi_1 c_{t-1} + \phi_2 c_{t-2} + \epsilon_t$, where $v_t \sim N(0, \sigma_v^2)$ and $\epsilon_t \sim N(0, \sigma_\epsilon^2)$.

In matrix form, we have

  • Measurement equation: $y_t = H\beta_t$, where $H = (1, 1, 0)$ and $\beta_t = (\tau_t, c_t, c_{t-1})'$.
  • Transition equation: $\beta_t = \tilde{\mu} + F\beta_{t-1} + \tilde{v}_t$, where

$$\tilde{\mu} = \begin{pmatrix} \mu \\ 0 \\ 0 \end{pmatrix}, \quad F = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \phi_1 & \phi_2 \\ 0 & 1 & 0 \end{pmatrix}, \quad \tilde{v}_t = \begin{pmatrix} v_t \\ \epsilon_t \\ 0 \end{pmatrix} \quad \text{and} \quad Q = \mathrm{diag}(\sigma_v^2, \sigma_\epsilon^2, 0).$$
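Casting the UC model into the matrix SSM form is mechanical; a short sketch, assuming hypothetical values for $\mu$, $\phi_1$, $\phi_2$, and the shock variances:

```python
import numpy as np

# Cast the UC model into SSM matrices; state b_t = (tau_t, c_t, c_{t-1})'.
# phi1, phi2, mu, sig_v, sig_eps are hypothetical example values.
phi1, phi2 = 1.2, -0.4
mu, sig_v, sig_eps = 0.5, 0.6, 0.8

H = np.array([[1.0, 1.0, 0.0]])           # y_t = tau_t + c_t (no measurement error)
mu_vec = np.array([mu, 0.0, 0.0])
F = np.array([[1.0, 0.0, 0.0],            # tau_t = mu + tau_{t-1} + v_t
              [0.0, phi1, phi2],          # c_t = phi1 c_{t-1} + phi2 c_{t-2} + eps_t
              [0.0, 1.0, 0.0]])           # identity row: carries c_{t-1} forward
Q = np.diag([sig_v**2, sig_eps**2, 0.0])  # third state has no shock of its own
```

The third state row exists only to stack the AR(2) cycle into first-order form, which is why its shock variance is zero.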

Dynamic Common Factor Model (DFM)

Assume that the multiple dependent variables $y_{1t}, \dots, y_{nt}$ are determined by the dynamic common factor ($f_t$) and their idiosyncratic terms ($e_{it}$).

  • Measurement equation: $y_{it} = \gamma_i f_t + e_{it}$, for $i = 1, \dots, n$.
  • Transition equation: $f_t = \phi f_{t-1} + v_t$, where $v_t \sim N(0, \sigma_v^2)$.

In matrix form, we have

  • Measurement equation: $y_t = \gamma f_t + e_t$, where $y_t = (y_{1t}, \dots, y_{nt})'$, $\gamma = (\gamma_1, \dots, \gamma_n)'$, and $e_t \sim N(0, R)$.
  • Transition equation: $f_t = \phi f_{t-1} + v_t$, where $v_t \sim N(0, \sigma_v^2)$.

Time-Varying Parameter (TVP) Model

Assume that the coefficient $\beta_t$ is time-varying, meaning that the effect of the independent variable $x_t$ on the dependent variable $y_t$ can differ across time.

  • Measurement equation: $y_t = x_t'\beta_t + e_t$.
  • Transition equation: $\beta_t = \beta_{t-1} + v_t$, where $e_t \sim N(0, \sigma_e^2)$ and $v_t \sim N(0, Q)$.
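A minimal simulation sketch of the TVP regression; the dimensions and variances below are hypothetical examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# TVP regression as an SSM: y_t = x_t' b_t + e_t,  b_t = b_{t-1} + v_t.
# T, k, sig_e, and Q are hypothetical example values.
T, k = 300, 2
sig_e = 0.5
Q = np.diag([0.02, 0.01])

x = rng.normal(size=(T, k))                    # regressors
# random-walk coefficients: cumulative sum of Gaussian increments
b = np.cumsum(rng.multivariate_normal(np.zeros(k), Q, size=T), axis=0)
y = (x * b).sum(axis=1) + sig_e * rng.normal(size=T)
```

Because the coefficients follow a random walk, their effect on $y_t$ drifts slowly over the sample, which is exactly the behavior the TVP model is meant to capture.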

Kalman Filter

Below, we assume the following. Given the SSM

$$y_t = H\beta_t + e_t, \qquad \beta_t = \mu + F\beta_{t-1} + v_t,$$

note that

  1. both the measurement and transition equations are linear;
  2. $e_t$ and $v_t$ follow i.i.d. Gaussian distributions, $e_t \sim N(0, R)$ and $v_t \sim N(0, Q)$;
  3. $R \neq 0$; if not, then the model becomes UC, meaning that the observation is an exact linear combination of the states ($y_t = H\beta_t$).

The basic idea of the Kalman Filter (KF) is to derive the likelihood function $L(\theta) = \prod_{t=1}^{T} f(y_t \mid \mathcal{F}_{t-1}; \theta)$, where $\theta$ collects the model parameters, using the prediction error $\eta_{t|t-1}$ and its variance $f_{t|t-1}$ for each period $t \in \{1, \dots, T\}$.

Notations

  • $\mathcal{F}_t$: information set up to the $t$th sample period, $\mathcal{F}_t = \{y_1, \dots, y_t\}$.
  • $\beta_{t|t-1} = E[\beta_t \mid \mathcal{F}_{t-1}]$.
  • $P_{t|t-1} = \mathrm{Var}(\beta_t \mid \mathcal{F}_{t-1})$.
  • $\beta_{t|t} = E[\beta_t \mid \mathcal{F}_t]$.
  • $P_{t|t} = \mathrm{Var}(\beta_t \mid \mathcal{F}_t)$.
  • $y_{t|t-1} = E[y_t \mid \mathcal{F}_{t-1}]$ and $\eta_{t|t-1} = y_t - y_{t|t-1}$.
  • $f_{t|t-1} = \mathrm{Var}(y_t \mid \mathcal{F}_{t-1})$.

Initialization

Stationary Case

The initial values for the filter are chosen as

$$\beta_{0|0} = E[\beta_0], \qquad P_{0|0} = \mathrm{Var}(\beta_0),$$

where $E[\beta_0]$ and $\mathrm{Var}(\beta_0)$ are the unconditional mean and variance of the initial state $\beta_0$, if $\beta_t$ follows a stationary process.

Derive $\beta_{0|0}$:

Take the unconditional mean of the transition equation: $E[\beta_t] = \mu + F\,E[\beta_{t-1}]$. Letting $E[\beta_t] = E[\beta_{t-1}] = \beta_{0|0}$ under stationarity, we have

$$\beta_{0|0} = (I - F)^{-1}\mu.$$

Derive $P_{0|0}$:

Take the unconditional variance of the transition equation: since $\mathrm{Var}(\beta_t) = F\,\mathrm{Var}(\beta_{t-1})\,F' + Q$, letting $\mathrm{Var}(\beta_t) = \mathrm{Var}(\beta_{t-1}) = P_{0|0}$ we have $P_{0|0} = F P_{0|0} F' + Q$. Using vectorization, since $\mathrm{vec}(ABC) = (C' \otimes A)\,\mathrm{vec}(B)$,

$$\mathrm{vec}(P_{0|0}) = (I - F \otimes F)^{-1}\,\mathrm{vec}(Q).$$

Nonstationary Case

However, if $\beta_t$ follows a nonstationary process, then $\beta_{0|0}$ and $P_{0|0}$ are chosen arbitrarily; for instance, we let $\beta_{0|0} = 0$ and $P_{0|0} = c \cdot I$ for some large constant $c$ (a diffuse initialization).
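The stationary initialization can be computed directly from $\mu$, $F$, and $Q$. A sketch, assuming a hypothetical stationary transition (the helper name `init_stationary` is mine):

```python
import numpy as np

def init_stationary(mu, F, Q):
    """Unconditional mean and variance of a stationary state process:
    b0 = (I - F)^{-1} mu ;  vec(P0) = (I - F kron F)^{-1} vec(Q)."""
    k = F.shape[0]
    b0 = np.linalg.solve(np.eye(k) - F, mu)
    # vec(.) in column-major order to match vec(ABC) = (C' kron A) vec(B)
    vecP0 = np.linalg.solve(np.eye(k * k) - np.kron(F, F),
                            Q.reshape(-1, order="F"))
    return b0, vecP0.reshape(k, k, order="F")

# Hypothetical example values:
F = np.array([[0.5, 0.1], [0.0, 0.3]])
mu = np.array([1.0, 0.0])
Q = np.diag([0.4, 0.2])
b0, P0 = init_stationary(mu, F, Q)
```

A useful self-check is that the returned `P0` satisfies the discrete Lyapunov equation $P_{0|0} = F P_{0|0} F' + Q$ up to floating-point error.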

Prediction Step

Prediction of

By the assumption that $e_t$ and $v_t$ are i.i.d. Gaussian, we can claim that all conditional distributions in the filter are Gaussian. Now suppose we have obtained $\beta_{t-1|t-1}$ and $P_{t-1|t-1}$ for period $t-1$. Then we have

$$\beta_t = \mu + F\beta_{t-1} + v_t.$$

Therefore, we have

$$\beta_{t|t-1} = E[\beta_t \mid \mathcal{F}_{t-1}] = \mu + F\beta_{t-1|t-1}$$

and

$$P_{t|t-1} = \mathrm{Var}(\beta_t \mid \mathcal{F}_{t-1}) = F P_{t-1|t-1} F' + Q.$$

Note that since $v_t$ is Gaussian and independent of $\mathcal{F}_{t-1}$, the conditional distribution is Gaussian, which leads to

$$\beta_t \mid \mathcal{F}_{t-1} \sim N(\beta_{t|t-1},\ P_{t|t-1}).$$

Prediction of

From

$$y_t = H\beta_t + e_t,$$

we have

$$y_{t|t-1} = E[y_t \mid \mathcal{F}_{t-1}] = H\beta_{t|t-1}$$

and

$$f_{t|t-1} = \mathrm{Var}(y_t \mid \mathcal{F}_{t-1}) = H P_{t|t-1} H' + R.$$

Note that this result yields

$$y_t \mid \mathcal{F}_{t-1} \sim N(y_{t|t-1},\ f_{t|t-1}).$$

Filtering Step

From the previous results, we have $\beta_t \mid \mathcal{F}_{t-1} \sim N(\beta_{t|t-1}, P_{t|t-1})$ and $y_t \mid \mathcal{F}_{t-1} \sim N(y_{t|t-1}, f_{t|t-1})$. Since $y_t = H\beta_t + e_t$, we have the joint distribution

$$\begin{pmatrix} \beta_t \\ y_t \end{pmatrix} \Bigg|\ \mathcal{F}_{t-1} \sim N\!\left( \begin{pmatrix} \beta_{t|t-1} \\ y_{t|t-1} \end{pmatrix},\ \begin{pmatrix} P_{t|t-1} & P_{t|t-1}H' \\ H P_{t|t-1} & f_{t|t-1} \end{pmatrix} \right),$$

where the covariance term is from

$$\mathrm{Cov}(\beta_t, y_t \mid \mathcal{F}_{t-1}) = \mathrm{Cov}(\beta_t,\ H\beta_t + e_t \mid \mathcal{F}_{t-1}) = P_{t|t-1}H'.$$

Remark (Conditional Distribution of Multivariate Gaussian).

Given the multivariate Gaussian distribution

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim N\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},\ \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right),$$

the conditional distribution can be derived as

$$x_1 \mid x_2 \sim N\big(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\big)$$

and similarly,

$$x_2 \mid x_1 \sim N\big(\mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(x_1 - \mu_1),\ \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\big).$$

Using Remark 1 (Conditional Distribution of Multivariate Gaussian), from the multivariate Gaussian above (and noting $\mathcal{F}_t = \{\mathcal{F}_{t-1}, y_t\}$) we have

$$\beta_t \mid \mathcal{F}_t \sim N(\beta_{t|t},\ P_{t|t}),$$

where

$$\beta_{t|t} = \beta_{t|t-1} + P_{t|t-1}H'\,f_{t|t-1}^{-1}\,\eta_{t|t-1}$$

and

$$P_{t|t} = P_{t|t-1} - P_{t|t-1}H'\,f_{t|t-1}^{-1}\,H P_{t|t-1}.$$

Defining the Kalman gain as

$$K_t = P_{t|t-1}H'\,f_{t|t-1}^{-1},$$

we have

$$\beta_{t|t} = \beta_{t|t-1} + K_t\,\eta_{t|t-1}, \qquad P_{t|t} = P_{t|t-1} - K_t H P_{t|t-1}.$$

Intuitively understanding the Kalman gain:
  • $P_{t|t-1}H'$: the covariance between $\beta_t$ and $y_t$, representing how much $\beta_t$ is affected by the deviation of $y_t$ from its prediction.
  • $f_{t|t-1}^{-1}$: the inverse of the variance of $y_t$, representing the uncertainty of the information in $y_t$, which reduces the effect of $\eta_{t|t-1}$ on $\beta_{t|t}$.

Summing up

As a result, since $e_t$ and $v_t$ are independent of $\beta_{t-1}$ and $\mathcal{F}_{t-1}$, $\beta_t$ depends on the past only through $(\beta_{t-1|t-1}, P_{t-1|t-1})$. Furthermore, we have $\beta_t \mid \mathcal{F}_t \sim N(\beta_{t|t}, P_{t|t})$. By induction, starting from $(\beta_{0|0}, P_{0|0})$, we can calculate the Kalman filtered values for $t = 1, \dots, T$ as follows:

$$\begin{aligned} \beta_{t|t-1} &= \mu + F\beta_{t-1|t-1} \\ P_{t|t-1} &= F P_{t-1|t-1} F' + Q \\ \eta_{t|t-1} &= y_t - H\beta_{t|t-1} \\ f_{t|t-1} &= H P_{t|t-1} H' + R \\ K_t &= P_{t|t-1}H'\,f_{t|t-1}^{-1} \\ \beta_{t|t} &= \beta_{t|t-1} + K_t\,\eta_{t|t-1} \\ P_{t|t} &= P_{t|t-1} - K_t H P_{t|t-1}. \end{aligned}$$

Gaussian Quasi-log likelihood

Note that the log-likelihood can be decomposed as

$$\log L(\theta) = \sum_{t=1}^{T} \log f(y_t \mid \mathcal{F}_{t-1}; \theta),$$

where $f(y_t \mid \mathcal{F}_{t-1}; \theta)$ is the density of $y_t$ given $\mathcal{F}_{t-1}$ and $\theta$.

Since the prediction errors $\eta_{t|t-1}$ are Gaussian and serially independent, with $y_t \mid \mathcal{F}_{t-1} \sim N(y_{t|t-1}, f_{t|t-1})$, for each $t$ we have

$$\log f(y_t \mid \mathcal{F}_{t-1}; \theta) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log\lvert f_{t|t-1}\rvert - \frac{1}{2}\,\eta_{t|t-1}'\,f_{t|t-1}^{-1}\,\eta_{t|t-1},$$

where $n$ is the dimension of $y_t$. Therefore,

$$\log L(\theta) = -\frac{Tn}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{T}\Big(\log\lvert f_{t|t-1}\rvert + \eta_{t|t-1}'\,f_{t|t-1}^{-1}\,\eta_{t|t-1}\Big).$$

The Gaussian QMLE of $\theta$ is now found by maximizing the Gaussian quasi log-likelihood with respect to $\theta$.
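The full filter-plus-likelihood recursion can be sketched in NumPy as follows. The function name `kalman_loglik` and the example parameters are illustrative; a production implementation would also guard against a non-positive-definite $f_{t|t-1}$:

```python
import numpy as np

def kalman_loglik(y, H, F, mu, Q, R, b0, P0):
    """Run the Kalman filter and accumulate the Gaussian quasi log-likelihood
    sum_t log N(y_t | y_{t|t-1}, f_{t|t-1})."""
    T, n = y.shape
    b, P = b0, P0
    ll = 0.0
    filtered = []
    for t in range(T):
        # prediction step
        b_pred = mu + F @ b
        P_pred = F @ P @ F.T + Q
        eta = y[t] - H @ b_pred                  # prediction error
        f = H @ P_pred @ H.T + R                 # prediction-error variance
        # period-t log-likelihood contribution
        ll += -0.5 * (n * np.log(2 * np.pi) + np.log(np.linalg.det(f))
                      + eta @ np.linalg.solve(f, eta))
        # filtering step
        K = P_pred @ H.T @ np.linalg.inv(f)      # Kalman gain
        b = b_pred + K @ eta
        P = P_pred - K @ H @ P_pred
        filtered.append(b)
    return ll, np.array(filtered)

# Hypothetical example: scalar AR(1) state observed with noise.
rng = np.random.default_rng(2)
F_, mu_ = np.array([[0.8]]), np.array([0.0])
Q_, H_, R_ = np.array([[0.5]]), np.array([[1.0]]), np.array([[0.3]])
b = np.zeros(1); ys = []
for _ in range(150):
    b = mu_ + F_ @ b + rng.multivariate_normal([0.0], Q_)
    ys.append(H_ @ b + rng.multivariate_normal([0.0], R_))
y = np.array(ys)
ll, b_filt = kalman_loglik(y, H_, F_, mu_, Q_, R_, np.zeros(1), np.eye(1))
```

The returned `ll` is the quantity the QMLE maximizes; it can be passed to any numerical optimizer as a function of the parameters.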


Kalman Smoother

Kalman Smoother (KS) aims to obtain the best (mean squared error minimizing) estimate of the state $\beta_t$ given the information up to time $T$, i.e. $\beta_{t|T} = E[\beta_t \mid \mathcal{F}_T]$.

First, we assume that $\beta_{T|T}$ and $P_{T|T}$ are given from the last step of the Kalman filter. Now, we want to derive $\beta_{t|T}$ and $P_{t|T}$ for $t = T-1, \dots, 1$. Additionally, it is assumed that $P_{t+1|t}$ is invertible (the non-singular case).

Notations

  • $\beta_{t|T} = E[\beta_t \mid \mathcal{F}_T]$.
  • $P_{t|T} = \mathrm{Var}(\beta_t \mid \mathcal{F}_T)$.

Derive $\beta_t \mid \beta_{t+1}, \mathcal{F}_t$

From

$$\beta_t \mid \mathcal{F}_t \sim N(\beta_{t|t}, P_{t|t})$$

and

$$\beta_{t+1} = \mu + F\beta_t + v_{t+1},$$

we have $\mathrm{Cov}(\beta_t, \beta_{t+1} \mid \mathcal{F}_t) = P_{t|t}F'$. Since

$$\begin{pmatrix} \beta_t \\ \beta_{t+1} \end{pmatrix} \Bigg|\ \mathcal{F}_t \sim N\!\left( \begin{pmatrix} \beta_{t|t} \\ \beta_{t+1|t} \end{pmatrix},\ \begin{pmatrix} P_{t|t} & P_{t|t}F' \\ F P_{t|t} & P_{t+1|t} \end{pmatrix} \right),$$

using the result of Remark 1 (Conditional Distribution of Multivariate Gaussian), we have

$$\beta_t \mid \beta_{t+1}, \mathcal{F}_t \sim N\big(\beta_{t|t} + J_t(\beta_{t+1} - \beta_{t+1|t}),\ P_{t|t} - J_t P_{t+1|t} J_t'\big),$$

where

$$J_t = P_{t|t}F'\,P_{t+1|t}^{-1}.$$

Derive $\beta_{t|T}$ and $P_{t|T}$

Before deriving $\beta_{t|T}$, notice that $y_{t+1}$ carries no information about $\beta_t$ beyond $(\beta_{t+1}, \mathcal{F}_t)$, since $y_{t+1} = H\beta_{t+1} + e_{t+1}$ is determined by $\beta_{t+1}$ and $e_{t+1}$.
Likewise for $y_{t+2}$, since $y_{t+2}$ is determined by $\beta_{t+2}$, hence by $\beta_{t+1}$, $v_{t+2}$, and $e_{t+2}$.
Therefore, by induction we have the relationship

$$p(\beta_t \mid \beta_{t+1}, \mathcal{F}_T) = p(\beta_t \mid \beta_{t+1}, \mathcal{F}_t).$$

Now write the information set as $\mathcal{F}_T = \{\mathcal{F}_t, y_{t+1}, \dots, y_T\}$. Then, given $\beta_{t+1}$ and $\mathcal{F}_t$, $\beta_t$ is independent of $y_{t+1}, \dots, y_T$.

Now, by the law of iterated expectations, we have

$$\beta_{t|T} = E\big[E[\beta_t \mid \beta_{t+1}, \mathcal{F}_t] \,\big|\, \mathcal{F}_T\big] = \beta_{t|t} + J_t\big(\beta_{t+1|T} - \beta_{t+1|t}\big)$$

and

$$P_{t|T} = P_{t|t} + J_t\big(P_{t+1|T} - P_{t+1|t}\big)J_t'.$$

Summing up

In summary, starting from $\beta_{T|T}$ and $P_{T|T}$, the smoothed factors and the smoothed factor variances are given, for $t = T-1, \dots, 1$, as

$$\beta_{t|T} = \beta_{t|t} + J_t(\beta_{t+1|T} - \beta_{t+1|t}), \qquad P_{t|T} = P_{t|t} + J_t(P_{t+1|T} - P_{t+1|t})J_t', \qquad J_t = P_{t|t}F'\,P_{t+1|t}^{-1}.$$
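The smoother can be sketched by storing the predicted and filtered moments on the forward pass, then running the backward recursion. The function name `kalman_smoother` and the example values are mine:

```python
import numpy as np

def kalman_smoother(y, H, F, mu, Q, R, b0, P0):
    """Forward Kalman filter, then backward recursion:
    b_{t|T} = b_{t|t} + J_t (b_{t+1|T} - b_{t+1|t}),  J_t = P_{t|t} F' P_{t+1|t}^{-1}
    P_{t|T} = P_{t|t} + J_t (P_{t+1|T} - P_{t+1|t}) J_t'."""
    T, k = y.shape[0], F.shape[0]
    b_pred = np.zeros((T, k)); P_pred = np.zeros((T, k, k))
    b_filt = np.zeros((T, k)); P_filt = np.zeros((T, k, k))
    b, P = b0, P0
    for t in range(T):                       # forward pass (filter)
        bp = mu + F @ b
        Pp = F @ P @ F.T + Q
        f = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(f)
        b = bp + K @ (y[t] - H @ bp)
        P = Pp - K @ H @ Pp
        b_pred[t], P_pred[t], b_filt[t], P_filt[t] = bp, Pp, b, P
    b_sm, P_sm = b_filt.copy(), P_filt.copy()
    for t in range(T - 2, -1, -1):           # backward pass (smoother)
        J = P_filt[t] @ F.T @ np.linalg.inv(P_pred[t + 1])
        b_sm[t] = b_filt[t] + J @ (b_sm[t + 1] - b_pred[t + 1])
        P_sm[t] = P_filt[t] + J @ (P_sm[t + 1] - P_pred[t + 1]) @ J.T
    return b_sm, P_sm

# Hypothetical example data from a 2-state model:
rng = np.random.default_rng(3)
H_ = np.array([[1.0, 0.0]]); F_ = np.array([[0.8, 0.1], [0.0, 0.6]])
mu_ = np.zeros(2); Q_ = np.diag([0.4, 0.2]); R_ = np.array([[0.3]])
b = np.zeros(2); ys = []
for _ in range(100):
    b = mu_ + F_ @ b + rng.multivariate_normal(np.zeros(2), Q_)
    ys.append(H_ @ b + rng.multivariate_normal(np.zeros(1), R_))
y = np.array(ys)
b_sm, P_sm = kalman_smoother(y, H_, F_, mu_, Q_, R_, np.zeros(2), np.eye(2))
```

Note that the smoother needs both $P_{t|t}$ and $P_{t+1|t}$ at every date, which is why the forward pass stores them rather than keeping only the last values.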

Estimation Methods

For the estimation of the state-space model, we can exploit either of two methods: the frequentist method or the Bayesian method. Here, we introduce the EM algorithm for the former, and a posterior simulation (Gibbs sampling) algorithm for the latter.

Estimation using EM algorithm

For a detailed derivation of the EM algorithm, please refer to EM Algorithm > EM Algorithm for SSM. Using the Gaussian quasi-log likelihood obtained in the Kalman filter, we can derive the parameter estimates at iteration $j+1$ given the iteration-$j$ estimates $\theta^{(j)}$. Note that we only consider the case where $y_t$ is fully observed for every $t$.

  1. Initial value: use Factor Models > Principle Component Analysis for the initial values of the parameters $\theta^{(0)}$, and let $j = 0$ for the EM iteration.
  2. E-step: calculate the filtered and smoothed values under $\theta^{(j)}$
    • filtered value (for $t = 1, \dots, T$): run the Kalman filter to obtain $\beta_{t|t}$ and $P_{t|t}$.
    • smoothed value (for $t = T-1, \dots, 1$): run the Kalman smoother to obtain $\beta_{t|T}$ and $P_{t|T}$.
  3. M-step: derive the maximizer of the expectation
    • calculate $Q(\theta \mid \theta^{(j)}) = E\big[\log f(y_{1:T}, \beta_{1:T} \mid \theta) \,\big|\, \mathcal{F}_T; \theta^{(j)}\big]$.
    • derive the maximizing parameters $\theta^{(j+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(j)})$.
  4. Convergence check: if the algorithm has converged, stop; if not, repeat the EM steps
    • calculate the quasi log-likelihood $\log L(\theta^{(j+1)})$ via the Kalman filter.
    • check for convergence: if $\big|\log L(\theta^{(j+1)}) - \log L(\theta^{(j)})\big| < \varepsilon$ for a small tolerance $\varepsilon$, stop the iteration; otherwise set $j \leftarrow j+1$ and return to step 2.
  5. Derive the final Kalman process, and use the smoothed values $\beta_{t|T}$ as the estimates of the states.

Estimation using Bayesian Method

  1. Prior distribution: let the model be the SSM above with parameters $\theta = (H, R, \mu, F, Q)$, and set conjugate priors: Normal priors on the coefficients $(H, \mu, F)$ and inverse-Wishart priors on the covariance matrices, $R \sim IW(\nu_R, \Psi_R)$ and $Q \sim IW(\nu_Q, \Psi_Q)$.
  2. Derive the conditional posterior distributions of $R$ and $Q$ using the IW-IW update
    • $R \mid y_{1:T}, \beta_{1:T} \sim IW\big(\nu_R + T,\ \Psi_R + \sum_{t=1}^{T}(y_t - H\beta_t)(y_t - H\beta_t)'\big)$
    • $Q \mid \beta_{1:T} \sim IW\big(\nu_Q + T,\ \Psi_Q + \sum_{t=1}^{T}(\beta_t - \mu - F\beta_{t-1})(\beta_t - \mu - F\beta_{t-1})'\big)$
  3. Derive the conditional posterior distributions of $H$ and $(\mu, F)$ using the Normal-normal update
    • given $\beta_{1:T}$ and $R$, the measurement equation is a linear regression of $y_t$ on $\beta_t$, so $H$ has the standard Normal posterior update.
    • given $\beta_{1:T}$ and $Q$, the transition equation is a linear regression of $\beta_t$ on $(1, \beta_{t-1})$, so $(\mu, F)$ has the standard Normal posterior update.
  4. Draw $\beta_{1:T}$ using the Kalman filter (forward-filtering backward-sampling)
    • filtering step (for $t = 1, \dots, T$): run the Kalman filter to obtain $\beta_{t|t}$ and $P_{t|t}$.
    • backward recursion (for $t = T-1, \dots, 1$): draw $\beta_T \sim N(\beta_{T|T}, P_{T|T})$, then $\beta_t \sim N\big(\beta_{t|t} + J_t(\beta_{t+1} - \beta_{t+1|t}),\ P_{t|t} - J_t P_{t+1|t} J_t'\big)$ with $J_t = P_{t|t}F'\,P_{t+1|t}^{-1}$.
  5. Repeat steps 2-4 for the simulation size (number of MCMC draws), discarding an initial burn-in.
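Step 4 (forward-filtering backward-sampling) can be sketched as follows, assuming the parameters are held fixed at their current Gibbs draw; the function name `ffbs` and all parameter values are illustrative:

```python
import numpy as np

def ffbs(y, H, F, mu, Q, R, b0, P0, rng):
    """One joint draw of b_{1:T}: forward Kalman filter, then backward sampling."""
    T, k = y.shape[0], F.shape[0]
    b_pred = np.zeros((T, k)); P_pred = np.zeros((T, k, k))
    b_filt = np.zeros((T, k)); P_filt = np.zeros((T, k, k))
    b, P = b0, P0
    for t in range(T):                        # forward Kalman filter
        bp = mu + F @ b
        Pp = F @ P @ F.T + Q
        f = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(f)
        b = bp + K @ (y[t] - H @ bp)
        P = Pp - K @ H @ Pp
        b_pred[t], P_pred[t], b_filt[t], P_filt[t] = bp, Pp, b, P
    draw = np.zeros((T, k))
    draw[-1] = rng.multivariate_normal(b_filt[-1], P_filt[-1])  # b_T ~ N(b_{T|T}, P_{T|T})
    for t in range(T - 2, -1, -1):            # backward sampling
        J = P_filt[t] @ F.T @ np.linalg.inv(P_pred[t + 1])
        m = b_filt[t] + J @ (draw[t + 1] - b_pred[t + 1])
        V = P_filt[t] - J @ P_pred[t + 1] @ J.T
        V = (V + V.T) / 2                     # symmetrize against rounding error
        draw[t] = rng.multivariate_normal(m, V)
    return draw

# Hypothetical example data from a 2-state model:
rng = np.random.default_rng(4)
H_ = np.array([[1.0, 0.0]]); F_ = np.array([[0.8, 0.1], [0.0, 0.6]])
mu_ = np.zeros(2); Q_ = np.diag([0.4, 0.2]); R_ = np.array([[0.3]])
b = np.zeros(2); ys = []
for _ in range(80):
    b = mu_ + F_ @ b + rng.multivariate_normal(np.zeros(2), Q_)
    ys.append(H_ @ b + rng.multivariate_normal(np.zeros(1), R_))
y = np.array(ys)
states = ffbs(y, H_, F_, mu_, Q_, R_, np.zeros(2), np.eye(2), rng)
```

Inside a Gibbs sampler, one such state draw is produced per iteration, after which the parameters are re-drawn from their conditional posteriors in steps 2 and 3.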