Classical Tests in MLE

#econometrics #economics

Oh, Hyunzi. (email: wisdom302@naver.com)
Korea University, Graduate School of Economics.
2024 Spring, instructed by prof. Kim, Dukpa.


Main References

  • Kim, Dukpa. (2024). "Econometric Analysis" (2024 Spring) ECON 518, Department of Economics, Korea University.
  • Davidson and MacKinnon. (2021). "Econometric Theory and Methods", Oxford University Press, New York.

Classical Tests

Restricted MLE

Suppose that the null hypothesis of interest is expressed as
$$H_0: g(\theta_0) = 0,$$
where $g:\mathbb{R}^k \to \mathbb{R}^r$ is a vector function, and $r$ denotes the number of restrictions we are imposing. Note that we can alternatively express $H_0$ as
$$\theta_0 = h(\gamma_0),$$
where $h:\mathbb{R}^{k-r} \to \mathbb{R}^k$ is a vector function with a vector argument $\gamma \in \mathbb{R}^{k-r}$; here $k-r$ is the number of the free parameters left unrestricted under the $r$ restrictions.

For example, consider $\theta = (\theta_1, \theta_2)'$ where $k=2$, and we are imposing the restriction of $\theta_1 = \theta_2$, which is $g(\theta) = \theta_1 - \theta_2 = 0$ with $r=1$; then we have
$$\theta = h(\gamma) = (\gamma, \gamma)', \qquad \gamma \in \mathbb{R}.$$

For the rest of the section, we will follow the notations of the unrestricted MLE as
$$\hat\theta = \arg\max_{\theta}\ \ell_n(\theta),$$
and the restricted MLE as
$$\tilde\theta = \arg\max_{\theta}\ \ell_n(\theta) \quad \text{subject to } g(\theta) = 0.$$

Here, we use the Lagrangian
$$\mathcal{L}(\theta, \lambda) = \ell_n(\theta) - \lambda' g(\theta)$$
to derive $\tilde\theta$. By the F.O.C., and by denoting $G(\theta) = \partial g(\theta)/\partial\theta'$, an $r \times k$ matrix, we have
$$s_n(\tilde\theta) - G(\tilde\theta)'\tilde\lambda = 0, \qquad g(\tilde\theta) = 0,$$
where $s_n(\theta) = \partial \ell_n(\theta)/\partial\theta$ denotes the score.

Alternatively, we can derive $\tilde\theta$ using $h$. Define
$$\tilde\gamma = \arg\max_{\gamma}\ \ell_n(h(\gamma)),$$
and then solve the F.O.C.
$$H(\tilde\gamma)'\, s_n(h(\tilde\gamma)) = 0,$$
which gives $\tilde\theta = h(\tilde\gamma)$. Later on, we will use the first derivative of $h$, denoted as
$$H(\gamma) = \frac{\partial h(\gamma)}{\partial \gamma'},$$
which is a $k \times (k-r)$ matrix.
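As a concrete illustration of the two equivalent routes to the restricted MLE, the following sketch (a toy example of my own, not from the lecture) estimates two Poisson rates under the restriction $\lambda_1 = \lambda_2$, i.e. $g(\theta) = \theta_1 - \theta_2$ and $h(\gamma) = (\gamma, \gamma)'$. The restricted MLE is the pooled sample mean, and the reparameterized F.O.C. $H'\,s_n(h(\tilde\gamma)) = 0$ holds there.

```python
import numpy as np

# Toy example (not from the notes): two Poisson samples with the
# restriction lambda1 = lambda2, i.e. g(theta) = theta1 - theta2 = 0
# and theta = h(gamma) = (gamma, gamma)'.
rng = np.random.default_rng(0)
y1 = rng.poisson(2.0, 200)
y2 = rng.poisson(2.0, 300)

# Unrestricted MLE: the two group means
theta_hat = np.array([y1.mean(), y2.mean()])

# Restricted MLE via the reparameterization: maximizing l(h(gamma))
# over the single free parameter gamma gives the pooled mean.
gamma_tilde = np.concatenate([y1, y2]).mean()
theta_tilde = np.array([gamma_tilde, gamma_tilde])

def score(lam1, lam2):
    # Poisson score: d/d lam_j of sum_i (y_ij log lam_j - lam_j)
    return np.array([y1.sum() / lam1 - len(y1),
                     y2.sum() / lam2 - len(y2)])

H = np.array([[1.0], [1.0]])          # H(gamma) = dh/dgamma', 2 x 1
foc = (H.T @ score(*theta_tilde))[0]  # reparameterized F.O.C., ~ 0
print(theta_hat, theta_tilde, foc)
```

The restriction holds exactly at $\tilde\theta$ by construction, while the unrestricted group means generally differ.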

Wald Test

Proposition (Wald test).

Using the obtained unrestricted MLE $\hat\theta$, the Wald test is defined as
$$W = n\, g(\hat\theta)' \big[ G(\hat\theta)\, \mathcal{I}(\hat\theta)^{-1} G(\hat\theta)' \big]^{-1} g(\hat\theta);$$
then under $H_0$, we have $W \xrightarrow{d} \chi^2_r$, while under $H_1$, $W \to \infty$.

Proof. First, let $\bar\theta$ be some vector between $\hat\theta$ and $\theta_0$. Then by Taylor's theorem (or the mean value theorem), we have
$$g(\hat\theta) = g(\theta_0) + G(\bar\theta)(\hat\theta - \theta_0) = G(\bar\theta)(\hat\theta - \theta_0)$$
under the null hypothesis of $g(\theta_0) = 0$, and given Maximum Likelihood Estimation > Theorem 7 (consistency of MLE), we have $\bar\theta \xrightarrow{p} \theta_0$, since $\hat\theta \xrightarrow{p} \theta_0$ and $\bar\theta$ lies between the two of them.

Then, we have
$$\sqrt{n}\, g(\hat\theta) = G(\bar\theta)\, \sqrt{n}(\hat\theta - \theta_0).$$
Since $\sqrt{n}(\hat\theta - \theta_0) \xrightarrow{d} N\big(0, \mathcal{I}(\theta_0)^{-1}\big)$ under $H_0$, we have
$$\sqrt{n}\, g(\hat\theta) \xrightarrow{d} N\big(0,\; G(\theta_0)\,\mathcal{I}(\theta_0)^{-1} G(\theta_0)'\big).$$
Therefore, by Normal Distribution Theory > Lemma 9 (multivariate normal and chi-squared distribution), we have
$$W = n\, g(\hat\theta)' \big[ G(\hat\theta)\,\mathcal{I}(\hat\theta)^{-1} G(\hat\theta)' \big]^{-1} g(\hat\theta) \xrightarrow{d} \chi^2_r,$$
which completes the proof.
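The Wald statistic uses only the unrestricted estimate. The following minimal numerical sketch (a toy two-sample Poisson problem of my own, not from the notes) plugs the estimated variance of each sample mean, $\hat\lambda_j/n_j$, into the quadratic form:

```python
import numpy as np

# Toy illustration (not from the notes): Wald test of H0: lambda1 = lambda2
# for two independent Poisson samples, using only the unrestricted MLE.
rng = np.random.default_rng(0)
y1 = rng.poisson(2.0, 200)
y2 = rng.poisson(2.0, 300)

lam1, lam2 = y1.mean(), y2.mean()            # unrestricted MLEs
V_hat = np.diag([lam1 / len(y1),             # Var(sample mean) = lambda/n
                 lam2 / len(y2)])            # for a Poisson sample

g = np.array([lam1 - lam2])                  # g(theta_hat)
G = np.array([[1.0, -1.0]])                  # G = dg/dtheta'

# W = g' [G V G']^{-1} g ~ chi2(1) under H0 (r = 1 restriction)
W = float(g @ np.linalg.inv(G @ V_hat @ G.T) @ g)
reject = W > 3.841                           # chi2(1) 5% critical value
print(W, reject)
```

Since the data are simulated under the null, the test should reject only about 5% of the time across seeds.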

Lagrange Multiplier Test

Proposition (Lagrange multiplier test).

Given the Lagrangian F.O.C.
$$s_n(\tilde\theta) = G(\tilde\theta)'\tilde\lambda, \qquad g(\tilde\theta) = 0,$$
the Lagrange multiplier (LM; score) test is defined as
$$LM = \frac{1}{n}\, s_n(\tilde\theta)'\, \mathcal{I}(\tilde\theta)^{-1}\, s_n(\tilde\theta),$$
which measures the magnitude of the score vector $s_n(\tilde\theta)$ (equivalently, of $\tilde\lambda$). Under $H_0$, $LM \xrightarrow{d} \chi^2_r$, and under $H_1$, $LM \to \infty$.

Note that under $H_0$, i.e. $g(\theta_0) = 0$, the Lagrange multiplier $\tilde\lambda$ should be close to zero given Maximum Likelihood Estimation > Theorem 7 (consistency of MLE); equivalently, the score function should be close to zero at $\tilde\theta$. In other words, if the restriction is non-binding, then the value of the LM statistic becomes close to zero.

Proof. Using a Taylor expansion of the Lagrangian equations, there exists some $\bar\theta$ between $\tilde\theta$ and $\theta_0$ that satisfies
$$s_n(\tilde\theta) = s_n(\theta_0) + \frac{\partial^2 \ell_n(\bar\theta)}{\partial\theta\,\partial\theta'}(\tilde\theta - \theta_0), \qquad g(\tilde\theta) = g(\theta_0) + G(\bar\theta)(\tilde\theta - \theta_0).$$
Then the first equation can be written as
$$\frac{1}{\sqrt n}\, G(\tilde\theta)'\tilde\lambda = \frac{1}{\sqrt n}\, s_n(\theta_0) - \mathcal{I}(\theta_0)\,\sqrt n(\tilde\theta - \theta_0) + o_p(1),$$
and the second equation as
$$0 = G(\theta_0)\,\sqrt n(\tilde\theta - \theta_0) + o_p(1),$$
where $g(\theta_0) = 0$ under the null hypothesis.

In matrix form, we have
$$\begin{pmatrix} \mathcal{I}(\theta_0) & G(\theta_0)' \\ G(\theta_0) & 0 \end{pmatrix} \begin{pmatrix} \sqrt n(\tilde\theta - \theta_0) \\ \tfrac{1}{\sqrt n}\tilde\lambda \end{pmatrix} = \begin{pmatrix} \tfrac{1}{\sqrt n}\, s_n(\theta_0) \\ 0 \end{pmatrix} + o_p(1).$$
Then, using the inverse of a partitioned matrix, we have
$$\frac{1}{\sqrt n}\tilde\lambda = \big[G_0\,\mathcal I_0^{-1} G_0'\big]^{-1} G_0\, \mathcal I_0^{-1}\, \frac{1}{\sqrt n}\, s_n(\theta_0) + o_p(1) \xrightarrow{d} N\Big(0,\ \big[G_0\, \mathcal I_0^{-1} G_0'\big]^{-1}\Big),$$
where the limit $\tfrac{1}{\sqrt n} s_n(\theta_0) \xrightarrow{d} N(0, \mathcal I_0)$ comes from Maximum Likelihood Estimation > Theorem 8 (asymptotic normality of MLE). Therefore
$$LM = \frac{1}{n}\,\tilde\lambda'\, G(\tilde\theta)\,\mathcal I(\tilde\theta)^{-1} G(\tilde\theta)'\,\tilde\lambda \xrightarrow{d} \chi^2_r.$$
Note that $o_p(1)$ stands for the terms that are asymptotically negligible (see Econometric Analysis/Asymptotics > Definition 18 (Op and op)).
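In contrast to the Wald test, the LM statistic needs only the restricted estimate. A minimal sketch on a toy two-sample Poisson problem (my own illustration, not from the notes), where the restricted MLE is the pooled mean:

```python
import numpy as np

# Toy illustration (not from the notes): LM test of H0: lambda1 = lambda2,
# computed entirely from the restricted (pooled) estimate.
rng = np.random.default_rng(0)
y1 = rng.poisson(2.0, 200)
y2 = rng.poisson(2.0, 300)
n1, n2 = len(y1), len(y2)

lam0 = np.concatenate([y1, y2]).mean()       # restricted MLE (pooled mean)

# Score and Fisher information, both evaluated at the restricted estimate
s = np.array([y1.sum() / lam0 - n1,
              y2.sum() / lam0 - n2])
I = np.diag([n1 / lam0, n2 / lam0])          # E[-Hessian] for Poisson

LM = float(s @ np.linalg.inv(I) @ s)         # ~ chi2(1) under H0
print(LM)
```

This is why the LM test is convenient when the restricted model is much easier to estimate than the unrestricted one.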

Likelihood Ratio Test

Definition (likelihood ratio test).

The Likelihood Ratio (LR) test is defined as
$$LR = 2\big[\ell_n(\hat\theta) - \ell_n(\tilde\theta)\big].$$
Under $H_0$, we have $LR \xrightarrow{d} \chi^2_r$, while under $H_1$, we have $LR \to \infty$.

The likelihood ratio test checks whether the likelihood significantly decreases under the imposed restriction, which would indicate that the null hypothesis is wrong.

Proof. By the mean value theorem, there exists some $\bar\theta$ between $\hat\theta$ and $\tilde\theta$ such that
$$\ell_n(\tilde\theta) = \ell_n(\hat\theta) + s_n(\hat\theta)'(\tilde\theta - \hat\theta) + \frac12 (\tilde\theta - \hat\theta)'\, \frac{\partial^2 \ell_n(\bar\theta)}{\partial\theta\,\partial\theta'}\, (\tilde\theta - \hat\theta),$$
where $s_n(\hat\theta) = 0$ by the definition of MLE. Thus we have
$$LR = 2\big[\ell_n(\hat\theta) - \ell_n(\tilde\theta)\big] = \sqrt n(\hat\theta - \tilde\theta)' \Big(-\frac1n \frac{\partial^2 \ell_n(\bar\theta)}{\partial\theta\,\partial\theta'}\Big) \sqrt n(\hat\theta - \tilde\theta) = \sqrt n(\hat\theta - \tilde\theta)'\, \mathcal I_0\, \sqrt n(\hat\theta - \tilde\theta) + o_p(1).$$
From the proof of Maximum Likelihood Estimation > Theorem 8 (asymptotic normality of MLE), we have obtained
$$\sqrt n(\hat\theta - \theta_0) = \mathcal I_0^{-1}\, \frac{1}{\sqrt n}\, s_n(\theta_0) + o_p(1).$$
Similarly, from the proof of the LM test, under $H_0$ we have
$$\sqrt n(\tilde\theta - \theta_0) = \mathcal I_0^{-1}\Big(\frac{1}{\sqrt n}\, s_n(\theta_0) - G_0'\, \frac{1}{\sqrt n}\tilde\lambda\Big) + o_p(1), \qquad \frac{1}{\sqrt n}\tilde\lambda = \big[G_0\, \mathcal I_0^{-1} G_0'\big]^{-1} G_0\, \mathcal I_0^{-1}\, \frac{1}{\sqrt n}\, s_n(\theta_0) + o_p(1).$$
Thus, subtracting the two expansions,
$$\sqrt n(\hat\theta - \tilde\theta) = \mathcal I_0^{-1} G_0' \big[G_0\, \mathcal I_0^{-1} G_0'\big]^{-1} G_0\, \mathcal I_0^{-1}\, \frac{1}{\sqrt n}\, s_n(\theta_0) + o_p(1).$$
Finally, letting
$$z = \mathcal I_0^{-1/2}\, \frac{1}{\sqrt n}\, s_n(\theta_0) \xrightarrow{d} N(0, I_k) \quad \text{and} \quad P = \mathcal I_0^{-1/2} G_0' \big[G_0\, \mathcal I_0^{-1} G_0'\big]^{-1} G_0\, \mathcal I_0^{-1/2},$$
where $P$ is symmetric and idempotent with $\operatorname{rank}(P) = r$ by Maximum Likelihood Estimation > Theorem 8 (asymptotic normality of MLE), we have
$$LR = z' P z + o_p(1) \xrightarrow{d} \chi^2_r$$
by Normal Distribution Theory > Lemma 9 (multivariate normal and chi-squared distribution), which completes the proof.
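Unlike the Wald and LM tests, the LR statistic needs both the restricted and unrestricted fits, but no derivatives. A sketch on a toy two-sample Poisson problem (my own illustration, not from the notes):

```python
import numpy as np

# Toy illustration (not from the notes): LR test of H0: lambda1 = lambda2.
rng = np.random.default_rng(0)
y1 = rng.poisson(2.0, 200)
y2 = rng.poisson(2.0, 300)

def loglik(lam1, lam2):
    # Poisson log-likelihood up to a constant (the log y! terms
    # cancel in the LR difference)
    return (y1.sum() * np.log(lam1) - len(y1) * lam1
            + y2.sum() * np.log(lam2) - len(y2) * lam2)

lam0 = np.concatenate([y1, y2]).mean()          # restricted MLE
LR = 2 * (loglik(y1.mean(), y2.mean()) - loglik(lam0, lam0))
print(LR)                                       # ~ chi2(1) under H0
```

$LR \ge 0$ always holds, since the unrestricted maximum cannot fall below the restricted one.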

Applications on Linear Model

Note that for the unrestricted MLE of the linear model $y = X\beta + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2 I_n)$, it has been shown in Maximum Likelihood Estimation > Example 10 (MLE of normal distribution) that
$$\hat\beta = (X'X)^{-1}X'y \quad \text{and} \quad \hat\sigma^2 = \frac{\hat e'\hat e}{n}, \qquad \hat e = y - X\hat\beta,$$
by maximizing the log-likelihood function of
$$\ell_n(\beta, \sigma^2) = -\frac n2 \log(2\pi) - \frac n2 \log \sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta).$$

Restricted MLE for Linear Model

Consider a set of linear restrictions given by $R\beta = q$, where $R$ is an $r \times k$ matrix of rank $r$. Then the restricted MLE can be obtained from the Lagrangian equation
$$\mathcal L = \ell_n(\beta, \sigma^2) - \lambda'(R\beta - q).$$
Note the F.O.C.:
$$\frac{1}{\tilde\sigma^2}\, X'(y - X\tilde\beta) - R'\tilde\lambda = 0, \qquad R\tilde\beta = q, \qquad \tilde\sigma^2 = \frac{(y - X\tilde\beta)'(y - X\tilde\beta)}{n}.$$
Thus we have, for the first equation,
$$\tilde\beta = \hat\beta - \tilde\sigma^2 (X'X)^{-1} R' \tilde\lambda.$$
Also, by multiplying $R(X'X)^{-1}$ on both sides of the first equation, we get
$$R\hat\beta - q = \tilde\sigma^2\, R(X'X)^{-1}R'\, \tilde\lambda \quad\Longrightarrow\quad \tilde\sigma^2 \tilde\lambda = \big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q),$$
where the second equation holds by the restriction $R\tilde\beta = q$. Then, we have
$$\tilde\beta = \hat\beta - (X'X)^{-1}R'\big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q).$$
Finally, since $\tilde e = y - X\tilde\beta$, the third F.O.C. gives $\tilde\sigma^2 = \tilde e'\tilde e / n$.
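The closed-form restricted estimator above can be checked numerically. A minimal sketch on simulated data (the design, coefficients, and restriction here are arbitrary choices of mine):

```python
import numpy as np

# Sanity check of the restricted-OLS/MLE formula on simulated data
# (design, coefficients, and restriction are arbitrary choices).
rng = np.random.default_rng(1)
n, k = 100, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(size=n)

R = np.array([[0.0, 1.0, -1.0]])    # one restriction (r = 1): beta2 = beta3
q = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y           # unrestricted MLE / OLS

# b_tilde = b_hat - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (R b_hat - q)
A = R @ XtX_inv @ R.T
b_tilde = b_hat - XtX_inv @ R.T @ np.linalg.solve(A, R @ b_hat - q)

s2_hat = np.sum((y - X @ b_hat) ** 2) / n
s2_tilde = np.sum((y - X @ b_tilde) ** 2) / n
print(R @ b_tilde, s2_hat, s2_tilde)  # restriction holds; s2_tilde >= s2_hat
```

The restriction holds exactly at $\tilde\beta$, and the restricted residual variance can never be smaller than the unrestricted one.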

Additionally, note that the maximized log-likelihood function is given by
$$\ell_n(\tilde\beta, \tilde\sigma^2) = -\frac n2 \log(2\pi) - \frac n2 \log\tilde\sigma^2 - \frac n2,$$
and analogously $\ell_n(\hat\beta, \hat\sigma^2) = -\frac n2 \log(2\pi) - \frac n2 \log\hat\sigma^2 - \frac n2$.

Three Tests in Linear Model

Given the solutions $(\hat\beta, \hat\sigma^2)$ and $(\tilde\beta, \tilde\sigma^2)$, and by letting $P_X = X(X'X)^{-1}X'$, the $n \times n$ projection matrix onto the column space of $X$, the Wald, LM, and LR tests are given as follows:

Wald test in Linear Model

Since $\hat\beta \sim N\big(\beta_0, \sigma^2 (X'X)^{-1}\big)$ implies $R\hat\beta - q \sim N\big(0, \sigma^2 R(X'X)^{-1}R'\big)$ under $H_0$, we have
$$W = \frac{(R\hat\beta - q)'\big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q)}{\hat\sigma^2} \xrightarrow{d} \chi^2_r.$$

LM test in Linear Model

Since $s_n(\tilde\beta) = X'\tilde e/\tilde\sigma^2$ and $X'\tilde e = R'\big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q)$, we have
$$LM = \frac{\tilde e'\, X(X'X)^{-1}X'\, \tilde e}{\tilde\sigma^2} = \frac{(R\hat\beta - q)'\big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q)}{\tilde\sigma^2} \xrightarrow{d} \chi^2_r.$$
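The two expressions for LM (the score form in the restricted residuals and the form in $R\hat\beta - q$) can be verified to coincide numerically; a sketch on an arbitrary simulated design of my own:

```python
import numpy as np

# Sanity check (not from the notes): the score form and the (R b_hat - q)
# form of the LM statistic agree in the normal linear model.
rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(size=n)
R, q = np.array([[0.0, 1.0, -1.0]]), np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
b_tilde = b_hat - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T,
                                                  R @ b_hat - q)
e_tilde = y - X @ b_tilde
s2_tilde = e_tilde @ e_tilde / n

# Score form: e~' X (X'X)^{-1} X' e~ / s2~
LM_score = e_tilde @ X @ XtX_inv @ X.T @ e_tilde / s2_tilde
# Restriction form: (R b_hat - q)' [R (X'X)^{-1} R']^{-1} (R b_hat - q) / s2~
d = R @ b_hat - q
LM_restr = d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / s2_tilde
print(LM_score, LM_restr)
```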

LR test in Linear Model

$$LR = 2\big[\ell_n(\hat\beta, \hat\sigma^2) - \ell_n(\tilde\beta, \tilde\sigma^2)\big] = n \log\frac{\tilde\sigma^2}{\hat\sigma^2} \ge 0,$$
where the inequality holds since $\tilde e'\tilde e \ge \hat e'\hat e$, i.e. the restricted residual sum of squares is at least as large as the unrestricted one.

Note that we have
$$W = \frac{n(\tilde\sigma^2 - \hat\sigma^2)}{\hat\sigma^2}, \qquad LM = \frac{n(\tilde\sigma^2 - \hat\sigma^2)}{\tilde\sigma^2}, \qquad LR = n\log\frac{\tilde\sigma^2}{\hat\sigma^2},$$
using $(R\hat\beta - q)'\big[R(X'X)^{-1}R'\big]^{-1}(R\hat\beta - q) = \tilde e'\tilde e - \hat e'\hat e = n(\tilde\sigma^2 - \hat\sigma^2)$.

Remark (LM, LR, Wald test).

We have
$$W \ge LR \ge LM,$$
since by letting $x = \tilde\sigma^2/\hat\sigma^2 \ge 1$, we have
$$W = n(x - 1), \qquad LR = n\log x, \qquad LM = n\Big(1 - \frac1x\Big),$$
and $x - 1 \ge \log x \ge 1 - \tfrac1x$ for all $x \ge 1$.
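The ordering can also be confirmed numerically. A small sketch (simulated data with an arbitrary design of mine) computes all three statistics from $\hat\sigma^2$ and $\tilde\sigma^2$:

```python
import numpy as np

# Numerical check (toy simulated data) of W >= LR >= LM in the linear model,
# using W = n(x-1), LR = n log x, LM = n(1 - 1/x), x = s2_tilde / s2_hat.
rng = np.random.default_rng(2)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.5, 1.0, 1.0]) + rng.normal(size=n)
R, q = np.array([[0.0, 1.0, -1.0]]), np.array([0.0])  # H0: beta2 = beta3

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
b_tilde = b_hat - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T,
                                                  R @ b_hat - q)
s2_hat = np.sum((y - X @ b_hat) ** 2) / n
s2_tilde = np.sum((y - X @ b_tilde) ** 2) / n

x = s2_tilde / s2_hat                 # x >= 1: restricted SSR is larger
W, LR, LM = n * (x - 1), n * np.log(x), n * (1 - 1 / x)
print(W, LR, LM)                      # W >= LR >= LM in every sample
```

The inequality is exact in every finite sample, not just asymptotically, which is why the three tests can give conflicting decisions near a critical value.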