Asymptotic Results in Basic Linear Model
Oh, Hyunzi. (email: wisdom302@naver.com)
Korea University, Graduate School of Economics.
2024 Spring, instructed by prof. Kim, Dukpa.
Main References
The model remains the same as before:
$$y_i = x_i'\beta + \varepsilon_i, \qquad i = 1, \dots, n.$$
In matrix notation,
$$y = X\beta + \varepsilon,$$
where $y$ and $\varepsilon$ are $n \times 1$ vectors, $X$ is the $n \times k$ regressor matrix, and $\beta$ is the $k \times 1$ coefficient vector.
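To make the setup concrete, here is a minimal numerical sketch (not from the original notes; the sample size, coefficients, and error distribution are arbitrary choices) that simulates data from this model and computes the least-squares estimator $\hat{\beta} = (X'X)^{-1}X'y$ directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example values: n observations, k = 2 regressors (intercept + one slope).
n = 500
beta_true = np.array([1.0, 2.0])

X = np.column_stack([np.ones(n), rng.normal(size=n)])  # n x k regressor matrix
eps = rng.normal(scale=1.5, size=n)                    # mean-zero errors
y = X @ beta_true + eps                                # y = X beta + eps

# Least-squares estimator: beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print("beta_hat:", beta_hat)
```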
For the rest of the section, we replace the original classical assumptions with much weaker ones, allowing for far more flexible applications.
| Unbiased | Gauss Markov Thm | Normality | Consistency | Asymptotic Normality |
|---|---|---|---|---|
| A1) | A1) | A1) | A1*) | A1**) |
| A2) | A2) | A2) | A2) | A2) |
| A3) | A3) | A3) | A3) | A3) |
| | A4&5) | A4&5) | A4&5*) | |
| | | A-N) | | |
| | | | A6) | |
| | | | A7) | |
| | | | A8) | |
Here, for consistency, the classical assumptions are relaxed: A1 is weakened to A1*, A4&5 is weakened to A4&5*, and the additional conditions A6–A8 are imposed,
while A1** is a stronger version that implies A1, A4, and A5 (see ^b315f9Remark 7 (A1** implies A1, A4, A5, A6 and A8)).
Note that
Assumptions 6–8 follow from some form of Econometric Analysis/Asymptotics > ^fc3df4Econometric Analysis/Asymptotics > Theorem 1 (weak law of large numbers), in a more general sense.
Note that from the definition
A sequence of estimators $\{\hat{\theta}_n\}$ is said to be consistent for $\theta$ if $\hat{\theta}_n \xrightarrow{p} \theta$ as $n \to \infty$.
Under A1*, A2, A3, A4&5*, A6, and A7, we have
$$\hat{\beta} \xrightarrow{p} \beta,$$
i.e. the least-squares estimator is consistent.
Proof. From the least-squares estimator,
$$\hat{\beta} = (X'X)^{-1}X'y = \beta + \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i \varepsilon_i\right),$$
where the first factor converges in probability to a nonsingular limit by A6 and the second factor converges in probability to zero by A7. Hence $\hat{\beta} \xrightarrow{p} \beta$ by the Slutsky theorem.
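As a quick numerical check of this consistency result (a sketch under assumed i.i.d. regressors and t-distributed errors; none of the specific values come from the notes), the estimation error typically shrinks as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([1.0, 2.0])

def ols_estimate(n):
    """Draw one sample of size n from the linear model and return the OLS estimate."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta_true + rng.standard_t(df=5, size=n)   # non-normal errors are allowed
    return np.linalg.solve(X.T @ X, X.T @ y)

# Convergence is in probability, so the error typically (not surely) decreases.
for n in [50, 500, 5_000, 50_000]:
    err = np.linalg.norm(ols_estimate(n) - beta_true)
    print(f"n = {n:6d}   ||beta_hat - beta|| = {err:.4f}")
```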
Under A1*, A2, A3, A4&5*, A6, A7, and A8, we have
$$s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{\varepsilon}_i^2 \xrightarrow{p} \sigma^2,$$
i.e. the error-variance estimator is consistent.
Proof. From the definition of $s^2$ and $\hat{\varepsilon}_i = \varepsilon_i - x_i'(\hat{\beta}-\beta)$,
$$s^2 = \frac{n}{n-k}\left[\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i^2 - 2(\hat{\beta}-\beta)'\frac{1}{n}\sum_{i=1}^{n}x_i\varepsilon_i + (\hat{\beta}-\beta)'\left(\frac{1}{n}\sum_{i=1}^{n}x_i x_i'\right)(\hat{\beta}-\beta)\right].$$
Therefore, we have $\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i^2 \xrightarrow{p} \sigma^2$ by A8, while the remaining terms vanish in probability because $\hat{\beta}-\beta \xrightarrow{p} 0$; since $n/(n-k) \to 1$, it follows that $s^2 \xrightarrow{p} \sigma^2$.
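Reading A8 as a law of large numbers for $\varepsilon_i^2$, as in the reconstruction above, a minimal simulation sketch (arbitrary design, true $\sigma^2 = 2.25$) shows $s^2$ settling near $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true = np.array([1.0, 2.0])
sigma2 = 2.25    # true error variance (arbitrary example value)

for n in [50, 500, 5_000, 50_000]:
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)
    resid = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    s2 = resid @ resid / (n - X.shape[1])              # s^2 = RSS / (n - k)
    print(f"n = {n:6d}   s^2 = {s2:.4f}   (sigma^2 = {sigma2})")
```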
There is no implication between unbiasedness and consistency; i.e. unbiasedness does not imply consistency, and vice versa. For instance, an estimator such as $\hat{\beta} + 1/n$ is biased but still consistent, while an estimator that uses only the first observation can be unbiased yet inconsistent.
Note that the assumption A1** implies A1, A4, A5, A6 and A8, where
Proof. First, we show that A1** is strong enough to imply A1, A4, and A5:
Also, remark that A6 and A8 in ^6b0642Assumption 2 (ASM for consistency) do not need to be assumed separately, since they hold by Econometric Analysis/Asymptotics > ^bd7e67Econometric Analysis/Asymptotics > Theorem 6 (Kolmogorov's Theorem).
First, A6 can be derived as
$$\frac{1}{n}\sum_{i=1}^{n} x_i x_i' \xrightarrow{p} E(x_i x_i'),$$
by applying Kolmogorov's theorem to the i.i.d. sequence $\{x_i x_i'\}$.
Let
Proof. Note that by A1**,
Also, note that
Thus, we have
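To see the role of Kolmogorov's law of large numbers numerically, here is a small sketch (assumed i.i.d. regressors with a standard normal slope variable, so that $E(x_i x_i')$ is the identity matrix) showing the sample second-moment matrix approaching its population counterpart:

```python
import numpy as np

rng = np.random.default_rng(3)

# x_i = (1, z_i)' with z_i ~ N(0, 1), so E(x_i x_i') = I_2.
Q_true = np.eye(2)

for n in [100, 10_000, 1_000_000]:
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    Q_hat = X.T @ X / n                                 # (1/n) sum_i x_i x_i'
    print(f"n = {n:8d}   max |Q_hat - Q| = {np.abs(Q_hat - Q_true).max():.4f}")
```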
Under A1**, A2, and A3, we have
$$\sqrt{n}(\hat{\beta}-\beta) \xrightarrow{d} N\!\left(0,\; \sigma^2 Q^{-1}\right), \qquad Q = E(x_i x_i').$$
Proof. From
$$\sqrt{n}(\hat{\beta}-\beta) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} x_i \varepsilon_i\right),$$
the first factor converges in probability to $Q^{-1}$ by the law of large numbers, and the second factor converges in distribution to $N(0, \sigma^2 Q)$ by the central limit theorem applied to the i.i.d. sequence $\{x_i \varepsilon_i\}$.
Therefore, by Convergence of Random Variables > ^fd88fdConvergence of Random Variables > Theorem 25 (Slutsky theorem), we have
$$\sqrt{n}(\hat{\beta}-\beta) \xrightarrow{d} Q^{-1}\, N(0, \sigma^2 Q) = N\!\left(0, \sigma^2 Q^{-1}\right).$$
Since $\sigma^2$ and $Q$ are unknown in practice, they are replaced by the consistent estimators $s^2$ and $\frac{1}{n}X'X$, where the consistency of $s^2$ follows from the earlier result.
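As a numerical illustration of this limiting distribution (a sketch with assumed i.i.d. regressors and scaled $t(8)$ errors; all design values are arbitrary), the standardized slope estimate can be compared with $N(0,1)$ quantiles across Monte Carlo replications:

```python
import numpy as np

rng = np.random.default_rng(4)
beta_true = np.array([1.0, 2.0])
n, reps = 200, 5_000
sigma2 = 2.0                                   # target error variance

z = np.empty(reps)
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.standard_t(df=8, size=n) * np.sqrt(sigma2 * 6 / 8)   # Var(eps_i) = sigma2
    y = X @ beta_true + eps
    b = np.linalg.solve(X.T @ X, X.T @ y)
    Q_inv = np.linalg.inv(X.T @ X / n)
    # sqrt(n)(b_1 - beta_1), standardized with the true sigma^2 and the sample Q.
    z[r] = np.sqrt(n) * (b[1] - beta_true[1]) / np.sqrt(sigma2 * Q_inv[1, 1])

# Compare simulated quantiles with standard normal quantiles.
for q, zq in {0.05: -1.645, 0.50: 0.0, 0.95: 1.645}.items():
    print(f"quantile {q:.2f}: simulated {np.quantile(z, q):+.3f}   N(0,1) {zq:+.3f}")
```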
Under ^9b2df3Assumption 6 (ASM for asymptotic normality), the standardized estimator satisfies
$$\frac{\hat{\beta}_j - \beta_j}{\operatorname{se}(\hat{\beta}_j)} \xrightarrow{d} N(0,1), \qquad \operatorname{se}(\hat{\beta}_j) = \sqrt{s^2\left[(X'X)^{-1}\right]_{jj}}.$$
Proof. Note that
$$\frac{\hat{\beta}_j - \beta_j}{\operatorname{se}(\hat{\beta}_j)} = \frac{\sqrt{n}(\hat{\beta}_j - \beta_j)}{\sqrt{s^2\left[\left(\frac{1}{n}X'X\right)^{-1}\right]_{jj}}},$$
where the numerator converges in distribution to $N\!\left(0, \sigma^2\left[Q^{-1}\right]_{jj}\right)$ and the quantity under the square root converges in probability to $\sigma^2\left[Q^{-1}\right]_{jj}$. Finally, by Convergence of Random Variables > ^fd88fdConvergence of Random Variables > Theorem 25 (Slutsky theorem), we have
$$\frac{\hat{\beta}_j - \beta_j}{\operatorname{se}(\hat{\beta}_j)} \xrightarrow{d} N(0,1).$$
Accordingly, the confidence interval is constructed from the standard normal distribution.
An approximate $100(1-\alpha)\%$ confidence interval for $\beta_j$ is therefore
$$\hat{\beta}_j \pm z_{\alpha/2}\,\operatorname{se}(\hat{\beta}_j),$$
where $z_{\alpha/2}$ is the upper $\alpha/2$ critical value of the standard normal distribution.
Note that the critical value is taken from the standard normal distribution rather than from an exact finite-sample distribution, so the stated coverage probability holds only approximately in finite samples.
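A minimal sketch of constructing this approximate 95% interval from one simulated sample (the data-generating values and the Laplace errors are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
beta_true = np.array([1.0, 2.0])
n = 300

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + rng.laplace(scale=1.0, size=n)      # non-normal errors

k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)
se = np.sqrt(s2 * np.diag(XtX_inv))                     # se(beta_hat_j)

z_crit = 1.96                                           # standard normal critical value
for j in range(k):
    lo, hi = beta_hat[j] - z_crit * se[j], beta_hat[j] + z_crit * se[j]
    print(f"beta_{j}: estimate {beta_hat[j]:+.3f}, approx. 95% CI [{lo:+.3f}, {hi:+.3f}]")
```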
Since we do not know the finite-sample distribution when the true errors do not follow the normal distribution, the actual size is not computable in many cases. Here, the nominal size is computed from the standard normal distribution in ^dbfb39Proposition 12 (asymptotic confidence interval), and if the sample size is large enough, the nominal size is approximately equal to the actual size.
Additionally, the difference between the actual and the nominal size of the test is called the 'size distortion'. This difference is usually computed by Monte Carlo simulation, as sketched below.
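A hedged sketch of such a Monte Carlo exercise (the skewed error distribution, small sample size, and number of replications are all arbitrary choices): the empirical rejection rate of the nominal 5% two-sided t-test under a true null is the actual size, and its difference from 0.05 is the size distortion.

```python
import numpy as np

rng = np.random.default_rng(6)
beta_true = np.array([1.0, 2.0])     # the null H0: beta_1 = 2.0 is true by construction
n, reps, z_crit = 50, 10_000, 1.96

rejections = 0
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta_true + (rng.exponential(size=n) - 1.0)  # skewed, mean-zero errors
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - X.shape[1])
    t_stat = (b[1] - beta_true[1]) / np.sqrt(s2 * XtX_inv[1, 1])
    rejections += abs(t_stat) > z_crit                   # reject using the normal critical value

actual_size = rejections / reps
print(f"actual size ~ {actual_size:.3f}, nominal size 0.050, "
      f"size distortion ~ {actual_size - 0.05:+.3f}")
```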
From Inferences in Linear Regression > ^12b079Inferences in Linear Regression > Remark 5 (decomposition of t-stat), we have
$$t_j = \frac{\hat{\beta}_j - \beta_j}{\operatorname{se}(\hat{\beta}_j)} = \frac{(\hat{\beta}_j - \beta_j)\big/\sqrt{\sigma^2\left[(X'X)^{-1}\right]_{jj}}}{\sqrt{s^2/\sigma^2}},$$
where the numerator standardizes the estimator by its exact standard deviation and the denominator is the ratio of the estimated to the true error variance.
Under ^9b2df3Assumption 6 (ASM for asymptotic normality), the t-statistic satisfies
$$t_j \xrightarrow{d} N(0,1).$$
Proof. Note that since $s^2 \xrightarrow{p} \sigma^2$, we have $s^2/\sigma^2 \xrightarrow{p} 1$, so the denominator of the decomposition converges in probability to one. Thus we have, for the numerator,
$$\frac{\hat{\beta}_j - \beta_j}{\sqrt{\sigma^2\left[(X'X)^{-1}\right]_{jj}}} = \frac{\sqrt{n}(\hat{\beta}_j - \beta_j)}{\sqrt{\sigma^2\left[\left(\tfrac{1}{n}X'X\right)^{-1}\right]_{jj}}} \xrightarrow{d} N(0,1).$$
Therefore, we have $t_j \xrightarrow{d} N(0,1)$ by the Slutsky theorem.
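To see this convergence numerically (again a sketch with assumed i.i.d. skewed errors), the empirical 95th percentile of $|t_j|$ can be compared with the standard normal value 1.96 for increasing $n$:

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true = np.array([1.0, 2.0])
reps = 2_000

for n in [25, 100, 400, 1_600]:
    t_abs = np.empty(reps)
    for r in range(reps):
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = X @ beta_true + (rng.exponential(size=n) - 1.0)
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y
        resid = y - X @ b
        s2 = resid @ resid / (n - X.shape[1])
        t_abs[r] = abs((b[1] - beta_true[1]) / np.sqrt(s2 * XtX_inv[1, 1]))
    print(f"n = {n:5d}   empirical 95th pct of |t| = {np.quantile(t_abs, 0.95):.3f}   (N(0,1): 1.960)")
```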
From the F-statistic, an analogous result holds: under the null hypothesis with $q$ linear restrictions, $qF \xrightarrow{d} \chi^2_q$, so asymptotic inference uses chi-square rather than exact $F$ critical values.
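If the intended result is the usual one that $qF \xrightarrow{d} \chi^2_q$ under the null, a minimal check (hypothetical design with $q = 2$ zero restrictions and skewed errors) compares the rejection rate of $qF > \chi^2_{q,0.95}$ with the nominal 5% level:

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps, q = 200, 5_000, 2
chi2_q_95 = 5.991                 # 95th percentile of the chi-square(2) distribution

rejections = 0
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = 1.0 + (rng.exponential(size=n) - 1.0)            # H0: both slopes are zero (true)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    rss_u = np.sum((y - X @ b) ** 2)                      # unrestricted residual sum of squares
    rss_r = np.sum((y - y.mean()) ** 2)                   # restricted model: intercept only
    F = ((rss_r - rss_u) / q) / (rss_u / (n - X.shape[1]))
    rejections += q * F > chi2_q_95

print(f"rejection rate of qF > chi2(q) critical value: {rejections / reps:.3f} (nominal 0.05)")
```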