E(Yt) = μ
Var(Yt) = γ0 = σ²
Cov(Yt,Ys) = γ|t-s|, t ≠ s.
In other words, all the descriptive statistics of the time series (μ, γ0, γ1, γ2, ...) are time invariant.
Yt ~ ii(μ,σ²) for each observation t = 1,2,...
That is, Yt is an independently and identically distributed data generating process with mean μ and constant variance σ²:
E(Yt) = μ
Var(Yt) = γ0 = σ²
Cov(Yt,Ys) = 0, t ≠ s.
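The white-noise definition above can be checked by simulation; this is a minimal sketch where the values μ = 2 and σ = 1.5 are assumed for illustration:

```python
import numpy as np

# Simulate a white-noise series and verify the defining moments:
# constant mean, constant variance, and zero autocovariance at lag 1.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5
N = 100_000
y = mu + sigma * rng.standard_normal(N)

ybar = y.mean()       # estimates E(Yt) = mu
gamma0 = y.var()      # estimates Var(Yt) = sigma^2
gamma1 = np.mean((y[1:] - ybar) * (y[:-1] - ybar))  # estimates Cov(Yt,Yt-1) = 0
```

With a large sample, ybar, gamma0, and gamma1 settle near μ, σ², and 0, respectively.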
Yt = α + ρ1Yt-1 + ρ2Yt-2 + ... + ρpYt-p + εt
where the roots of the p-th order polynomial in B (i.e. 1 - ρ1B - ρ2B² - ... - ρpB^p = 0) lie outside the unit circle; and εt ~ ii(0,σ²), t = 1,2,...
Yt = μ - θ1εt-1 - θ2εt-2 - ... - θqεt-q + εt
where the roots of the q-th order polynomial in B (i.e. 1 - θ1B - θ2B² - ... - θqB^q = 0) lie outside the unit circle; and εt ~ ii(0,σ²), t = 1,2,...
Yt = δ + ρ1Yt-1 + ρ2Yt-2 + ... + ρpYt-p - θ1εt-1 - θ2εt-2 - ... - θqεt-q + εt
where the roots of the p-th order polynomial in B (i.e. 1 - ρ1B - ρ2B² - ... - ρpB^p = 0) lie outside the unit circle; the roots of the q-th order polynomial in B (i.e. 1 - θ1B - θ2B² - ... - θqB^q = 0) lie outside the unit circle; and εt ~ ii(0,σ²), t = 1,2,...
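The ARMA(p,q) data generating process above can be simulated directly from its recursion; a minimal sketch with assumed parameters ρ = 0.7, θ = 0.4, δ = 1 for an ARMA(1,1):

```python
import numpy as np

# Simulate Yt = delta + rho*Yt-1 - theta*et-1 + et (ARMA(1,1));
# the parameter values are assumed for illustration only.
rng = np.random.default_rng(1)
rho, theta, delta = 0.7, 0.4, 1.0
N = 200_000
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = delta + rho * y[t - 1] - theta * e[t - 1] + e[t]

# For |rho| < 1 the process is stationary with mean delta/(1-rho)
ymean = y[1000:].mean()
```

The sample mean should settle near δ/(1-ρ) = 1/0.3 ≈ 3.33, consistent with stationarity.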
Yt = α + βt + εt where εt ~ ii(0,σ²), t = 1,2,.... Then
E(Yt) = α + βt
Var(Yt) = σ²
As t → ∞, E(Yt) → ∞. This is the model with a linear trend in the mean.
Yt = Yt-1 + εt where εt ~ ii(0,σ²), t = 1,2,.... Equivalently,
Yt = Y0 + Σi=1,2,...,t εi
Assuming Y0 exists and is finite,
E(Yt) = Y0
Var(Yt) = tσ²
As t → ∞, Var(Yt) → ∞.
This is the model with a linear trend in the variance.
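The linear trend in the variance can be seen by simulating many independent random-walk paths; a minimal sketch with σ = 1 assumed:

```python
import numpy as np

# Simulate many random-walk paths Yt = Yt-1 + et (Y0 = 0) and check that
# the cross-path variance at date t is approximately t*sigma^2.
rng = np.random.default_rng(2)
paths = rng.standard_normal((20_000, 100)).cumsum(axis=1)

var_t10 = paths[:, 9].var()    # variance at t = 10, expected near 10
var_t100 = paths[:, 99].var()  # variance at t = 100, expected near 100
```

The two sample variances grow in proportion to t, as Var(Yt) = tσ² predicts.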
Yt = α + Yt-1 + εt where εt ~ ii(0,σ²), t = 1,2,.... Equivalently,
Yt = Y0 + αt + Σi=1,2,...,t εi
Assuming Y0 exists and is finite,
E(Yt) = Y0 + αt
Var(Yt) = tσ²
As t → ∞, E(Yt) → ∞ and Var(Yt) → ∞.
This is the model with a linear trend in the mean and variance.
Yt = α + βt + Yt-1 + εt where εt ~ ii(0,σ²), t = 1,2,.... Equivalently,
Yt = Y0 + at + bt² + Σi=1,2,...,t εi
where a = α + β/2 and b = β/2. Assuming Y0 exists and is finite,
E(Yt) = Y0 + at + bt²
Var(Yt) = tσ²
As t → ∞, E(Yt) → ∞ and Var(Yt) → ∞.
This is the model with a quadratic trend in the mean and a linear trend in the variance.
That is, Yt ~ I(d) if Δ^dYt is stationary, where
ΔYt = Yt - Yt-1,
Δ²Yt = ΔYt - ΔYt-1, ...
For example, if Yt ~ I(1), then
Yt = ΔYt + Yt-1
   = ΔYt + ΔYt-1 + Yt-2 = ...
   = Σj=0,1,...,t-1 ΔYt-j with a known Y0
Similarly, if Yt ~ I(2), then
ΔYt-j = Σi=0,1,...,t-j-1 Δ²Yt-j-i and
Yt = Σj=0,1,...,t-1 ΔYt-j
   = Σj=0,1,...,t-1 Σi=0,1,...,t-j-1 Δ²Yt-j-i
The white noise process is an integrated process of order 0, or I(0). A random walk process is an integrated process of order 1, or I(1).
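The I(1)/I(0) relation between a random walk and white noise can be illustrated in a few lines; a sketch with simulated shocks:

```python
import numpy as np

# A random walk is I(1): the cumulative sum of white-noise shocks.
# First-differencing it recovers the stationary I(0) shocks exactly.
rng = np.random.default_rng(3)
e = rng.standard_normal(50_000)  # white noise: I(0)
y = e.cumsum()                   # random walk: I(1)
dy = np.diff(y)                  # Delta Yt = et, back to I(0)
```

Here dy reproduces the original shocks e (from the second observation on), which is the content of Yt = ΣΔYt-j above.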
Yt = α + βt + εt, or
Yt = α + βt + γt² + εt
If εt is stationary, then Yt is a trend stationary process.
Yt = Yt-1 + εt, or
ΔYt = Yt - Yt-1 = εt
If εt is stationary, then Yt is a difference stationary process.
Yt = α + Yt-1 + εt, or
ΔYt = Yt - Yt-1 = α + εt
If εt is stationary, then Yt is a difference stationary process.
Yt = α + βt + Yt-1 + εt, or
ΔYt = Yt - Yt-1 = α + βt + εt
If εt is stationary, then Yt is a difference stationary process (ΔYt is a trend stationary process).
High R²
Low DW (DW → 0 or ρ → 1)
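These spurious regression symptoms can be demonstrated with simulated data; a minimal sketch regressing one random walk on another, independent one:

```python
import numpy as np

# Spurious regression sketch: two INDEPENDENT random walks regressed on
# each other tend to show a deceptively high R-squared together with a
# Durbin-Watson statistic near zero (highly persistent residuals).
rng = np.random.default_rng(4)
N = 5000
y = rng.standard_normal(N).cumsum()
x = rng.standard_normal(N).cumsum()

X = np.column_stack([np.ones(N), x])       # intercept and regressor
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
resid = y - X @ b
r2 = 1.0 - resid.var() / y.var()
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # DW statistic
```

With integrated series the residuals are themselves near-integrated, so DW collapses toward 0 even though the two series are unrelated.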
The purpose of a unit root test is to statistically test the data generating process for difference stationarity (trend nonstationarity) against trend stationarity. It is a formal test of the Random Walk Hypothesis.
The Dickey-Fuller (DF) and Augmented Dickey-Fuller (ADF) tests for unit roots (or random walk) are based on one of the following test equations:
ΔYt = (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
Hypothesis: H0: ρ = 1 vs. H1: ρ < 1
Test Statistic: τρ = (ρ̂-1)/se(ρ̂), where ρ̂ is the estimate of ρ
Critical Value: ADFτρ(I,N,ε)
ΔYt = α + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
Hypothesis: H0: ρ = 1 vs. H1: ρ < 1; and H0: α = 0 given ρ = 1 vs. H1: α ≠ 0
Test Statistics: τρ = (ρ̂-1)/se(ρ̂); τα = α̂/se(α̂), where ρ̂ and α̂ are the estimates of ρ and α
Critical Values: ADFτρ(II,N,ε); ADFτα(II,N,ε)
ΔYt = α + βt + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
Hypothesis: H0: ρ = 1 vs. H1: ρ < 1; H0: α = 0 given ρ = 1 vs. H1: α ≠ 0; and H0: β = 0 given ρ = 1 vs. H1: β ≠ 0
Test Statistics: τρ = (ρ̂-1)/se(ρ̂); τα = α̂/se(α̂); τβ = β̂/se(β̂), where ρ̂, α̂, and β̂ are the estimates of ρ, α, and β
Critical Values: ADFτρ(III,N,ε); ADFτα(III,N,ε); ADFτβ(III,N,ε)
ΔYt = α + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
Hypothesis: H0: α = 0, ρ = 1 vs. H1: not H0
Restricted Model: ΔYt = Σj=1,2,...,J ρjΔYt-j + εt
Test Statistic: Fα,ρ = [(RSSr-RSSur)/2] / [RSSur/(N-J-2)]
Critical Value: ADFFα,ρ(II,N,ε)
ΔYt = α + βt + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
Hypothesis: H0: α = 0, β = 0, ρ = 1 vs. H1: not H0; and H0: β = 0, ρ = 1 vs. H1: not H0
Restricted Models: ΔYt = Σj=1,2,...,J ρjΔYt-j + εt; and ΔYt = α + Σj=1,2,...,J ρjΔYt-j + εt
Test Statistics: Fα,β,ρ = [(RSSr-RSSur)/3] / [RSSur/(N-J-3)]; Fβ,ρ = [(RSSr-RSSur)/2] / [RSSur/(N-J-3)]
Critical Values: ADFFα,β,ρ(III,N,ε); ADFFβ,ρ(III,N,ε)
Step 1:
ΔYt = α + βt + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
If ρ < 1 then stop (no unit root); else continue.
If β ≠ 0 then conclude (unit root); otherwise:
Step 2:
ΔYt = α + (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
If ρ < 1 then stop (no unit root); else continue.
If α ≠ 0 then conclude (unit root); otherwise:
Step 3:
ΔYt = (ρ-1)Yt-1 + Σj=1,2,...,J ρjΔYt-j + εt
If ρ < 1 then stop (no unit root); else conclude (unit root)!
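The test regressions above can be estimated by ordinary least squares; a minimal sketch of the Model II form (constant, no augmentation lags, ρ = 0.5 assumed) applied to a simulated stationary AR(1), where the τρ ratio should fall far below the 5% critical value -2.86:

```python
import numpy as np

# Dickey-Fuller regression sketch: Delta Yt = alpha + (rho-1)*Yt-1 + et,
# estimated by OLS on a simulated stationary AR(1) with rho = 0.5
# (the process and parameters are assumed for illustration).
rng = np.random.default_rng(5)
N = 2000
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

dy = np.diff(y)                                 # Delta Yt
X = np.column_stack([np.ones(N - 1), y[:-1]])   # constant and Yt-1
b, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ b
s2 = resid @ resid / (len(dy) - 2)              # error variance estimate
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
tau = b[1] / se                                 # t-ratio on (rho - 1)
```

For a truly stationary series the estimate of (ρ-1) is strongly negative, so τρ rejects the unit root.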
Without loss of generality, let Yt = Zt1 and Xt = [Zt2, ..., ZtM]. Consider the following regression equation:
Yt = α + Xtβ + εt
In general, if Yt, Xt ~ I(1), then εt ~ I(1). If εt can be shown to be I(0), then the set of variables [Yt, Xt] cointegrates, and the vector [1 -β]' (or any multiple of it) is called a cointegrating vector. Depending on the number of variables M, there are up to M-1 linearly independent cointegrating vectors. The number of linearly independent cointegrating vectors that exist in [Yt, Xt] is called the cointegrating rank.
A simple way to test for cointegration is to apply a unit root test to the residuals of the above regression equation. Let
N = Number of usable sample observations;
K = Number of variables in [Yt,Xt] for the cointegration test.
The unit root test for the regression residuals, or the cointegration test, is formulated as follows:
Model I: Δεt = (ρ-1)εt-1 + ut
Model I (Augmented): Δεt = (ρ-1)εt-1 + Σj=1,2,...,J ρjΔεt-j + ut
Hypothesis: H0: ρ = 1 vs. H1: ρ < 1
Test Statistic: τρ = (ρ̂-1)/se(ρ̂), where ρ̂ is the estimate of ρ
Critical Value: EG(I,N,ε) for the basic model; AEG(I,N,ε) for the augmented model
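The two-step Engle-Granger procedure can be sketched on simulated data; here x is a random walk and y = 2x + stationary noise, so [y, x] cointegrate with cointegrating vector [1, -2] (all values assumed for illustration):

```python
import numpy as np

# Engle-Granger sketch: (1) estimate the cointegrating regression by OLS,
# (2) run a Dickey-Fuller regression on its residuals.
rng = np.random.default_rng(6)
N = 2000
x = rng.standard_normal(N).cumsum()       # I(1) regressor
y = 2.0 * x + rng.standard_normal(N)      # cointegrated with x

# Step 1: cointegrating regression y = a + x*b + e
X = np.column_stack([np.ones(N), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

# Step 2: DF regression on residuals, Delta e_t = (rho-1)*e_{t-1} + u_t
de = np.diff(e)
rho1 = (e[:-1] @ de) / (e[:-1] @ e[:-1])  # estimate of (rho - 1)
u = de - rho1 * e[:-1]
s2 = u @ u / (N - 2)
tau = rho1 / np.sqrt(s2 / (e[:-1] @ e[:-1]))
```

Because the residuals are stationary by construction, τρ falls well below the EG 5% critical value (about -3.37 for K = 2), and the slope estimate is superconsistent for the true cointegrating coefficient 2.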
The regression equation may be generalized to include a trend as follows:
Yt = α + βt + Xtγ + εt
J. MacKinnon's table of critical values of cointegration tests, for cointegrating regressions without and with trend (named Model 2 and Model 3, respectively), is provided in Table 4. It is based on simulation experiments by means of response surface regression, in which critical values depend on the sample size. Therefore, this table is easier and more flexible to use than the original EG and AEG distributions.
Yt = α + Xtβ + εt
Δεt = (ρ-1)εt-1 + ut
where ρ < 1 and ut is stationary. Therefore the short-run dynamics of the model are
ΔYt = ΔXtβ + Δεt
    = ΔXtβ + (ρ-1)εt-1 + ut
    = ΔXtβ + (ρ-1)(Yt-1 - α - Xt-1β) + ut
This is exactly the Error Correction Model.
Zt = Zt-1Π1 + Zt-2Π2 + ... + Zt-pΠp + Π0 + Ut
where Πj, j=1,2,...,p, are the M×M parameter matrices, Π0 is a 1×M drift or constant vector, and the 1×M error vector Ut ~ normal(0,Σ) with a constant matrix Σ = Var(Ut) = E(Ut'Ut) denoting the covariance matrix across the M variables.
The VAR(p) system can be transformed, using the difference series of the variables, to resemble the error correction model representation, as follows:
ΔZt = ΔZt-1Γ1 + ΔZt-2Γ2 + ... + ΔZt-(p-1)Γp-1 + Zt-1Π + Γ0 + Ut
where Π = Σj=1,2,...,p Πj - I, Γ1 = Π1 - Π - I, Γ2 = Π2 + Γ1, ..., and Γ0 = Π0 for notational convenience.
If Zt ~ I(1), then ΔZt ~ I(0). In order to have the variables in Zt cointegrated, we must have Ut ~ I(0). That is, we must show that the term Zt-1Π ~ I(0). By the definition of cointegration, the parameter matrix Π must contain 0 < r < M linearly independent cointegrating vectors such that ZtΠ ~ I(0). Therefore, the cointegration test amounts to checking that Rank(Π) = r > 0. If Rank(Π) = r, we may impose the parameter restrictions Π = -BA' where A and B are M×r matrices. Given the existence of the constant vector Γ0, there can be up to M-r random walks or drift trends. Such common trends in the variables may be removed in the case of Model II below. We consider the following three models:
For model estimation of the above VAR(p) system, where Ut ~ normal(0,Σ), we derive the log-likelihood function for Model III:
ll(Γ1,Γ2,...,Γp-1,Γ0,Π,Σ) = - MN/2 ln(2π) - N/2 ln|det(Σ)| - ½ Σt=1,2,...,N UtΣ⁻¹Ut'
Since the maximum likelihood estimate of Σ is U'U/N, the concentrated log-likelihood function is written as:
ll*(Γ1,Γ2,...,Γp-1,Γ0,Π) = - NM/2 (1+ln(2π)-ln(N)) - N/2 ln|det(U'U)|
The actual maximum likelihood estimation can be simplified by considering the following two auxiliary regressions:
Returning to the concentrated log-likelihood function, it is now written as
ll*(W(Φ1,Φ2,...,Φp-1,Φ0), V(Ψ1,Ψ2,...,Ψp-1,Ψ0), Π)
= - NM/2 (1+ln(2π)-ln(N)) - N/2 ln|det((W-VΠ)'(W-VΠ))|
Maximizing the above concentrated log-likelihood function is equivalent to minimizing the sum-of-squares term det((W-VΠ)'(W-VΠ)). Conditional on W(Φ1,Φ2,...,Φp-1,Φ0) and V(Ψ1,Ψ2,...,Ψp-1,Ψ0), the least squares estimate of Π is (V'V)⁻¹V'W. Thus,
det((W-VΠ)'(W-VΠ))
= det(W'(I-V(V'V)⁻¹V')W)
= det((W'W)(I-(W'W)⁻¹(W'V)(V'V)⁻¹(V'W)))
= det(W'W) det(I-(W'W)⁻¹(W'V)(V'V)⁻¹(V'W))
= det(W'W) (∏i=1,2,...,M (1-λi))
where λ1, λ2, ..., λM are the descending-ordered eigenvalues of the matrix (W'W)⁻¹(W'V)(V'V)⁻¹(V'W). Therefore the resulting double concentrated log-likelihood function (concentrating on both Σ and Π) is
ll**(W(Φ1,Φ2,...,Φp-1,Φ0), V(Ψ1,Ψ2,...,Ψp-1,Ψ0))
= - NM/2 (1+ln(2π)-ln(N)) - N/2 ln|det(W'W)| - N/2 Σi=1,2,...,M ln(1-λi)
Given the parameter constraints that there are 0 < r < M cointegrating vectors, that is, Π = -BA' where A and B are M×r matrices, the restricted concentrated log-likelihood function is similarly derived as follows:
llr**(W(Φ1,Φ2,...,Φp-1,Φ0), V(Ψ1,Ψ2,...,Ψp-1,Ψ0))
= - NM/2 (1+ln(2π)-ln(N)) - N/2 ln|det(W'W)| - N/2 Σi=1,2,...,r ln(1-λi)
Therefore, with M-r degrees of freedom, the likelihood ratio test statistic for at most r cointegrating vectors is
-2(llr** - ll**) = -N Σi=r+1,...,M ln(1-λi)
Similarly, the likelihood ratio test statistic for r cointegrating vectors against r+1 vectors is
-2(llr** - llr+1**) = -N ln(1-λr+1)
A more general form of the likelihood ratio test statistic for r1 cointegrating vectors against r2 vectors (0 ≤ r1 < r2 ≤ M) is
-2(llr1** - llr2**) = -N Σi=r1+1,...,r2 ln(1-λi)
The following table summarizes the two popular cointegration test statistics: the Eigenvalue Test Statistic λ(r) and the Trace Test Statistic λtrace(r). For the case of r = 0, they are tests for no cointegration.
Cointegrating Rank (r) | H0: r1 = r, H1: r2 = r+1 | H0: r1 = r, H1: r2 = M
0 | -N ln(1-λ1) | -N Σi=1,2,...,M ln(1-λi)
1 | -N ln(1-λ2) | -N Σi=2,3,...,M ln(1-λi)
... | ... | ...
M-1 | -N ln(1-λM) | -N ln(1-λM)
Critical Value | λ(r) | λtrace(r)
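The eigenvalues λi underlying both statistics can be computed directly from the W and V matrices; a minimal sketch with arbitrary simulated data matrices (the key fact being that the λi are squared canonical correlations, so each lies in [0, 1] and the trace statistic is nonnegative):

```python
import numpy as np

# Compute the eigenvalues of (W'W)^-1 (W'V) (V'V)^-1 (V'W) and the
# trace statistic -N * sum log(1 - lambda_i). W and V here are arbitrary
# simulated data matrices, used only to illustrate the computation.
rng = np.random.default_rng(7)
N, M = 500, 3
W = rng.standard_normal((N, M))
V = rng.standard_normal((N, M))

A = np.linalg.solve(W.T @ W, W.T @ V) @ np.linalg.solve(V.T @ V, V.T @ W)
lam = np.sort(np.linalg.eigvals(A).real)[::-1]   # descending order
trace_stat = -N * np.log(1.0 - lam).sum()
```

In a full Johansen procedure, W and V would be the residual matrices from the two auxiliary regressions of the VECM, not raw data.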
where εt ~ ii(0,σ²), t = 1,2,...,N
Bartlett Test
Box-Pierce Test and Ljung-Box Test
In order to use all N data observations, initialization may be needed for the following:
Y0, Y-1, ..., Y-p+1
ε0, ε-1, ..., ε-q+1
The model may be written as
ρ(B)Yt = δ + θ(B)εt, or
θ(B)⁻¹(-δ + ρ(B)Yt) = εt ~ ii(0,σ²)
where
ρ(B) = 1 - ρ1B - ρ2B² - ... - ρpB^p,
θ(B) = 1 - θ1B - θ2B² - ... - θqB^q.
Conditional on the historical information (YN, ..., Y1) and the data initialization (Y0, ..., Y-p+1), (ε0, ..., ε-q+1), the sum-of-squares is defined by
S = Σt=1,2,...,N εt²
Assuming εi ~ normal(0,σ²) for each observation i, the concentrated log-likelihood function is
ll = -N/2 (1+ln(2π)-ln(N)+ln(S))
where ut ~ ii(0,σ²) or normal(0,σ²), t = 1,2,...,N.
Model analysis, including model identification, estimation, and forecasting, is the same as in univariate ARMA analysis. The regression parameters β and the ARMA parameters ρ and θ must be estimated simultaneously through iterations of nonlinear functional (sum-of-squares or log-likelihood) optimization.
AR(1): ε = ρε-1 + u
We assume u ~ normal(0,σ²I) and |ρ| < 1 for model stability. The subscript -1 indicates the one-period lag of the data involved. It is clear that σ² = Var(ui) = (1-ρ²)Var(εi). Denote the variable transformations y* = y - ρy-1 and x* = x - ρx-1. Since u1 = (1-ρ²)^½ ε1, the otherwise lost first observation is kept with the transformations y1* = (1-ρ²)^½ y1 and x1* = (1-ρ²)^½ x1.
Thus the model for estimation is
AR(1): u = y* - x*β
with the following Jacobian of the transformation from ui to yi (depending on ρ only):
Ji(ρ) = |∂ui/∂yi| = (1-ρ²)^½ for i=1; 1 for i>1
Finally, the concentrated log-likelihood function is:
ll*(β,ρ|y,x) = -½N (1+ln(2π)-ln(N)) + ½ ln(1-ρ²) - ½N ln(u'u)
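The AR(1) variable transformation above (the Prais-Winsten form, which keeps the first observation) can be sketched as a small helper:

```python
import numpy as np

# Prais-Winsten transformation for AR(1):
#   z*_1 = sqrt(1 - rho^2) * z_1
#   z*_i = z_i - rho * z_{i-1}   for i > 1
# (z stands for either y or x, as in the text).
def ar1_transform(z, rho):
    zs = z - rho * np.concatenate(([0.0], z[:-1]))
    zs[0] = np.sqrt(1.0 - rho ** 2) * z[0]
    return zs

out = ar1_transform(np.array([2.0, 3.0, 4.0]), 0.5)
```

For z = (2, 3, 4) and ρ = 0.5 this gives (√3, 2, 2.5), matching the definitions of y1*, y* above.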
Extension: AR(2). The model is defined as ε = ρ1ε-1 + ρ2ε-2 + u, with the following proper data transformation (z stands for either x or y below):
MA(1): ε = u - θu-1
Again, we assume u ~ normal(0,σ²I) and |θ| < 1 for model invertibility. Equivalently,
MA(1): u = y - xβ + θu-1
Since each log-Jacobian term vanishes in this case, the concentrated log-likelihood function is simply
ll*(β,θ|y,x) = -½N (1+ln(2π)-ln(N)) - ½N ln(u'u)
Notice that the one-period lag of the error terms, u-1, is used to define the model error u. A recursive calculation is needed, with proper initialization of u0. For example, set the initial value u0 = E(u) = 0; then u1 = y1 - x1β and ui = yi - xiβ + θui-1 for i=2,...,N.
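The recursive residual computation can be sketched directly; here the series eps plays the role of yi - xiβ, and θ = 0.5 is an assumed illustration value:

```python
import numpy as np

# MA(1) recursion with u_0 = 0:
#   u_i = (y_i - x_i*b) + theta * u_{i-1}
theta = 0.5
eps = np.array([1.0, -0.5, 2.0, 0.0])  # regression errors y_i - x_i*b (assumed)
u = np.zeros(len(eps))
u[0] = eps[0]                          # u_0 = 0, so u_1 = eps_1
for i in range(1, len(eps)):
    u[i] = eps[i] + theta * u[i - 1]
```

For this input the recursion yields u = (1, 0, 2, 1), which can be verified by hand.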
ARMA(1,1): ε = ρε-1 + u - θu-1
This is the mixed case of AR(1) and MA(1). Using the variable transformations of AR(1) and the data initialization of MA(1), the model is represented as
ARMA(1,1): u = y* - x*β + θu-1
The concentrated log-likelihood function for parameter estimation is
ll*(β,ρ,θ|y,x) = -½N (1+ln(2π)-ln(N)) + ½ ln(1-ρ²) - ½N ln(u'u)
Based on the U.S. investment data from Greene's Table 13.1 (1999, p. 525), formulate and estimate the three models of autocorrelation for a linear real investment relationship with real GNP and real interest rate (Program and Data):
Invest = β0 + β1 Rate + β2 GNP + ε
Consider the time series model:
Yt = Xtβ + εt
At time t, conditional on the available historical information Ht, we assume that the error structure follows a normal distribution:
εt|Ht ~ normal(0,σ²t)
where σ²t = α0 + δ1σ²t-1 + ... + δpσ²t-p + α1ε²t-1 + ... + αqε²t-q
          = α0 + Σi=1,2,...,p δiσ²t-i + Σj=1,2,...,q αjε²t-j
This is the general specification of generalized autoregressive conditional heteroscedasticity, or GARCH(p,q), according to Bollerslev [1986]. If p = 0, then it is GARCH(0,q), or simply the ARCH(q) process:
σ²t = α0 + Σj=1,2,...,q αjε²t-j
The simplest case is q = 1, or ARCH(1), which originated with Engle [1982] as follows:
σ²t = α0 + α1ε²t-1
ARCH(1) Process
The ARCH(1) model can be summarized as follows:
Yt = Xtβ + εt
εt = ut(α0 + α1ε²t-1)^½
where ut ~ normal(0,1)
Then the conditional mean is E(εt|εt-1) = 0 and the conditional variance is σ²t = E(ε²t|εt-1) = α0 + α1ε²t-1.
Note that the unconditional variance of εt is E(ε²t) = E(E(ε²t|εt-1)) = α0 + α1E(ε²t-1). If σ² = E(ε²t) = E(ε²t-1), then σ² = α0/(1-α1), provided that |α1| < 1. Therefore, the model may be free of general heteroscedasticity even though conditional heteroscedasticity is assumed.
Generalizations
εi|εi-1 ~ normal(0,σi²), for each observation i.
More specifically, we write εi = σiui where ui ~ normal(0,1). The expected value E(εi|εi-1) = 0 and the variance Var(εi|εi-1) = E(εi²|εi-1) = σi². We note that the unconditional variance may be homoscedastic. This is the phenomenon of autocorrelation in variance typically found in financial time series.
Recall the normal log-likelihood of a heteroscedastic regression model:
ll = -½N ln(2π) - ½ Σi=1,2,...,N ln(σi²) - ½ Σi=1,2,...,N (εi²/σi²)
The first order of autoregressive conditional heteroscedasticity is described by the following conditional variance process:
ARCH(1): σi² = δ + θεi-1²
Conditional on the starting value ε0² = E(εi²) = Σi=1,2,...,N εi²/N, and the stability requirements δ > 0 and 0 ≤ θ < 1, the log-likelihood function for model estimation is
ll(β,δ,θ|y,x) = -½N ln(2π) - ½ Σi=1,2,...,N ln(δ + θεi-1²) - ½ Σi=1,2,...,N (εi²/(δ + θεi-1²))
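The ARCH(1) log-likelihood can be evaluated directly for a given residual series; a minimal sketch (the residuals and parameter values below are assumed for illustration):

```python
import numpy as np

# Evaluate the ARCH(1) log-likelihood at given (delta, theta) for residuals e,
# with the presample squared residual initialized at the sample mean of e^2.
def arch1_loglik(e, delta, theta):
    e2 = e ** 2
    lag = np.concatenate(([e2.mean()], e2[:-1]))   # e2_{i-1}, starting value
    s2 = delta + theta * lag                       # conditional variances
    N = len(e)
    return (-0.5 * N * np.log(2 * np.pi)
            - 0.5 * np.log(s2).sum()
            - 0.5 * (e2 / s2).sum())

ll = arch1_loglik(np.array([0.5, -1.0, 0.8]), delta=1.0, theta=0.0)
```

With θ = 0 every conditional variance equals δ, so the expression collapses to the homoscedastic normal log-likelihood, which provides a simple consistency check.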
It can be generalized (hence the name Generalized AutoRegressive Conditional Heteroscedasticity) to:
GARCH(1,1): σi² = δ + θεi-1² + ρσi-1²
This resembles the mixed autoregressive moving-average process ARMA(1,1) described under autocorrelation. Presample variances and squared error terms can be initialized with Σi=1,2,...,N εi²/N. The following parameter restrictions are necessary to preserve the stationarity of the error process:
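The GARCH(1,1) conditional-variance recursion, with the presample initialization just described, can be sketched as follows (parameter values assumed for illustration):

```python
import numpy as np

# GARCH(1,1) recursion: s2_i = delta + theta*e2_{i-1} + rho*s2_{i-1},
# with presample e2_0 and s2_0 both set to the sample mean of e^2.
def garch11_variance(e, delta, theta, rho):
    e2 = e ** 2
    s2 = np.empty_like(e2)
    prev_s2 = e2.mean()
    prev_e2 = e2.mean()
    for i in range(len(e2)):
        s2[i] = delta + theta * prev_e2 + rho * prev_s2
        prev_s2, prev_e2 = s2[i], e2[i]
    return s2

s2 = garch11_variance(np.array([1.0, 2.0]), delta=0.1, theta=0.2, rho=0.3)
```

For residuals (1, 2) the mean of squared residuals is 2.5, giving conditional variances (1.35, 0.705), which can be traced through the recursion by hand.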
Another extension is the ARCH in mean, or ARCH-M, model, which adds the heteroscedastic variance term directly into the regression equation (assuming a linear model):
ARCH-M(1):
εi = F(yi,xi,β,σi²) = yi - xiβ - γσi²
σi² = δ + θεi-1²
The variance term of the regression may be expressed in log form; that is, yi = xiβ + γ ln(σi²) + εi. Moreover, constraints on the ARCH-M terms may be required to ensure the positivity of the variances:
This example is taken from Greene [1997, 3rd Ed.]. It may be revised later to be based on Greene [1999, 4th Ed.], Example 18.11 (Data from Table A18.2).
Based on the U.S. inflation data from Greene's Table 12.13 (p. 572), consider a dynamic process of inflation (Program and Data):
Δpi = β0 + β1 Δpi-1 + εi
where Δpi denotes the inflation rate for i from 1941 to 1986. Formulate and estimate the model with the autoregressive conditional heteroscedastic error processes ARCH(1), ARCH-M(1), and GARCH(1,1), respectively. Compare the results with Greene's Example 12.16 (p. 572).
Probability to the Right of Critical Value

Model  Statistic  N      99%     97.5%   95%     90%     10%     5%      2.5%    1%
I      ADFτρ      25     -2.66   -2.26   -1.95   -1.60    0.92    1.33    1.70    2.16
                  50     -2.62   -2.25   -1.95   -1.61    0.91    1.31    1.66    2.08
                  100    -2.60   -2.24   -1.95   -1.61    0.90    1.29    1.64    2.03
                  250    -2.58   -2.23   -1.95   -1.61    0.89    1.29    1.63    2.01
                  500    -2.58   -2.23   -1.95   -1.61    0.89    1.28    1.62    2.00
                  >500   -2.58   -2.23   -1.95   -1.61    0.89    1.28    1.62    2.00
II     ADFτρ      25     -3.75   -3.33   -3.00   -2.62   -0.37    0.00    0.34    0.72
                  50     -3.58   -3.22   -2.93   -2.60   -0.40   -0.03    0.29    0.66
                  100    -3.51   -3.17   -2.89   -2.58   -0.42   -0.05    0.26    0.63
                  250    -3.46   -3.14   -2.88   -2.57   -0.42   -0.06    0.24    0.62
                  500    -3.44   -3.13   -2.87   -2.57   -0.43   -0.07    0.24    0.61
                  >500   -3.43   -3.12   -2.86   -2.57   -0.44   -0.07    0.23    0.60
III    ADFτρ      25     -4.38   -3.95   -3.60   -3.24   -1.14   -0.80   -0.50   -0.15
                  50     -4.15   -3.80   -3.50   -3.18   -1.19   -0.87   -0.58   -0.24
                  100    -4.04   -3.73   -3.45   -3.15   -1.22   -0.90   -0.62   -0.28
                  250    -3.99   -3.69   -3.43   -3.13   -1.23   -0.92   -0.64   -0.31
                  500    -3.98   -3.68   -3.42   -3.13   -1.24   -0.93   -0.65   -0.32
                  >500   -3.96   -3.66   -3.41   -3.12   -1.25   -0.94   -0.66   -0.33

Probability to the Right of Critical Value (Symmetric Distribution, given ρ = 1)

Model  Statistic  N      1%     2.5%   5%     10%
II     ADFτα      25     3.14   2.97   2.61   2.20
                  50     3.28   2.89   2.56   2.18
                  100    3.22   2.86   2.54   2.17
                  250    3.19   2.84   2.53   2.16
                  500    3.18   2.83   2.52   2.16
                  >500   3.18   2.83   2.52   2.16
III    ADFτα      25     4.05   3.59   3.20   2.77
                  50     3.87   3.47   3.14   2.78
                  100    3.78   3.42   3.11   2.73
                  250    3.74   3.39   3.09   2.73
                  500    3.72   3.38   3.08   2.72
                  >500   3.71   3.38   3.08   2.72
III    ADFτβ      25     3.74   3.25   2.85   2.39
                  50     3.60   3.18   2.81   2.38
                  100    3.53   3.14   2.79   2.38
                  250    3.49   3.12   2.79   2.38
                  500    3.48   3.11   2.78   2.38
                  >500   3.46   3.11   2.78   2.38
Probability to the Right of Critical Value

Model  Statistic   N      1%      2.5%    5%      10%     90%    95%    97.5%  99%
II     ADFFα,ρ     25     7.88    6.30    5.18    4.12    0.65   0.49   0.38   0.29
                   50     7.06    5.80    4.86    3.94    0.66   0.50   0.30   0.29
                   100    6.70    5.57    4.71    3.86    0.67   0.50   0.30   0.29
                   250    6.52    5.45    4.63    3.81    0.67   0.51   0.39   0.30
                   500    6.47    5.41    4.61    3.79    0.67   0.51   0.39   0.30
                   >500   6.43    5.38    4.59    3.78    0.67   0.51   0.40   0.30
III    ADFFα,β,ρ   25     8.21    6.75    5.68    4.67    1.10   0.89   0.75   0.61
                   50     7.02    5.94    5.13    4.31    1.12   0.91   0.77   0.62
                   100    6.50    5.59    4.88    4.16    1.12   0.92   0.77   0.63
                   250    6.22    5.40    4.75    4.07    1.13   0.92   0.77   0.63
                   500    6.15    5.35    4.71    4.05    1.13   0.92   0.77   0.63
                   >500   6.09    5.31    4.68    4.03    1.13   0.92   0.77   0.63
III    ADFFβ,ρ     25     10.61   8.65    7.24    5.91    1.33   1.08   0.90   0.74
                   50     9.31    7.81    6.73    5.61    1.37   1.11   0.93   0.76
                   100    8.73    7.44    6.49    5.47    1.38   1.12   0.94   0.76
                   250    8.43    7.25    6.34    5.39    1.39   1.13   0.94   0.76
                   500    8.34    7.20    6.30    5.36    1.39   1.13   0.94   0.76
                   >500   8.27    7.16    6.25    5.34    1.39   1.13   0.94   0.77
Probability to the Right of Critical Value

              Model I (EG)            Augmented Model I (AEG)
N     K       99%     95%     90%     99%     95%     90%
50    2       -4.32   -3.67   -3.28   -4.12   -3.29   -2.90
100   2       -4.07   -3.37   -3.03   -3.73   -3.17   -2.91
200   2       -4.00   -3.37   -3.02   -3.78   -3.25   -2.98
50    3       -4.84   -4.11   -3.73   -4.45   -3.75   -3.36
100   3       -4.45   -3.93   -3.59   -4.22   -3.62   -3.32
200   3       -4.35   -3.78   -3.47   -4.34   -3.78   -3.51
50    4       -4.94   -4.35   -4.02   -4.61   -3.98   -3.67
100   4       -4.75   -4.22   -3.89   -4.61   -4.02   -3.71
200   4       -4.70   -4.18   -3.89   -4.72   -4.13   -3.83
50    5       -5.41   -4.76   -4.42   -4.80   -4.15   -3.85
100   5       -5.18   -4.58   -4.26   -4.98   -4.36   -4.06
200   5       -5.02   -4.48   -4.18   -4.97   -4.43   -4.14
Critical values for unit root and cointegration tests can be computed from the equation:
CV(K, Model, N, sig) = b + b1*(1/N) + b2*(1/N)²
Notation:
Regression Model: 1=no constant; 2=no trend; 3=with trend;
K: Number of variables in cointegration tests (K=1 for unit root test);
N: Number of observations or sample size;
sig: Level of significance, 0.01, 0.05, 0.1.
Source:
J. G. MacKinnon, "Critical Values for Cointegration Tests," Cointegrated Time
Series, 267-276.
K   Model   sig    b         b1        b2
1   1       0.01   -2.5658    -1.960   -10.04
1   1       0.05   -1.9393    -0.398     0.00
1   1       0.10   -1.6156    -0.181     0.00
1   2       0.01   -3.4335    -5.999   -29.25
1   2       0.05   -2.8621    -2.738    -8.36
1   2       0.10   -2.5671    -1.438    -4.48
1   3       0.01   -3.9638    -8.353   -47.44
1   3       0.05   -3.4126    -4.039   -17.83
1   3       0.10   -3.1279    -2.418    -7.58
2   2       0.01   -3.9001   -10.534   -30.03
2   2       0.05   -3.3377    -5.967    -8.98
2   2       0.10   -3.0462    -4.069    -5.73
2   3       0.01   -4.3266   -15.531   -34.03
2   3       0.05   -3.7809    -9.421   -15.06
2   3       0.10   -3.4959    -7.203    -4.01
3   2       0.01   -4.2981   -13.790   -46.37
3   2       0.05   -3.7429    -8.352   -13.41
3   2       0.10   -3.4518    -6.241    -2.79
3   3       0.01   -4.6676   -18.492   -49.35
3   3       0.05   -4.1193   -12.024   -13.13
3   3       0.10   -3.8344    -9.188    -4.85
4   2       0.01   -4.6493   -17.188   -59.20
4   2       0.05   -4.1000   -10.745   -21.57
4   2       0.10   -3.8110    -8.317    -5.19
4   3       0.01   -4.9695   -22.504   -50.22
4   3       0.05   -4.4294   -14.501   -19.54
4   3       0.10   -4.1474   -11.165    -9.88
5   2       0.01   -4.9587   -22.140   -37.29
5   2       0.05   -4.4185   -13.461   -21.16
5   2       0.10   -4.1327   -10.638    -5.48
5   3       0.01   -5.2497   -26.606   -49.56
5   3       0.05   -4.7154   -17.432   -16.50
5   3       0.10   -4.4345   -13.654    -5.77
6   2       0.01   -5.2400   -26.278   -41.65
6   2       0.05   -4.7048   -17.120   -11.17
6   2       0.10   -4.4242   -13.347     0.00
6   3       0.01   -5.5127   -30.735   -52.50
6   3       0.05   -4.9767   -20.883    -9.05
6   3       0.10   -4.6999   -16.445     0.00
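The response-surface formula is easy to apply directly; a minimal sketch using the row K=1, Model 2, sig=0.05 from the table above:

```python
# MacKinnon response-surface critical value: CV = b + b1/N + b2/N^2.
def mackinnon_cv(b, b1, b2, N):
    return b + b1 / N + b2 / N ** 2

# 5% critical value for a unit root test with constant (K=1, Model 2), N=100
cv = mackinnon_cv(-2.8621, -2.738, -8.36, 100)
```

This evaluates to about -2.89, which agrees with the Model II ADFτρ entry for N = 100 in the critical-value table above.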
Probability to the Right of Critical Value

Statistic  Model  M-r    99%      97.5%    95%      90%      80%      50%
λ          1      1      6.51     4.93     3.84     2.86     1.82     0.58
           1      2      15.69    13.27    11.44    9.52     7.58     4.83
           1      3      22.99    20.02    17.89    15.59    13.31    9.71
           1      4      28.82    26.14    23.80    21.58    18.97    14.94
           1      5      35.17    32.51    30.04    27.62    24.83    20.16
           2      1      11.576   9.658    8.083    6.691    4.905    2.415
           2      2      18.782   16.403   14.595   12.783   10.666   7.474
           2      3      16.154   23.362   21.279   18.959   16.521   12.707
           2      4      32.616   29.599   27.341   24.917   22.341   17.875
           2      5      38.858   35.700   33.262   30.818   27.953   23.132
           3      1      6.936    5.332    3.962    2.816    1.699    0.447
           3      2      17.936   15.810   14.036   12.099   10.125   6.852
           3      3      25.521   23.002   20.778   18.697   16.324   12.381
           3      4      31.943   29.335   27.169   24.712   22.113   17.719
           3      5      38.341   35.546   33.178   30.774   27.899   23.211
λtrace     1      1      6.51     4.93     3.84     2.86     1.82     0.58
           1      2      16.31    14.43    12.53    10.47    8.45     5.42
           1      3      29.75    26.64    24.31    21.63    18.83    14.30
           1      4      45.58    42.30    39.89    36.58    33.16    27.10
           1      5      66.52    62.91    59.46    55.44    51.13    43.79
           2      1      11.586   9.658    8.083    6.691    4.905    2.415
           2      2      21.962   19.611   17.844   15.583   13.038   9.355
           2      3      37.291   34.062   31.256   28.436   25.445   20.188
           2      4      55.551   51.801   48.419   45.248   41.623   34.873
           2      5      77.911   73.031   69.977   65.956   61.566   53.373
           3      1      6.936    5.332    3.962    2.816    1.699    0.447
           3      2      19.310   17.299   15.197   13.338   11.164   7.638
           3      3      35.397   32.313   29.509   26.791   23.868   18.759
           3      4      53.792   50.424   47.181   43.964   40.250   33.672
           3      5      76.955   72.140   68.905   65.063   60.215   52.588