Stationarity and Unit Root Testing
- The stationarity or otherwise of a series can strongly influence its behaviour and properties - e.g. the persistence of shocks will be infinite for a nonstationary series.
- Spurious regressions: if two variables are trending over time, a regression of one on the other could have a high R² even if the two are totally unrelated.
- If the variables in the regression model are not stationary, then it can be proved that the standard assumptions for asymptotic analysis will not be valid. In other words, the usual "t-ratios" will not follow a t-distribution, so we cannot validly undertake hypothesis tests about the regression parameters.
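As a rough illustration of the spurious regression problem (not part of the original notes), the short simulation below regresses one independently generated random walk on another and reports the R². The function names and seed are illustrative choices; the point is that the levels regression can show a sizeable R² even though the two series are unrelated, while the regression in first differences typically does not.

```python
import random

random.seed(42)

def random_walk(n):
    """Cumulative sum of i.i.d. N(0,1) shocks: y_t = y_{t-1} + e_t."""
    y, level = [], 0.0
    for _ in range(n):
        level += random.gauss(0.0, 1.0)
        y.append(level)
    return y

def r_squared(y, x):
    """R-squared from the OLS regression y = a + b*x + e (with intercept)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Two completely unrelated random walks.
y = random_walk(500)
x = random_walk(500)

# Levels regression: R² is often large purely because both series trend.
print(f"R-squared of y on x (levels): {r_squared(y, x):.3f}")

# Differenced regression: with the trends removed, R² is typically near zero.
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
dx = [x[t] - x[t - 1] for t in range(1, len(x))]
print(f"R-squared of dy on dx (differences): {r_squared(dy, dx):.3f}")
```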

Stationary and Non-stationary Time Series
  1. Stationary Time Series
- A series is said to be stationary if the mean and autocovariances of the series do not depend on time.
(A) Strictly Stationary
- For a strictly stationary time series the distribution of y(t) is independent of t. Thus it is not just the mean and variance that are constant: all higher-order moments are independent of t.
(B) Weakly Stationary
- A time series is said to be weakly stationary if its mean is constant and its autocovariance function depends only on the lag. No assumptions are made about higher-order moments.
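The weak-stationarity definition can be made concrete with the sample autocovariance function. The sketch below (an illustration added to these notes, using an arbitrary seed) computes γ(k) for simulated white noise, which is weakly stationary: γ(0) is close to the variance and γ(k) is close to zero for k > 0.

```python
import random

random.seed(1)

def autocovariance(y, k):
    """Sample autocovariance at lag k: average of (y_t - ybar)(y_{t+k} - ybar)."""
    n = len(y)
    ybar = sum(y) / n
    return sum((y[t] - ybar) * (y[t + k] - ybar) for t in range(n - k)) / n

# White noise with unit variance is weakly stationary.
e = [random.gauss(0.0, 1.0) for _ in range(5000)]

# gamma(0) should be near 1; gamma(k) near 0 for k > 0.
print([round(autocovariance(e, k), 3) for k in range(4)])
```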
  2. Nonstationary Time Series
- Most of the time series we encounter are nonstationary. Any series that is not stationary is said to be nonstationary. A simple nonstationary time-series model is y_t = μ_t + x_t, where the mean μ_t is a function of time and x_t is a weakly stationary series.

The Random Walk (A nonstationary series)
1.            The random walk, y_t = y_{t-1} + ε_t, is a difference stationary series since the first difference of y is stationary: Δy_t = y_t - y_{t-1} = ε_t.
2.            A difference stationary series is said to be integrated and is denoted as I(d), where d is the order of integration.
3.            The order of integration is the number of unit roots contained in the series, or the number of differencing operations it takes to make the series stationary.  Each integration order corresponds to differencing the series being forecast.  A first-order integrated component means that the forecasting model is designed for the first difference of the original series. For example, a stationary series is I(0). The random walk is nonstationary with one unit root: I(1).
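The points above can be sketched in a few lines of Python (an added illustration, with an arbitrary seed): a simulated random walk is I(1), and its first difference recovers the I(0) white-noise shocks, whose variance is roughly constant across subsamples.

```python
import random
from statistics import pvariance

random.seed(7)

# Build a random walk y_t = y_{t-1} + e_t from N(0,1) shocks.
e = [random.gauss(0.0, 1.0) for _ in range(2000)]
y, level = [], 0.0
for shock in e:
    level += shock
    y.append(level)          # y is I(1): one unit root

# Differencing once recovers the shocks, which are I(0).
dy = [y[t] - y[t - 1] for t in range(1, len(y))]

half = len(dy) // 2
# The level series has no stable variance; subsample variances typically differ a lot.
print("level variances:", pvariance(y[:half]), pvariance(y[half:]))
# The differenced series has variance close to 1 in both halves.
print("difference variances:", pvariance(dy[:half]), pvariance(dy[half:]))
```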

Unit Root Tests
- Standard inference procedures do not apply to regressions which contain an integrated dependent variable or integrated regressors.  Therefore, it is important to check whether a series is stationary or not before using it in a regression.  The formal method to test the stationarity of a series is the unit root test.
  1. The Dickey-Fuller (DF) Test
  2. The Augmented Dickey-Fuller (ADF) Test
  3. The Phillips-Perron (PP) Test
As with the ADF test, you have to specify whether to include a constant, a constant and linear trend, or neither in the test regression.
- The DF and ADF tests are frequently used in testing for unit roots, although there are several problems (size distortions and low power).  With the ADF test there is also the problem of selecting the lag length k; the AIC and SBC are often used, but they have been found to select a low value of k.
The Phillips-Perron (PP) Test
- Phillips and Perron (1988) propose a nonparametric method of controlling for higher-order serial correlation in a series.  The test regression for the PP test is the AR(1) process: y_t = α + ρ y_{t-1} + ε_t.
- While the ADF test corrects for higher-order serial correlation by adding lagged differenced terms on the right-hand side, the PP test makes a correction to the t-statistic of the ρ coefficient from the AR(1) regression to account for the serial correlation in ε_t.

Performing Unit Root Tests
c) Include in test equation: Intercept, Trend and intercept, None. Note that the choice is important since the distribution of the test statistic under the null hypothesis differs among these three cases. After running a unit root test, you should examine the estimated test regression reported by EViews, especially if you are not sure about the lag structure or deterministic trend in the series.  You may want to re-run the test equation with a different selection of right-hand variables (add or delete the constant, trend, or lagged differences) or lag order.
d) Lagged differences.
- The null hypothesis of a unit root is rejected against the one-sided alternative if the t-statistic is less than (i.e. more negative than) the critical value. We reject the null hypothesis of a unit root in the CS at any of the reported significance levels.
- For the ADF test, the test statistic is the t-statistic for the lagged dependent variable in the test regression, reported at the bottom part of the table.  For the PP test, the test statistic is a modified t-statistic as described above.
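To make the mechanics concrete, here is a minimal Dickey-Fuller test sketch (an added illustration, not from EViews): a regression Δy_t = α + γ y_{t-1} + ε_t with a constant only, no trend and no lagged differences, whose t-ratio on γ is compared with the asymptotic 5% Dickey-Fuller critical value of about -2.86. The helper names and seed are arbitrary.

```python
import math
import random

DF_CRIT_5PCT = -2.86  # asymptotic 5% critical value (constant, no trend)

def df_tstat(y):
    """t-ratio on gamma in the OLS regression dy_t = alpha + gamma*y_{t-1} + e_t."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    n = len(dy)
    mx, my = sum(ylag) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in ylag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(ylag, dy))
    gamma = sxy / sxx
    alpha = my - gamma * mx
    ssr = sum((d - alpha - gamma * x) ** 2 for x, d in zip(ylag, dy))
    se_gamma = math.sqrt(ssr / (n - 2) / sxx)
    return gamma / se_gamma

# A simulated random walk: the null of a unit root is true,
# so the test should usually NOT reject.
random.seed(0)
walk, level = [], 0.0
for _ in range(500):
    level += random.gauss(0.0, 1.0)
    walk.append(level)

stat = df_tstat(walk)
print(f"DF t-statistic: {stat:.2f}  (reject unit root if below {DF_CRIT_5PCT})")
```

Note that the t-ratio is not compared with ordinary Student-t critical values: under the unit-root null its distribution is nonstandard, which is exactly why the Dickey-Fuller tables are needed.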

Characteristics of I(0), I(1) and I(2) Series
- An I(2) series contains two unit roots and so would require differencing twice to induce stationarity.
- I(1) and I(2) series can wander a long way from their mean value and cross this mean value rarely.
- I(0) series should cross the mean frequently.
- The majority of economic and financial series contain a single unit root, although some are stationary and consumer prices have been argued to have two unit roots.
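The mean-crossing property above is easy to check by simulation (an added sketch with an arbitrary seed): white noise, an I(0) series, crosses its sample mean constantly, while a random walk, an I(1) series, crosses its mean only rarely.

```python
import random

random.seed(11)

def crossings(y):
    """Count how many times the series crosses its own sample mean."""
    m = sum(y) / len(y)
    above = [yt > m for yt in y]
    return sum(1 for a, b in zip(above, above[1:]) if a != b)

n = 2000
white = [random.gauss(0.0, 1.0) for _ in range(n)]   # I(0) series
walk, level = [], 0.0
for shock in white:
    level += shock
    walk.append(level)                               # I(1) series

print("I(0) mean crossings:", crossings(white))      # on the order of n/2
print("I(1) mean crossings:", crossings(walk))       # far fewer
```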

Criticism of Dickey-Fuller and Phillips-Perron-type tests
·         The main criticism is that the power of the tests is low if the process is stationary but with a root close to the nonstationary boundary, e.g. the tests are poor at deciding whether
                                                                φ = 1 or φ = 0.95,
                especially with small sample sizes.
·         If the true data generating process (dgp) is
                                                                y_t = 0.95 y_{t-1} + u_t
                then the null hypothesis of a unit root should be rejected, but in small samples the tests often fail to do so.
·         One way to get around this is to use a stationarity test (one with stationarity as the null) as well as the unit root tests we have looked at.
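The low-power point can be illustrated with a small Monte Carlo sketch (added here, with arbitrary replication count, sample size and seed): generate stationary AR(1) data with φ = 0.95, so the unit-root null is false, apply a simple Dickey-Fuller test with a constant at the asymptotic 5% critical value of about -2.86, and count how often the false null is actually rejected. The rejection rate is typically well below one.

```python
import math
import random

random.seed(2024)
DF_CRIT_5PCT = -2.86  # asymptotic 5% critical value (constant, no trend)

def df_tstat(y):
    """t-ratio on gamma in the OLS regression dy_t = alpha + gamma*y_{t-1} + e_t."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    n = len(dy)
    mx, my = sum(ylag) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in ylag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(ylag, dy))
    gamma = sxy / sxx
    alpha = my - gamma * mx
    ssr = sum((d - alpha - gamma * x) ** 2 for x, d in zip(ylag, dy))
    return gamma / math.sqrt(ssr / (n - 2) / sxx)

def simulate_ar1(phi, n):
    """Stationary AR(1): y_t = phi*y_{t-1} + u_t with N(0,1) shocks."""
    y, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + random.gauss(0.0, 1.0)
        y.append(prev)
    return y

reps, T = 200, 100
rejections = sum(df_tstat(simulate_ar1(0.95, T)) < DF_CRIT_5PCT
                 for _ in range(reps))
print(f"Rejection rate with phi = 0.95, T = {T}: {rejections / reps:.2f}")
```

Even though every simulated series is stationary, the test rejects the unit-root null in only a modest fraction of replications at T = 100, which is the low-power problem the criticism describes.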

Presented by Dr. Babar Zaheer Butt to the MS/Ph.D students at Iqra University Islamabad