Topic 1. Covariance Stationarity Conditions
Topic 2. Autocovariance and Autocorrelation Functions
Topic 3. White Noise
Topic 4. Time Series Forecasting
Time Series: A time series is data collected over regular time periods (e.g., monthly S&P 500 returns, quarterly dividends paid by a company, etc.).
Time series data have a trend (the component that changes steadily over time), seasonality (systematic changes that occur at specific times of the year), and cyclicality (changes that occur over time cycles).
The cyclical component is our core focus for this chapter. It can be decomposed into shocks and persistence components.
Covariance Stationary Time Series: A time series in which the relationships among its present and past values remain stable over time.
For a time series to be covariance stationary, it must exhibit three properties:
Its mean must be stable over time.
Its variance must be finite and stable over time.
Its covariance structure must be stable over time.
Covariance Structure: Refers to the covariances among the values of a time series at its various lags.
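As a minimal sketch of these three conditions (in Python with numpy and pandas, tooling the notes do not specify and which is assumed here), the snippet below contrasts a covariance stationary AR(1) series with a nonstationary random walk by comparing rolling means and variances over successive windows.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
eps = rng.normal(0.0, 1.0, n)            # white noise shocks

# Covariance stationary AR(1): y_t = 0.5 * y_(t-1) + eps_t
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

# Nonstationary random walk: y_t = y_(t-1) + eps_t
walk = np.cumsum(eps)

for name, series in [("AR(1)", ar1), ("random walk", walk)]:
    s = pd.Series(series)
    # A covariance stationary series should show a stable rolling mean and variance.
    print(name,
          "| rolling means:", s.rolling(250).mean().iloc[[249, 499, 749, 999]].round(2).tolist(),
          "| rolling variances:", s.rolling(250).var().iloc[[249, 499, 749, 999]].round(2).tolist())
```

The AR(1) windows hover around a common mean and variance, while the random walk's drift in both statistics illustrates a failure of the stability conditions.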
Q1. The conditions for a time series to exhibit covariance stationarity are least likely to include:
A. a stable mean.
B. a finite variance.
C. a finite number of observations.
D. autocovariances that do not depend on time.
Explanation: C is correct.
In theory, a time series can be infinite in length and still be covariance stationary. To be covariance stationary, a time series must have a stable mean, a stable covariance structure (i.e., autocovariances depend only on displacement, not on time), and a finite variance that is stable over time.
Autocovariance Function: The covariance between the current value of a time series and its value τ periods in the past is its autocovariance at lag τ.
Autocorrelation Function (ACF): To convert an autocovariance function to an ACF, divide the autocovariance at each τ by the variance of the time series.
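The sketch below (Python with numpy, an assumed toolchain) illustrates this conversion: it computes sample autocovariances of a simulated series at several lags and divides each by the lag-0 autocovariance (the variance) to obtain the ACF.

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulate an AR(1) series so the ACF has something to show
y = np.zeros(500)
eps = rng.normal(size=500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + eps[t]

y_dm = y - y.mean()                      # demeaned series
n = len(y)

def autocov(series, tau):
    """Sample autocovariance between y_t and y_(t-tau)."""
    return np.sum(series[tau:] * series[:n - tau]) / n

gamma0 = autocov(y_dm, 0)                # lag-0 autocovariance = sample variance
for tau in range(0, 6):
    gamma = autocov(y_dm, tau)
    print(f"lag {tau}: autocovariance = {gamma:.3f}, autocorrelation = {gamma / gamma0:.3f}")
```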
Q2. As the number of lags or displacements becomes large, autocorrelation functions (ACFs) will approach:
A. −1.
B. 0.
C. 0.5.
D. +1.
Explanation: B is correct.
One feature that all ACFs have in common is that autocorrelations approach zero as the number of lags or displacements gets large.
Q3. Which of the following statements about white noise is most accurate?
A. All serially uncorrelated processes are white noise.
B. All Gaussian white noise processes are independent white noise.
C. All independent white noise processes are Gaussian white noise.
D. All serially correlated Gaussian processes are independent white noise.
Explanation: B is correct.
If a white noise process is Gaussian (i.e., normally distributed), it follows that the process is independent white noise. However, the reverse is not true; there can be independent white noise processes that are not normally distributed. Only those serially uncorrelated processes that have a zero mean and constant variance are white noise.
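A minimal sketch (Python/numpy assumed) of Gaussian white noise, checking the defining properties named above: zero mean, constant variance, and sample autocorrelations near zero at every nonzero lag.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(loc=0.0, scale=1.0, size=10_000)    # Gaussian white noise

print("sample mean:", round(eps.mean(), 3))           # close to 0
print("sample variance:", round(eps.var(), 3))        # close to 1, constant by construction

eps_dm = eps - eps.mean()
for tau in (1, 2, 5, 10):
    rho = np.sum(eps_dm[tau:] * eps_dm[:-tau]) / np.sum(eps_dm ** 2)
    print(f"sample autocorrelation at lag {tau}: {rho:+.3f}")   # all near 0
```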
By Wold's theorem, any covariance stationary series can be expressed as an infinite weighted sum of current and lagged white noise shocks: y_t = ε_t + b_1ε_{t−1} + b_2ε_{t−2} + …. Because this expression can be applied to any covariance stationary series, it is known as the general linear process.
Topic 1. Autoregressive Processes
Topic 2. Estimating Autoregressive Parameters using Yule-Walker Equation
Topic 3. Moving Average (MA) Processes
Topic 4. Lag Operators
Q4. Which of the following conditions is necessary for an autoregressive (AR) process to be covariance stationary?
A. The value of the lag slope coefficients should add to 1.
B. The value of the lag slope coefficients should all be less than 1.
C. The absolute value of the lag slope coefficients should be less than 1.
D. The sum of the lag slope coefficients should be less than 1.
Explanation: D is correct.
In order for an AR process to be covariance stationary, the sum of the lag slope coefficients must be less than 1.
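As a sketch of how AR stationarity can be checked in practice (using statsmodels' ArmaProcess, an assumed tool), one can inspect whether the roots of the AR lag polynomial lie outside the unit circle; the example below compares an AR(2) whose coefficients sum to 0.8 with one whose coefficients sum to 1.3.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# ArmaProcess takes the lag-polynomial coefficients, with the AR terms negated:
# y_t = 0.5*y_(t-1) + 0.3*y_(t-2) + eps_t  ->  (1 - 0.5L - 0.3L^2)
stationary_ar = ArmaProcess(ar=[1, -0.5, -0.3], ma=[1])
explosive_ar = ArmaProcess(ar=[1, -0.7, -0.6], ma=[1])   # coefficients sum to 1.3

for name, proc in [("phi = (0.5, 0.3)", stationary_ar), ("phi = (0.7, 0.6)", explosive_ar)]:
    print(name,
          "| AR root magnitudes:", np.round(np.abs(proc.arroots), 3),
          "| stationary:", proc.isstationary)
```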
In an MA(1) process, the autocorrelation is zero at every lag beyond the first lagged error term. An MA(1) process also satisfies the conditions for covariance stationarity.
A more general form, MA(q), incorporates q lags of the shock:
y_t = μ + ε_t + θ_1ε_{t−1} + θ_2ε_{t−2} + … + θ_qε_{t−q}
The mean of MA(q) is still μ, but the variance becomes σ²(1 + θ_1² + θ_2² + … + θ_q²), where σ² is the variance of the white noise shock.
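A short simulation sketch (Python/numpy assumed) comparing the sample mean and variance of an MA(2) series against these formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, theta1, theta2, sigma = 2.0, 0.6, 0.3, 1.5
eps = rng.normal(0.0, sigma, 200_000)

# MA(2): y_t = mu + eps_t + theta1*eps_(t-1) + theta2*eps_(t-2)
y = mu + eps[2:] + theta1 * eps[1:-1] + theta2 * eps[:-2]

print("sample mean:", round(y.mean(), 3), "| theory:", mu)
print("sample variance:", round(y.var(), 3),
      "| theory:", round(sigma**2 * (1 + theta1**2 + theta2**2), 3))
```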
Q5. Which of the following statements is a key differentiator between a moving average (MA) representation and an autoregressive (AR) process?
A. An MA representation shows evidence of autocorrelation cutoff.
B. An AR process shows evidence of autocorrelation cutoff.
C. An unadjusted MA process shows evidence of gradual autocorrelation decay.
D. An AR process is never covariance stationary.
Explanation: A is correct.
A key difference between an MA representation and an AR process is that the MA process shows autocorrelation cutoff while an AR process shows a gradual decay in autocorrelations.
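This contrast can be illustrated with statsmodels' ArmaProcess (assumed tooling): the theoretical ACF of an MA(1) cuts off after lag 1, while the ACF of an AR(1) decays gradually.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

ma1 = ArmaProcess(ar=[1], ma=[1, 0.7])        # MA(1) with theta = 0.7
ar1 = ArmaProcess(ar=[1, -0.7], ma=[1])       # AR(1) with phi = 0.7

print("MA(1) ACF:", np.round(ma1.acf(lags=6), 3))   # nonzero at lag 1, then cuts off to 0
print("AR(1) ACF:", np.round(ar1.acf(lags=6), 3))   # decays gradually: 0.7, 0.49, 0.343, ...
```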
Q6. Assume in an autoregressive [AR(1)] process that the coefficient for the lagged observation of the variable being estimated is equal to 0.75. According to the Yule-Walker equation, what is the second-period autocorrelation?
A. 0.375.
B. 0.5625.
C. 0.75.
D. 0.866.
Explanation: B is correct.
The coefficient is equal to 0.75, so using the result derived from the Yule-Walker equations, the first-period autocorrelation is 0.75 (i.e., 0.75¹), and the second-period autocorrelation is 0.75² = 0.5625.
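A tiny worked check of this Yule-Walker result for an AR(1): the autocorrelation at lag τ equals the lag coefficient raised to the power τ (Python sketch, cross-checked against statsmodels' ArmaProcess, an assumed tool).

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

phi = 0.75
# For an AR(1), the Yule-Walker equations give rho(tau) = phi ** tau
print("lag 1 autocorrelation:", phi ** 1)      # 0.75
print("lag 2 autocorrelation:", phi ** 2)      # 0.5625

# Cross-check against the theoretical ACF of the AR(1) process
theoretical = ArmaProcess(ar=[1, -phi], ma=[1]).acf(lags=3)
print("theoretical ACF (lags 0-2):", np.round(theoretical, 4))   # [1.0, 0.75, 0.5625]
```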
Q7. Which of the following statements is most likely a purpose of the lag operator?
A. A lag operator ensures that the parameter estimates are consistent.
B. An autoregressive (AR) process is covariance stationary only if its lag polynomial is invertible.
C. Lag polynomials can be multiplied.
D. A lag operator ensures that the parameter estimates are unbiased.
Explanation: B is correct.
There are two main purposes of using a lag operator. First, an AR process is covariance stationary only if its lag polynomial is invertible. Second, this invertibility is used in the Box-Jenkins methodology to select the appropriate time series model.
Topic 1. Autoregressive Moving Average (ARMA) Processes
Topic 2. Application of AR, MA, and ARMA Processes
Topic 3. Sample and Partial Autocorrelations
Topic 4. Testing Autocorrelations
Topic 5. Modeling Seasonality in an ARMA
Q8. Which of the following statements about an autoregressive moving average (ARMA) process is correct?
I. It involves autocorrelations that decay gradually.
II. It combines the lagged unobservable random shock of the MA process with the observed lagged time series of the AR process.
A. I only.
B. II only.
C. Both I and II.
D. Neither I nor II.
Explanation: C is correct.
The ARMA process is important because its autocorrelations decay gradually and because it captures a more robust picture of a variable being estimated by including both lagged random shocks and lagged observations of the variable being estimated. The ARMA model merges the lagged random shocks from the MA process and the lagged time series variables from the AR process.
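A minimal sketch of estimating an ARMA(1,1) on simulated data, using statsmodels' ARIMA class with differencing order zero (assumed tooling; the parameter values are illustrative only).

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(1,1): y_t = 0.6*y_(t-1) + eps_t + 0.4*eps_(t-1)
proc = ArmaProcess(ar=[1, -0.6], ma=[1, 0.4])
y = proc.generate_sample(nsample=2000, scale=1.0)

# Fit an ARMA(1,1): ARIMA order (p, d, q) with d = 0
model = ARIMA(y, order=(1, 0, 1)).fit()
print(model.params)    # estimated constant, AR(1), MA(1), and shock-variance terms
```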
Q9. Which of the following statements is correct regarding the usefulness of an autoregressive (AR) process and an autoregressive moving average (ARMA) process when modeling seasonal data?
I. They both include lagged terms and, therefore, can better capture a relationship in motion.
II. They both specialize in capturing only the random movements in time series data.
A. I only.
B. II only.
C. Both I and II.
D. Neither I nor II.
Explanation: A is correct.
Both AR models and ARMA models are good at forecasting with seasonal patterns because they both involve lagged observable variables, which are best for capturing a relationship in motion. It is the moving average representation that is best at capturing only random movements.
Q10. To test the hypothesis that the autocorrelations of a time series are jointly equal to zero based on a small sample, an analyst should most appropriately calculate:
A. a Ljung-Box (LB) Q-statistic.
B. a Box-Pierce (BP) Q-statistic.
C. either a Ljung-Box (LB) or a Box-Pierce (BP) Q-statistic.
D. neither a Ljung-Box (LB) nor a Box-Pierce (BP) Q-statistic.
Explanation: A is correct.
The LB Q-statistic is a small-sample refinement of the BP Q-statistic, so it is the appropriate choice for testing whether the autocorrelations are jointly equal to zero when the sample is small.
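A sketch of computing both statistics (Python with statsmodels assumed); acorr_ljungbox can report the Box-Pierce variant alongside the small-sample-adjusted Ljung-Box statistic.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
y = rng.normal(size=60)        # a deliberately small sample of white noise

# boxpierce=True adds the BP statistic next to the (small-sample-adjusted) LB statistic
result = acorr_ljungbox(y, lags=[5, 10], boxpierce=True)
print(result)                  # columns: lb_stat, lb_pvalue, bp_stat, bp_pvalue
```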