Error Statistics for the Survey of Professional Forecasters for CPI Inflation Rate
Release Date: 05/24/2024
Tom Stark
Assistant Vice President | Assistant Director
Real-Time Data Research Center
Economic Research Department
Federal Reserve Bank of Philadelphia
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
1. OVERVIEW.
This document reports error statistics for median projections from the Survey of Professional
Forecasters (SPF), conducted since 1990 by the Federal Reserve Bank of Philadelphia. We provide
the results in a series of tables and, in the PDF version of this document, a number of charts. The
tables show the survey variable forecast and, importantly, the transformation of the data that we used to
generate the statistics. (The transformation is usually a quarter-over-quarter growth rate, expressed
in annualized percentage points. However, some variables, such as interest rates, the unemployment rate,
and housing starts, are untransformed and, thus, expressed in their natural units.)
The paragraphs below explain the format of the tables and charts and the methods used to compute the
statistics. These paragraphs are general. The same discussion applies to all variables in the survey.
2. DESCRIPTION OF TABLES.
Tables 1A-1B report error statistics for various forecast horizons, sample periods, and choices of the
real-time historical value that we used to assess accuracy. In each quarterly survey, we ask our
panelists for their projections for the current quarter and the next four quarters. The current quarter
is defined as the quarter in which we conducted the survey. Our tables provide error statistics separately
for each quarter of this five-quarter horizon, beginning with the current quarter (denoted H = 1) and ending
with the quarter that is four quarters in the future (H = 5). For each horizon, we report the mean forecast
error [ME(S)], the mean absolute forecast error [MAE(S)], and the root-mean-square error [RMSE(S)].
All are standard measures of accuracy, though the academic literature generally places the most weight on
the root-mean-square error.
We define a forecast error as the difference between the historical value and the forecast. The mean error
for each horizon is simply the average of the forecast errors at that horizon, constructed over the sample
periods shown in Table 1A. Other things the same, a forecast with a mean error close to zero is better than
one with a mean error far from zero. The mean absolute error is the sample average of the absolute value
of the errors. Many analysts prefer this measure to the mean error because it does not allow large positive
errors to offset large negative errors. In this sense, the mean absolute error gives a cleaner estimate
of the size of the errors. Decision makers, however, may care not only about the average size of the
errors but also about their variability, as measured by variance. Our last measure of accuracy is one that
reflects the influence of the mean error and the variance of the error. The root-mean-square error for
the SPF [RMSE(S)], the measure most often used by analysts and academicians, is the square root of
the average squared error. The lower the root-mean-square error, the more accurate the forecast.
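For concreteness, the three accuracy measures can be computed as follows. This is an illustrative sketch, not the program used to produce the tables; the function name and interface are our own.

```python
import numpy as np

def error_stats(realizations, forecasts):
    """Return (ME, MAE, RMSE), with errors defined as realization - forecast."""
    e = np.asarray(realizations, dtype=float) - np.asarray(forecasts, dtype=float)
    me = e.mean()                    # mean error: signed bias
    mae = np.abs(e).mean()           # mean absolute error: no offsetting of signs
    rmse = np.sqrt((e ** 2).mean())  # root-mean-square error: reflects mean and variance
    return me, mae, rmse
```

Note that two large errors of opposite sign produce a mean error near zero but a large MAE and RMSE, which is why the latter two are preferred as measures of size.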
2.1. Benchmark Models.
The forecast error statistics from the SPF are of interest in their own right. However, it is often more
interesting to compare such statistics with those of alternative, or benchmark, forecasts. Tables 1A-1B
report four such comparisons. They show the ratio of the root-mean-square error of the SPF forecast to that
of four benchmark models. The benchmark models are statistical equations that we estimate on the data.
We use the equations to generate projections for the same horizons included in the survey. In effect, we
imagine standing back in time at each date when a survey was conducted and generating a separate forecast
with each benchmark model. We do this in the same way that a survey panelist would have done using his own
model.
Table 1A reports the root-mean-square-error ratios using as many observations as possible for each model.
The number of observations can differ from model to model. We first compute the RMSE for each model. We
then construct the ratio.
Table 1B reports RMSE ratios after we adjust the samples to include only the observations common to
the SPF and the benchmark in each comparison. Accordingly, the ratios reported in Table 1B may differ slightly from
those of Table 1A, depending on the availability of sufficient real-time observations for estimating
the benchmark models or for computing the errors of the SPF or benchmark forecasts. Table 1B also reports
three two-sided p-values for each ratio. The p-values, corrected for the presence of heteroskedasticity
and serial correlation in the time series of differences in squared forecast errors, are those for
the test of equality of mean-square error between the SPF and the benchmark. The p-values are those for:
(1) The Diebold-Mariano statistic (July 1995, Journal of Business and Economic Statistics), using a
uniform lag window with the truncation lag set to the forecast horizon minus unity. When the
uniform lag window produces a nonpositive standard error, the Bartlett window is used.
(2) The Harvey-Leybourne-Newbold correction (1997, International Journal of Forecasting) to the
Diebold-Mariano statistic.
(3) The Diebold-Mariano statistic, using a Bartlett lag window with the truncation lag increased
four quarters beyond that of (1) and (2).
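A minimal sketch of the first of these p-values follows. It assumes squared-error loss and normal critical values, implements the uniform window with the Bartlett fallback described in (1), and omits the Harvey-Leybourne-Newbold correction; the function name and interface are ours, not the Center's.

```python
import math

def dm_pvalue(e_spf, e_bench, horizon):
    """Two-sided Diebold-Mariano p-value for equal mean-square error."""
    d = [a * a - b * b for a, b in zip(e_spf, e_bench)]  # loss differential
    n = len(d)
    dbar = sum(d) / n
    lag = horizon - 1  # truncation lag = forecast horizon minus one

    def lrv(weight):
        # long-run variance: gamma_0 + 2 * sum_k w(k) * gamma_k, 1/n scaling
        cov = lambda k: sum((d[t] - dbar) * (d[t - k] - dbar)
                            for t in range(k, n)) / n
        return cov(0) + 2.0 * sum(weight(k) * cov(k) for k in range(1, lag + 1))

    v = lrv(lambda k: 1.0)                        # uniform lag window
    if v <= 0.0:                                  # Bartlett fallback, as in (1)
        v = lrv(lambda k: 1.0 - k / (lag + 1.0))
    dm = dbar / math.sqrt(v / n)
    phi = 0.5 * (1.0 + math.erf(abs(dm) / math.sqrt(2.0)))  # normal CDF
    return 2.0 * (1.0 - phi)
```

A negative loss differential on average (SPF squared errors below the benchmark's) pushes the statistic away from zero and the p-value toward rejection of equal accuracy.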
An RMSE ratio below unity indicates that the SPF consensus (median) forecast has a root-mean-square error
lower than that of the benchmark; that is, the SPF is more accurate. We now describe the benchmark models.
The first is perhaps the simplest of all possible benchmarks: A no-change model. In this model, the forecast
for quarter T, the one-step-ahead or current-quarter forecast, is simply the historical value for the prior
quarter (T - 1). There is, in other words, no change in the forecast compared with the historical value.
Moreover, the forecast for the remaining quarters of the horizon is the same as the forecast for the current
quarter. We denote the relative RMSE ratio for this benchmark as RMSE(S/NC), using NC to indicate no change.
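In code, the no-change benchmark is a one-liner; a sketch with our own naming:

```python
def no_change_path(last_value, horizon=5):
    """No-change (NC) benchmark: the forecast for every step in the horizon
    equals the last observed historical value (quarter T - 1)."""
    return [last_value] * horizon
```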
The second and third benchmark models generate projections using one or more historical observations of
the variable forecast, weighted by coefficients estimated from the data. Such autoregressive (AR)
models can be formulated in two ways. We can estimate one model to generate the forecasts at all horizons,
using an iteration method to generate the projections beyond the current quarter (IAR), or we can directly
estimate a new model for each forecast horizon (DAR). The latter formulation has been shown to reduce the
bias in a forecast when the underlying model is characterized by certain types of misspecification. The
root-mean-square error ratios are denoted RMSE(S/IAR) and RMSE(S/DAR), respectively.
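The distinction between the iterated and direct formulations can be sketched with a small AR(p) estimated by ordinary least squares. This illustrates only the two forecasting methods; the function names, the OLS step, and the interface are ours, not the Center's specifications.

```python
import numpy as np

def _ols(X, y):
    # least-squares coefficients; a stand-in for the models' estimation step
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def iar_path(y, p, horizon):
    """Iterated AR(p): estimate one equation, then chain forecasts forward,
    feeding each forecast back in as a lag for the next step."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - j:len(y) - j] for j in range(1, p + 1)])
    beta = _ols(X, y[p:])
    hist, path = list(y), []
    for _ in range(horizon):
        x = np.r_[1.0, [hist[-j] for j in range(1, p + 1)]]
        path.append(float(x @ beta))
        hist.append(path[-1])
    return path

def dar_forecast(y, p, h):
    """Direct AR(p) for step h: re-estimate with regressors lagged h periods,
    so the h-step forecast requires no intermediate iteration."""
    y = np.asarray(y, dtype=float)
    start = p + h - 1
    X = np.column_stack([np.ones(len(y) - start)] +
                        [y[start - h - j + 1:len(y) - h - j + 1]
                         for j in range(1, p + 1)])
    beta = _ols(X, y[start:])
    x = np.r_[1.0, [y[len(y) - j] for j in range(1, p + 1)]]
    return float(x @ beta)
```

On a correctly specified AR process the two methods agree; they diverge, as the text notes, under misspecification, where the direct regressions can reduce bias at longer steps.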
The one- through five-step-ahead projections of the benchmark models use information on the quarterly
average of the variable forecast. The latest historical observation is for the quarter that is one quarter
before the quarter of the first projection in the horizon. In contrast, the panelists generate their
projections with the help of additional information. They submit their projections near the middle of each
quarter and hence have access to some monthly indicators for the first month of each quarter, when those
data are released before the survey deadline. This puts the projections of panelists for some variables
at an advantage relative to the corresponding benchmark projections. Moreover, the panelists may also
examine the very recent historical values of such monthly indicators in forming their projections for
quarterly averages. Such monthly statistical momentum represents an advantage not shared by the benchmark
models, which use only quarterly averages. For survey variables whose observations are reported at a
monthly frequency, such as interest rates, industrial production, housing starts, and unemployment, we
estimate and forecast a fourth benchmark model, the DARM. This model adds recent monthly historical values
to the specification of the DAR model. For the projections for unemployment, nonfarm payroll employment,
and interest rates, we add the values of monthly observations, beginning with that for the first month
of the first quarter of the forecast horizon. These values should be in the information set of the survey
panelists at the time they formed their projections. In contrast, for variables such as housing
starts and industrial production, we include only lagged values of monthly observations. For such
variables, the panelists would not have known the monthly observation for the first month of the first
quarter of the forecast horizon. In general, we find that adding monthly observations to the benchmark
DAR models improves accuracy. Indeed, for the projections for interest rates and the unemployment rate,
the accuracy of the benchmark DARM projections rivals that of the SPF projections.
2.2. Real-Time Data.
All benchmark models are estimated on a rolling, fixed window of 60 real-time quarterly observations.
Lag lengths, based on either the Akaike information criterion (AIC) or the Schwarz information
criterion (SIC), are re-estimated each period. The tables below indicate whether the lag length
was chosen by the AIC or SIC.
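Lag selection on the rolling window can be sketched as follows. The penalty terms show one common form of the AIC and SIC; the document does not publish the exact criterion formulas, so treat those details, and the function itself, as our assumptions.

```python
import numpy as np

def select_lag(series, max_p=4, window=60, use_sic=False):
    """Choose an AR lag length by AIC (or SIC) on the latest 60-observation
    rolling window, re-done each period as described in the text."""
    y = np.asarray(series, dtype=float)[-window:]
    best_p, best_crit = 1, np.inf
    for p in range(1, max_p + 1):
        X = np.column_stack([np.ones(len(y) - p)] +
                            [y[p - j:len(y) - j] for j in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        resid = y[p:] - X @ beta
        n = len(resid)
        penalty = (np.log(n) if use_sic else 2.0) * (p + 1)  # SIC penalizes more
        crit = n * np.log((resid ** 2).sum() / n) + penalty
        if crit < best_crit:
            best_p, best_crit = p, crit
    return best_p
```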
We would like to make the comparison between the SPF forecast and the forecasts of each benchmark as
fair as possible. Therefore, we must subject the benchmark models to the same data environment the
survey panelists faced when they made their projections. This is important because macroeconomic
data are revised often, and we do not want the benchmark models to use a data set that differs from the one
our panelists would have used. We estimate and forecast the benchmark models with real-time data from the
Philadelphia Fed real-time data set, using the vintage of data that the survey panelists would have had
at the time they generated their own projections. (For more information on the Philadelphia Fed
real-time data set, go to www.philadelphiafed.org/econ/forecast/real-time-data/.)
An open question in the literature on forecasting is: What version or vintage of the data should we use to
compute the errors? A closely related question is: What version of the data are professional forecasters
trying to predict? Our computations take no strong position on these questions. In Tables 1A - 1B, we
evaluate the projections (SPF and benchmark) with five alternative measures of the historical values, all
from the Philadelphia Fed real-time data set. These measures range from the initial-release values to the
values as we know them today. Altogether, we compute the forecast error statistics using the following
five alternative measures of historical values:
(1) The initial or first-release value;
(2) The revised value as it appears one quarter after the initial release;
(3) The revised value as it appears five quarters after the initial release;
(4) The revised value as it appears nine quarters after the initial release;
(5) The revised value as it appears today.
Each measure of the historical value has advantages and disadvantages. The initial-release value is the
first measure released by government statistical agencies. A forecaster might be very interested in this
measure because it enables him to evaluate his latest forecast soon after he generated it. However, early
releases of the data are often subject to large measurement error. Subsequent releases [(2) - (5)]
are more accurate, but they are available much later than the initial release. As we go from the first
measure to the fifth, we gain reliability at the cost of longer delays in availability.
The last two columns in Table 1A report the number of observations that we used to compute the error
statistics. Some observations are omitted because the data are missing in the real-time data set,
such as occurred when federal government statistical agencies closed in late 1995.
2.3. Recent Projections and Realizations.
Tables 2 to 7 provide information on recent projections and realizations. They show how we align the data
prior to computing the forecast errors that form the backbone of the computations in Tables 1A - 1B. Any
error can be written as the equation given by error = realization - forecast. For our computations, we must
be more precise because, for each projection (SPF and benchmarks), we have different periods forecast (T),
different forecast horizons (h), and several measures of the realization (m). Thus, we can define the
forecast error more precisely as
error( T, h, m ) = realization( T, m ) - forecast( T, h ).
Tables 2 to 7 are organized along these lines. Table 2 shows recent forecasts from the SPF. Each column
gives the projection for a different horizon or forecast step (h), beginning with that for the current
quarter, defined as the quarter in which we conducted the survey. The dates (T) given in the rows show the
periods forecast; for the current-quarter forecast, these coincide with the dates on which we conducted the
survey. Tables 3 to 6 report the
recent projections of the four benchmark models. They are organized in the same way as Table 2. Table 7
reports recent values of the five alternative realizations (m) we use to compute the error statistics.
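The alignment convention maps directly to a keyed lookup. In this toy example, realizations are keyed by (quarter forecast, measure) and forecasts by (quarter forecast, step); the forecast entries mirror the 2023:Q4 row of Table 2, while the realization value is invented for illustration.

```python
# illustrative data; only the forecast values echo Table 2
realizations = {("2023Q4", "initial"): 3.300}
forecasts = {("2023Q4", 1): 3.264,
             ("2023Q4", 2): 2.923}

def forecast_error(T, h, m):
    """error(T, h, m) = realization(T, m) - forecast(T, h)."""
    return realizations[(T, m)] - forecasts[(T, h)]
```

Because both series are dated at the quarter forecast, the step-1 and step-2 projections for the same T came from surveys conducted one quarter apart.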
2.4. Qualifications.
We note two minor qualifications to the methods discussed above. The first concerns the vintage of data
that we used to estimate and forecast the benchmark models for CPI inflation. The second concerns the five
measures of realizations used for the unemployment rate, nonfarm payroll employment, and CPI inflation. To
estimate and forecast the benchmark models for CPI inflation, we use the vintage of data that would have
been available in the middle of each quarter. This postdates by one month the vintage that SPF panelists
would have had at their disposal when they formed their projections.
To compute the realizations for unemployment, nonfarm payroll employment, and CPI inflation, we use the
vintages associated with the middle of each quarter. The measure that we call initial comes from this
vintage, even though the initial estimate was available in the vintage dated one month earlier. Thus,
for these variables, our initial estimate reflects some revision by government statistical agencies.
The effect for unemployment and CPI inflation is likely small. The effect could be somewhat larger for
nonfarm payroll employment.
3. DESCRIPTION OF GRAPHS.
3.1. Root-Mean-Square Errors.
For each sample period shown in Table 1A, we provide graphs of the root-mean-square error for the SPF forecast.
There is one page for each sample period. On each page, we plot (for each forecast horizon) the RMSE
on the y-axis. The x-axis shows the measure of the historical value that we used to compute the RMSE.
These range from the value on its initial release to the value one quarter later to the value as we know it
now (at the time we made the computation).
The graphs provide a tremendous amount of information. If we focus on a particular graph, we can see how
a change in the measure of the realization (x-axis) affects the root-mean-square-error measure of accuracy.
The effect is pronounced for some variables, such as real GDP and some of its components. For others,
there is little or no effect. For example, because the historical data on interest rates are not revised,
the estimated RMSE is the same in each case.
If we compare a particular point on one graph with the same point on another, we see how the forecast
horizon affects accuracy. In general, the RMSE rises (accuracy falls) as the forecast horizon lengthens.
Finally, if we compare a graph on one page with the corresponding graph on another page, we see how our
estimates of accuracy in the SPF change with the sample period. Periods characterized by a high degree of
economic turbulence will generally produce large RMSEs.
3.2. Fan Charts.
The last chart plots recent historical values and the latest SPF forecast. It also shows confidence
intervals for the forecast, based on back-of-the-envelope calculations. The historical values and
the SPF forecast are those associated with the latest vintage of data and survey, respectively,
available at the time we ran our computer programs. The confidence intervals are constructed under the
assumption that the historical forecast errors over the sample (shown in the footnote) follow a normal
distribution with a mean of zero and a variance given by the squared root-mean-square error. The latter
is estimated over the aforementioned sample, using the measure of history listed in the footnote.
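Under the stated normality assumption, the confidence band at each horizon follows directly from the RMSE. A sketch, using the standard library's normal distribution; the coverage level and function name are our choices:

```python
from statistics import NormalDist

def fan_band(point_forecast, rmse, coverage=0.90):
    """Interval assuming forecast errors ~ N(0, RMSE^2), as in the text."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2.0)  # ~1.645 for 90% coverage
    return point_forecast - z * rmse, point_forecast + z * rmse
```

Because the RMSE rises with the horizon, the band widens as the forecast extends further into the future, producing the familiar fan shape.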
4. SPECIAL TREATMENT OF COVID-19 EXTREME HISTORICAL OBSERVATIONS.
Many macroeconomic variables experienced extreme values in 2020:Q2 and 2020:Q3 due to the partial shutdown
of the U.S. economy at the end of March 2020. In some cases, these extreme values adversely affected
the estimation of parameters in the benchmark models. The effect produced unrealistic parameter values
in the models. In some cases, the models became dynamically unstable in sample periods that encompassed
the 2020:Q2 and 2020:Q3 observations, leading to distorted forecast error statistics in those benchmark
models for some survey variables. Comparisons with the SPF projections became hard to defend.
Beginning with the error statistics published following the 2022:Q1 survey, we proceed in the following
way. For some survey variables, we scale back the magnitude of the historical extreme observations for
2020:Q2 and 2020:Q3. We make these adjustments before estimating the parameters of the benchmark models
but reverse them prior to forecasting with the models. The intent is to prevent adverse, unrealistic
effects on the parameter estimates while allowing the extreme historical observations to condition the
projections. These adjustments to extreme historical observations do not change the historical realizations
used to compute forecast errors or forecast error statistics: We continue to use unadjusted, official
U.S. government data for these purposes.
The survey variables for which we make the adjustments are: Nominal GDP, Unemployment, Employment,
Industrial Production, Real GDP, and Real Personal Consumption Expenditures. We do not adjust the
historical values for the remaining variables.
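The adjustment can be sketched as a pre-estimation damping step. The scale factor below is purely illustrative (the document does not publish the factor used), and, as described above, the original values are kept for forecasting and for computing forecast errors.

```python
def damp_extremes(values, quarters, extreme_quarters, scale=0.25):
    """Scale back extreme observations (e.g., 2020:Q2 and 2020:Q3) before
    estimating benchmark-model parameters; `scale` is an illustrative guess.
    The unadjusted series is still used for forecasting and for all
    forecast-error computations."""
    return [v * scale if q in extreme_quarters else v
            for q, v in zip(quarters, values)]

# estimate on damp_extremes(history, qtrs, {"2020Q2", "2020Q3"}),
# then forecast from, and score against, the unadjusted history
```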
_________________________________________________________________________________________________
Table 1A.
Forecast Error Summary Statistics for SPF Variable: CPI (CPI Inflation Rate)
_________________________________________________________________________________________________
Computed Over Various Sample Periods
Various Measures of Realizations
Transformation: Level
Lag Length for IAR(p), DAR(p), and DARM(p) Models: AIC
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
_________________________________________________________________________________________________
H ME(S) MAE(S) RMSE(S) RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) Nspf N
History: Initial Release
1997:01-2021:04
1 0.17 1.04 1.43 0.51 0.55 0.55 NA 100 100
2 0.15 1.56 2.31 0.72 0.88 0.88 NA 100 100
3 0.10 1.61 2.35 0.74 0.92 0.91 NA 100 100
4 0.04 1.61 2.35 0.71 0.94 0.95 NA 100 100
5 0.00 1.62 2.33 0.78 0.94 0.95 NA 100 100
H ME(S) MAE(S) RMSE(S) RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) Nspf N
History: One Qtr After Initial Release
1997:01-2021:04
1 0.19 1.00 1.37 0.50 0.54 0.54 NA 100 100
2 0.16 1.53 2.25 0.71 0.86 0.87 NA 100 100
3 0.11 1.58 2.29 0.73 0.91 0.90 NA 100 100
4 0.06 1.58 2.29 0.71 0.94 0.95 NA 100 100
5 0.01 1.59 2.28 0.77 0.93 0.95 NA 100 100
H ME(S) MAE(S) RMSE(S) RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) Nspf N
History: Five Qtrs After Initial Release
1997:01-2021:04
1 0.19 0.96 1.38 0.50 0.54 0.54 NA 100 100
2 0.17 1.51 2.24 0.72 0.86 0.86 NA 100 100
3 0.11 1.54 2.28 0.74 0.91 0.90 NA 100 100
4 0.06 1.54 2.28 0.71 0.94 0.95 NA 100 100
5 0.02 1.55 2.26 0.77 0.93 0.95 NA 100 100
H ME(S) MAE(S) RMSE(S) RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) Nspf N
History: Nine Qtrs After Initial Release
1997:01-2021:04
1 0.19 0.92 1.34 0.50 0.54 0.54 NA 100 100
2 0.17 1.49 2.21 0.71 0.86 0.87 NA 100 100
3 0.12 1.54 2.26 0.73 0.92 0.90 NA 100 100
4 0.06 1.52 2.25 0.71 0.93 0.95 NA 100 100
5 0.02 1.53 2.24 0.78 0.93 0.95 NA 100 100
H ME(S) MAE(S) RMSE(S) RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) Nspf N
History: Latest Vintage
1997:01-2021:04
1 0.18 0.96 1.37 0.51 0.54 0.54 NA 100 100
2 0.16 1.53 2.24 0.71 0.86 0.87 NA 100 100
3 0.10 1.58 2.29 0.74 0.92 0.91 NA 100 100
4 0.05 1.55 2.29 0.71 0.93 0.94 NA 100 100
5 0.01 1.56 2.27 0.78 0.93 0.95 NA 100 100
Notes for Table 1A.
(1) The forecast horizon is given by H, where H = 1 is the SPF forecast for the current quarter.
(2) The headers ME(S), MAE(S), and RMSE(S) are mean error, mean absolute error, and
root-mean-square error for the SPF.
(3) The header RMSE(S/NC) is the ratio of the SPF RMSE to that of the no-change (NC) model.
(4) The headers RMSE(S/IAR), RMSE(S/DAR) and RMSE(S/DARM) are the ratios of the SPF RMSE to the RMSE
of the iterated and direct autoregressive models and the direct autoregressive model augmented
with monthly observations, respectively. All models are estimated on a rolling window of 60
observations from the Philadelphia Fed real-time data set.
(5) The headers Nspf and N are the number of observations analyzed for the SPF and benchmark models.
(6) When the variable forecast is a growth rate or an interest rate, it is expressed in annualized
percentage points. When the variable forecast is the unemployment rate, it is expressed in percentage
points.
(7) Sample periods refer to the dates forecast, not the dates when the forecasts were made.
Source: Tom Stark, Research Department, FRB Philadelphia.
________________________________________________________________________________________________________
Table 1B.
Ratios of Root-Mean-Square Errors for SPF Variable: CPI (CPI Inflation Rate)
Alternative P-Values in Parentheses
________________________________________________________________________________________________________
Computed Over Various Sample Periods
Various Measures of Realizations
Transformation: Level
Lag Length for IAR(p), DAR(p), and DARM(p) Models: AIC
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
________________________________________________________________________________________________________
History: Initial Release
1997:01-2021:04
H RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) N1 N2 N3 N4
1 0.509 0.553 0.553 NA 100 100 100 NA
(0.010) (0.003) (0.003) ( NA )
(0.012) (0.004) (0.004) ( NA )
(0.023) (0.026) (0.026) ( NA )
2 0.721 0.877 0.882 NA 100 100 100 NA
(0.034) (0.029) (0.049) ( NA )
(0.039) (0.034) (0.055) ( NA )
(0.036) (0.064) (0.083) ( NA )
3 0.744 0.923 0.912 NA 100 100 100 NA
(0.072) (0.109) (0.153) ( NA )
(0.083) (0.121) (0.166) ( NA )
(0.080) (0.105) (0.146) ( NA )
4 0.715 0.941 0.954 NA 100 100 100 NA
(0.014) (0.061) (0.130) ( NA )
(0.020) (0.074) (0.148) ( NA )
(0.029) (0.055) (0.096) ( NA )
5 0.777 0.935 0.951 NA 100 100 100 NA
(0.016) (0.018) (0.032) ( NA )
(0.023) (0.026) (0.043) ( NA )
(0.009) (0.036) (0.060) ( NA )
History: One Qtr After Initial Release
1997:01-2021:04
H RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) N1 N2 N3 N4
1 0.502 0.537 0.537 NA 100 100 100 NA
(0.009) (0.002) (0.002) ( NA )
(0.011) (0.002) (0.002) ( NA )
(0.022) (0.022) (0.022) ( NA )
2 0.714 0.863 0.867 NA 100 100 100 NA
(0.036) (0.031) (0.047) ( NA )
(0.041) (0.036) (0.054) ( NA )
(0.037) (0.065) (0.082) ( NA )
3 0.734 0.913 0.901 NA 100 100 100 NA
(0.063) (0.131) (0.163) ( NA )
(0.073) (0.144) (0.177) ( NA )
(0.070) (0.122) (0.153) ( NA )
4 0.714 0.939 0.947 NA 100 100 100 NA
(0.011) (0.072) (0.140) ( NA )
(0.015) (0.086) (0.157) ( NA )
(0.023) (0.062) (0.106) ( NA )
5 0.769 0.931 0.950 NA 100 100 100 NA
(0.020) (0.028) (0.037) ( NA )
(0.029) (0.038) (0.049) ( NA )
(0.011) (0.039) (0.064) ( NA )
History: Five Qtrs After Initial Release
1997:01-2021:04
H RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) N1 N2 N3 N4
1 0.505 0.542 0.542 NA 100 100 100 NA
(0.014) (0.003) (0.003) ( NA )
(0.017) (0.004) (0.004) ( NA )
(0.033) (0.028) (0.028) ( NA )
2 0.723 0.859 0.863 NA 100 100 100 NA
(0.056) (0.028) (0.039) ( NA )
(0.063) (0.033) (0.045) ( NA )
(0.062) (0.060) (0.068) ( NA )
3 0.736 0.914 0.900 NA 100 100 100 NA
(0.065) (0.130) (0.158) ( NA )
(0.075) (0.143) (0.172) ( NA )
(0.073) (0.120) (0.151) ( NA )
4 0.706 0.937 0.948 NA 100 100 100 NA
(0.008) (0.073) (0.161) ( NA )
(0.012) (0.086) (0.179) ( NA )
(0.020) (0.071) (0.123) ( NA )
5 0.775 0.929 0.950 NA 100 100 100 NA
(0.022) (0.029) (0.038) ( NA )
(0.031) (0.040) (0.051) ( NA )
(0.013) (0.044) (0.071) ( NA )
History: Nine Qtrs After Initial Release
1997:01-2021:04
H RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) N1 N2 N3 N4
1 0.499 0.536 0.536 NA 100 100 100 NA
(0.017) (0.004) (0.004) ( NA )
(0.019) (0.005) (0.005) ( NA )
(0.035) (0.033) (0.033) ( NA )
2 0.712 0.860 0.866 NA 100 100 100 NA
(0.049) (0.033) (0.046) ( NA )
(0.055) (0.038) (0.053) ( NA )
(0.053) (0.067) (0.078) ( NA )
3 0.730 0.915 0.901 NA 100 100 100 NA
(0.058) (0.143) (0.157) ( NA )
(0.068) (0.156) (0.171) ( NA )
(0.065) (0.135) (0.149) ( NA )
4 0.708 0.934 0.945 NA 100 100 100 NA
(0.011) (0.087) (0.148) ( NA )
(0.016) (0.102) (0.166) ( NA )
(0.024) (0.084) (0.108) ( NA )
5 0.780 0.927 0.948 NA 100 100 100 NA
(0.022) (0.030) (0.047) ( NA )
(0.031) (0.041) (0.060) ( NA )
(0.012) (0.046) (0.085) ( NA )
History: Latest Vintage
1997:01-2021:04
H RMSE(S/NC) RMSE(S/IAR) RMSE(S/DAR) RMSE(S/DARM) N1 N2 N3 N4
1 0.507 0.544 0.544 NA 100 100 100 NA
(0.013) (0.003) (0.003) ( NA )
(0.016) (0.004) (0.004) ( NA )
(0.029) (0.030) (0.030) ( NA )
2 0.715 0.864 0.871 NA 100 100 100 NA
(0.054) (0.033) (0.048) ( NA )
(0.060) (0.038) (0.054) ( NA )
(0.057) (0.068) (0.081) ( NA )
3 0.739 0.918 0.907 NA 100 100 100 NA
(0.063) (0.144) (0.162) ( NA )
(0.073) (0.157) (0.175) ( NA )
(0.071) (0.138) (0.154) ( NA )
4 0.710 0.935 0.945 NA 100 100 100 NA
(0.010) (0.092) (0.133) ( NA )
(0.015) (0.107) (0.150) ( NA )
(0.023) (0.086) (0.093) ( NA )
5 0.779 0.927 0.949 NA 100 100 100 NA
(0.014) (0.035) (0.041) ( NA )
(0.020) (0.047) (0.054) ( NA )
(0.007) (0.051) (0.076) ( NA )
Notes for Table 1B.
(1) The forecast horizon is given by H, where H = 1 is the SPF forecast for the current quarter.
(2) The headers RMSE(S/NC), RMSE(S/IAR), RMSE(S/DAR), and RMSE(S/DARM) are the ratios of the SPF
root-mean-square error to that of the benchmark models: No-change (NC), iterated autoregression (IAR),
direct autoregression (DAR), and direct autoregression augmented with monthly information (DARM).
These statistics may differ slightly from those reported in Table 1A because they incorporate only
those observations common to both the SPF and the benchmark model. The previous statistics make use
of all available observations for each model.
(3) All models are estimated on a rolling window of 60 observations from the Philadelphia Fed real-time
data set.
(4) A set of three two-sided p-values (in parentheses) accompanies each statistic. These are the p-values
for the test of the equality of mean-square-error. The first is for the Diebold-Mariano (1995, JBES)
statistic, using a uniform lag window with the truncation lag set to the forecast horizon minus one.
(The tables report the p-values using a Bartlett window when the uniform window produces a negative
standard error.) The second is for the Harvey-Leybourne-Newbold (1997, IJF) correction to the
Diebold-Mariano statistic. The third is for the Diebold-Mariano statistic, using a Bartlett lag
window with the truncation lag increased four quarters.
(5) The headers N1, N2, N3, and N4 show the number of observations used in constructing each ratio of
root-mean-square errors.
(6) Sample periods refer to the dates forecast, not the dates when the forecasts were made.
Source: Tom Stark, Research Department, FRB Philadelphia.
______________________________________________________________________________
Table 2. Recent SPF Forecasts
(Dated at the Quarter Forecast)
______________________________________________________________________________
Variable: CPI (CPI Inflation Rate)
By Forecast Step (1 to 5)
Transformation: Level
Last Updated: 05/24/2024 12:50
______________________________________________________________________________
Qtr Forecast Step 1 Step 2 Step 3 Step 4 Step 5
2017:03 1.647 2.200 2.300 2.220 2.300
2017:04 2.300 2.318 2.300 2.450 2.215
2018:01 2.700 2.070 2.200 2.400 2.400
2018:02 1.906 2.020 2.018 2.123 2.200
2018:03 2.300 2.201 2.300 2.175 2.179
2018:04 2.336 2.300 2.290 2.227 2.100
2019:01 1.100 2.413 2.376 2.300 2.300
2019:02 2.406 2.300 2.300 2.061 2.248
2019:03 1.880 2.124 2.307 2.350 2.300
2019:04 1.800 2.088 2.100 2.200 2.400
2020:01 2.000 2.200 2.005 2.118 2.253
2020:02 -2.579 2.000 2.100 2.000 2.000
2020:03 2.337 1.500 2.200 2.154 2.054
2020:04 2.008 1.612 1.937 2.233 2.100
2021:01 2.496 2.000 1.768 2.000 2.200
2021:02 3.200 2.132 2.000 1.600 2.000
2021:03 5.200 2.567 2.086 2.100 2.100
2021:04 4.600 2.630 2.364 2.200 2.168
2022:01 5.500 3.001 2.175 2.300 2.200
2022:02 7.100 3.836 2.629 2.309 2.248
2022:03 6.681 4.531 2.749 2.463 2.427
2022:04 5.400 4.340 3.734 2.724 2.366
2023:01 3.326 4.473 3.557 3.114 2.500
2023:02 3.500 3.408 3.498 3.400 3.034
2023:03 3.068 3.200 3.050 3.100 3.000
2023:04 3.264 2.923 2.949 2.774 2.936
2024:01 2.509 2.783 2.615 2.700 2.600
2024:02 3.410 2.500 2.616 2.549 2.417
2024:03 NA 2.800 2.400 2.500 2.592
2024:04 NA NA 2.500 2.355 2.400
2025:01 NA NA NA 2.406 2.268
2025:02 NA NA NA NA 2.322
Notes for Table 2.
(1) Each column gives the sequence of SPF projections for a given forecast step. The forecast steps
    range from one (the forecast for the quarter in which the survey was conducted) to five (the
    forecast for the quarter that is four quarters in the future).
(2) The dates listed in the rows are the dates forecast, not the dates when the forecasts were made,
with the exception of the forecast at step one, for which the two dates coincide.
Source: Tom Stark, Research Department, FRB Philadelphia.
______________________________________________________________________________________________
Table 3. Recent Benchmark Model 1 IAR Forecasts
(Dated at the Quarter Forecast)
______________________________________________________________________________________________
Variable: CPI (CPI Inflation Rate)
By Forecast Step (1 to 5)
Transformation: Level
Lag Length for IAR(p): AIC
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
______________________________________________________________________________________________
Qtr Forecast Step 1 Step 2 Step 3 Step 4 Step 5
2017:03 1.610 2.201 2.144 2.067 2.056
2017:04 2.077 1.988 2.169 2.138 2.067
2018:01 2.346 2.089 2.063 2.163 2.137
2018:02 2.225 2.160 2.091 2.077 2.162
2018:03 1.809 1.905 2.124 2.092 2.080
2018:04 2.162 2.141 2.034 2.117 2.092
2019:01 1.976 2.147 2.198 2.113 2.116
2019:02 1.860 2.207 2.114 2.158 2.111
2019:03 2.544 2.250 2.178 2.108 2.140
2019:04 1.829 2.050 2.168 2.127 2.112
2020:01 2.165 2.054 1.989 2.070 2.119
2020:02 1.739 2.001 2.107 2.067 2.059
2020:03 0.513 2.108 1.999 2.078 2.100
2020:04 3.691 2.674 2.103 2.030 2.060
2021:01 1.459 1.609 2.426 2.030 2.039
2021:02 2.182 1.700 1.492 1.894 2.011
2021:03 3.021 1.946 1.922 1.897 1.795
2021:04 3.125 1.033 1.910 1.917 2.002
2022:01 4.006 2.296 1.590 1.905 1.875
2022:02 5.132 2.798 2.101 2.064 1.904
2022:03 8.530 3.474 2.424 2.054 2.064
2022:04 4.002 7.166 2.797 2.308 2.043
2023:01 3.234 3.200 6.940 2.522 2.273
2023:02 3.061 2.791 2.818 6.736 2.410
2023:03 2.495 2.704 2.579 2.636 6.966
2023:04 2.860 2.396 2.534 2.479 2.549
2024:01 2.729 2.533 2.349 2.454 2.431
2024:02 3.366 2.731 2.385 2.328 2.416
2024:03 NA 3.110 2.732 2.319 2.318
2024:04 NA NA 2.962 2.732 2.289
2025:01 NA NA NA 2.875 2.732
2025:02 NA NA NA NA 2.825
Notes for Table 3.
(1) Each column gives the sequence of benchmark IAR projections for a given forecast step. The forecast
steps range from one to five. The first step corresponds to the forecast that SPF panelists
make for the quarter in which the survey is conducted.
(2) The dates listed in the rows are the dates forecast, not the dates when the forecasts were made,
with the exception of the forecast at step one, for which the two dates coincide.
(3) The IAR benchmark model is estimated on a fixed 60-quarter rolling window. Its forecasts are
computed with the indirect method. Estimation uses data from the Philadelphia Fed real-time
data set.
Source: Tom Stark, Research Department, FRB Philadelphia.
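The iterated (indirect) method described in note (3) can be sketched in a few lines. This is an
illustrative sketch, not the report's actual code: it fits an AR(p) by ordinary least squares on the
most recent 60 quarterly observations and then chains one-step-ahead forecasts forward. The report's
IAR model additionally selects the lag length p by the AIC and estimates on real-time data vintages,
neither of which is shown here.

```python
import numpy as np

def iar_forecast(y, p=1, steps=5, window=60):
    """Iterated (indirect) AR(p) forecast: fit once on the most recent
    `window` observations, then feed each forecast back in as data to
    produce the next step. Illustrative sketch only; the report's IAR
    also chooses p by AIC."""
    y = np.asarray(y, dtype=float)[-window:]
    n = len(y)
    # OLS regression: y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}
    Y = y[p:]
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - j:n - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    hist = list(y)
    out = []
    for _ in range(steps):
        x = np.r_[1.0, hist[-1:-p - 1:-1]]  # [1, y_t, y_{t-1}, ...]
        f = float(x @ beta)
        out.append(f)
        hist.append(f)  # iterate: the forecast becomes the next "observation"
        out[-1] = f
    return out
```

Because each step's forecast is built from the previous step's forecast, the projections converge
toward the model's estimated mean as the horizon lengthens, a pattern visible in the later rows of
Table 3.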
_________________________________________________________________________________________________
Table 4. Recent Benchmark Model 2 No-Change Forecasts
(Dated at the Quarter Forecast)
_________________________________________________________________________________________________
Variable: CPI (CPI Inflation Rate)
By Forecast Step (1 to 5)
Transformation: Level
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
_________________________________________________________________________________________________
Qtr Forecast Step 1 Step 2 Step 3 Step 4 Step 5
2017:03 -0.312 3.147 3.040 1.632 2.542
2017:04 2.014 -0.312 3.147 3.040 1.632
2018:01 3.309 2.014 -0.312 3.147 3.040
2018:02 3.508 3.309 2.014 -0.312 3.147
2018:03 1.656 3.508 3.309 2.014 -0.312
2018:04 1.996 1.656 3.508 3.309 2.014
2019:01 1.486 1.996 1.656 3.508 3.309
2019:02 0.877 1.486 1.996 1.656 3.508
2019:03 2.918 0.877 1.486 1.996 1.656
2019:04 1.789 2.918 0.877 1.486 1.996
2020:01 2.374 1.789 2.918 0.877 1.486
2020:02 1.208 2.374 1.789 2.918 0.877
2020:03 -3.530 1.208 2.374 1.789 2.918
2020:04 5.158 -3.530 1.208 2.374 1.789
2021:01 2.430 5.158 -3.530 1.208 2.374
2021:02 3.748 2.430 5.158 -3.530 1.208
2021:03 8.445 3.748 2.430 5.158 -3.530
2021:04 6.633 8.445 3.748 2.430 5.158
2022:01 7.912 6.633 8.445 3.748 2.430
2022:02 9.201 7.912 6.633 8.445 3.748
2022:03 10.531 9.201 7.912 6.633 8.445
2022:04 5.686 10.531 9.201 7.912 6.633
2023:01 4.164 5.686 10.531 9.201 7.912
2023:02 3.813 4.164 5.686 10.531 9.201
2023:03 2.709 3.813 4.164 5.686 10.531
2023:04 3.583 2.709 3.813 4.164 5.686
2024:01 2.726 3.583 2.709 3.813 4.164
2024:02 3.806 2.726 3.583 2.709 3.813
2024:03 NA 3.806 2.726 3.583 2.709
2024:04 NA NA 3.806 2.726 3.583
2025:01 NA NA NA 3.806 2.726
2025:02 NA NA NA NA 3.806
Notes for Table 4.
(1) Each column gives the sequence of benchmark no-change projections for a given forecast step.
The forecast steps range from one to five. The first step corresponds to the forecast that SPF
panelists make for the quarter in which the survey is conducted.
(2) The dates listed in the rows are the dates forecast, not the dates when the forecasts were made,
with the exception of the forecast at step one, for which the two dates coincide.
(3) The projections use data from the Philadelphia Fed real-time data set.
Source: Tom Stark, Research Department, FRB Philadelphia.
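The no-change benchmark in Table 4 has a simple mechanical form: the forecast at every step equals
the most recently observed quarterly value. A minimal sketch:

```python
def no_change_forecast(y, steps=5):
    """No-change (random-walk) benchmark: every horizon is forecast
    to equal the last observed value of the series."""
    last = y[-1]
    return [last] * steps
```

This is why each column of Table 4 is the step-1 column shifted down by one row: for example, the
step-2 entry for 2017:04 (-0.312) equals the step-1 entry for 2017:03.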
______________________________________________________________________________________________
Table 5. Recent Benchmark Model 3 DAR Forecasts
(Dated at the Quarter Forecast)
______________________________________________________________________________________________
Variable: CPI (CPI Inflation Rate)
By Forecast Step (1 to 5)
Transformation: Level
Lag Length for DAR(p): AIC
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
______________________________________________________________________________________________
Qtr Forecast Step 1 Step 2 Step 3 Step 4 Step 5
2017:03 1.610 2.044 2.071 2.110 2.086
2017:04 2.077 2.343 2.099 2.041 2.013
2018:01 2.346 2.107 2.202 2.056 2.216
2018:02 2.225 1.975 2.097 2.319 2.258
2018:03 1.809 1.941 2.065 2.098 1.831
2018:04 2.162 2.191 2.023 2.012 2.088
2019:01 1.976 2.135 2.164 1.999 2.246
2019:02 1.860 2.208 2.129 2.167 2.278
2019:03 2.544 2.244 2.158 2.129 2.081
2019:04 1.829 1.976 2.138 2.166 2.111
2020:01 2.165 2.103 2.040 2.178 2.054
2020:02 1.739 1.993 2.080 2.023 1.946
2020:03 0.513 2.137 2.017 2.089 2.166
2020:04 3.691 2.749 2.070 2.016 2.025
2021:01 1.459 1.401 2.222 2.078 2.060
2021:02 2.182 1.792 1.636 2.395 1.919
2021:03 3.021 1.626 1.839 1.604 1.204
2021:04 3.125 0.629 1.713 2.198 2.273
2022:01 4.006 1.447 1.350 1.699 1.914
2022:02 5.132 2.242 0.535 0.622 2.104
2022:03 8.530 2.874 1.926 1.079 2.613
2022:04 4.002 5.878 2.766 1.160 2.121
2023:01 3.234 3.252 4.809 1.376 2.878
2023:02 3.061 2.809 2.932 2.493 3.107
2023:03 2.495 2.713 2.732 2.613 3.789
2023:04 2.860 2.415 2.638 2.574 3.173
2024:01 2.729 2.550 2.397 2.533 2.834
2024:02 3.366 2.597 2.484 2.373 2.759
2024:03 NA 3.094 2.562 2.396 2.433
2024:04 NA NA 2.943 2.540 2.522
2025:01 NA NA NA 2.834 2.551
2025:02 NA NA NA NA 2.904
Notes for Table 5.
(1) Each column gives the sequence of benchmark DAR projections for a given forecast step. The forecast
steps range from one to five. The first step corresponds to the forecast that SPF panelists
make for the quarter in which the survey is conducted.
(2) The dates listed in the rows are the dates forecast, not the dates when the forecasts were made,
with the exception of the forecast at step one, for which the two dates coincide.
(3) The DAR benchmark model is estimated on a fixed 60-quarter rolling window. Its forecasts are
computed with the direct method. Estimation uses data from the Philadelphia Fed real-time
data set.
Source: Tom Stark, Research Department, FRB Philadelphia.
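The direct method in note (3) differs from the iterated method used for Table 3: instead of chaining
one-step forecasts, it fits a separate regression for each horizon h and applies the fitted equation
once. The following is a hypothetical sketch under simplifying assumptions; the report's DAR also
selects the lag length by AIC and estimates on a rolling window of real-time data.

```python
import numpy as np

def dar_forecast(y, p=1, steps=5, window=60):
    """Direct AR(p) forecast: for each horizon h, regress y_{t+h} on
    [1, y_t, ..., y_{t-p+1}] by OLS and apply the fitted equation once
    to the latest observations. Illustrative sketch only."""
    y = np.asarray(y, dtype=float)[-window:]
    n = len(y)
    out = []
    for h in range(1, steps + 1):
        Y = y[p - 1 + h:]                       # left-hand side: y_{t+h}
        X = np.column_stack([np.ones(len(Y))] +
                            [y[p - 1 - j:n - h - j] for j in range(p)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        x = np.r_[1.0, y[-1:-p - 1:-1]]         # latest p observations
        out.append(float(x @ beta))
    return out
```

Because each horizon has its own coefficients, estimation error does not compound across steps the
way it does under iteration, which is one standard motivation for reporting both benchmarks.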
______________________________________________________________________________________________
Table 6. Recent Benchmark Model 4 DARM Forecasts
(Dated at the Quarter Forecast)
______________________________________________________________________________________________
Variable: CPI (CPI Inflation Rate)
By Forecast Step (1 to 5)
Transformation: Level
Lag Length for DARM(p): AIC
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
______________________________________________________________________________________________
Qtr Forecast Step 1 Step 2 Step 3 Step 4 Step 5
2017:03 NA NA NA NA NA
2017:04 NA NA NA NA NA
2018:01 NA NA NA NA NA
2018:02 NA NA NA NA NA
2018:03 NA NA NA NA NA
2018:04 NA NA NA NA NA
2019:01 NA NA NA NA NA
2019:02 NA NA NA NA NA
2019:03 NA NA NA NA NA
2019:04 NA NA NA NA NA
2020:01 NA NA NA NA NA
2020:02 NA NA NA NA NA
2020:03 NA NA NA NA NA
2020:04 NA NA NA NA NA
2021:01 NA NA NA NA NA
2021:02 NA NA NA NA NA
2021:03 NA NA NA NA NA
2021:04 NA NA NA NA NA
2022:01 NA NA NA NA NA
2022:02 NA NA NA NA NA
2022:03 NA NA NA NA NA
2022:04 NA NA NA NA NA
2023:01 NA NA NA NA NA
2023:02 NA NA NA NA NA
2023:03 NA NA NA NA NA
2023:04 NA NA NA NA NA
2024:01 NA NA NA NA NA
2024:02 NA NA NA NA NA
2024:03 NA NA NA NA NA
2024:04 NA NA NA NA NA
2025:01 NA NA NA NA NA
2025:02 NA NA NA NA NA
Notes for Table 6.
(1) Each column gives the sequence of benchmark DARM projections for a given forecast step. The forecast
steps range from one to five. The first step corresponds to the forecast that SPF panelists
make for the quarter in which the survey is conducted.
(2) The dates listed in the rows are the dates forecast, not the dates when the forecasts were made,
with the exception of the forecast at step one, for which the two dates coincide.
(3) The DARM benchmark model is estimated on a fixed 60-quarter rolling window. Its forecasts are
computed with the direct method and incorporate recent monthly values of the dependent variable.
Estimation uses data from the Philadelphia Fed real-time data set.
Source: Tom Stark, Research Department, FRB Philadelphia.
______________________________________________________________________
Table 7. Recent Realizations (Various Measures)
Philadelphia Fed Real-Time Data Set
______________________________________________________________________
Variable: CPI (CPI Inflation Rate)
Transformation: Level
Source for Historical Realizations: Bureau of Labor Statistics via Haver Analytics
Last Updated: 05/24/2024 12:50
Column (1): Initial Release
Column (2): One Qtr After Initial Release
Column (3): Five Qtrs After Initial Release
Column (4): Nine Qtrs After Initial Release
Column (5): Latest Vintage
_______________________________________________________________________
Obs. Date (1) (2) (3) (4) (5)
2017:03 2.014 2.126 2.153 2.156 1.926
2017:04 3.309 3.309 3.142 3.119 3.220
2018:01 3.508 3.508 3.237 3.250 3.413
2018:02 1.656 1.656 2.149 2.195 2.195
2018:03 1.996 2.010 2.078 1.632 1.617
2018:04 1.486 1.486 1.300 1.572 1.638
2019:01 0.877 0.877 0.918 0.710 1.072
2019:02 2.918 2.918 3.027 3.501 2.971
2019:03 1.789 1.821 1.288 1.484 1.326
2019:04 2.374 2.374 2.630 2.458 2.839
2020:01 1.208 1.208 0.997 1.298 1.371
2020:02 -3.530 -3.530 -3.101 -3.359 -3.721
2020:03 5.158 4.680 4.794 4.642 4.628
2020:04 2.430 2.430 2.241 2.815 2.819
2021:01 3.748 3.748 4.119 4.185 4.076
2021:02 8.445 8.445 8.187 7.519 7.727
2021:03 6.633 6.716 6.606 6.507 6.507
2021:04 7.912 7.912 8.807 8.761 8.761
2022:01 9.201 9.201 9.180 NA 9.117
2022:02 10.531 10.531 9.657 NA 10.018
2022:03 5.686 5.545 5.317 NA 5.317
2022:04 4.164 4.164 4.028 NA 4.028
2023:01 3.813 3.813 NA NA 3.754
2023:02 2.709 2.709 NA NA 3.040
2023:03 3.583 3.428 NA NA 3.428
2023:04 2.726 2.726 NA NA 2.726
2024:01 3.806 NA NA NA 3.806
2024:02 NA NA NA NA NA
2024:03 NA NA NA NA NA
2024:04 NA NA NA NA NA
2025:01 NA NA NA NA NA
2025:02 NA NA NA NA NA
Notes for Table 7.
(1) Each column reports a sequence of realizations from the Philadelphia Fed real-time data set.
(2) The date listed in each row is the observation date.
(3) Moving across a particular row shows how the observation is revised in subsequent releases.
Source: Tom Stark, Research Department, FRB Philadelphia.
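Reading across a row of Table 7 traces the revision history of a single observation. As a
hypothetical illustration (the helper function below is not part of the report), the total revision
from initial release to latest vintage can be computed from a row's entries, with None standing in
for the table's NA values; the example row is the 2020:02 line of the table.

```python
def total_revision(row):
    """Revision from initial release (first column) to latest vintage
    (last column) for one row of Table 7. Entries may be None (NA)."""
    initial, latest = row[0], row[-1]
    if initial is None or latest is None:
        return None
    return latest - initial

# 2020:02 row of Table 7, columns (1) through (5)
row_2020q2 = [-3.530, -3.530, -3.101, -3.359, -3.721]
```

For 2020:02 the total revision is -3.721 - (-3.530), roughly -0.19 annualized percentage points,
showing that even a quarter as unusual as 2020:02 was revised only modestly after its initial
release.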