The views expressed in these papers are solely those of the authors and should not be interpreted as reflecting the views of the Federal Reserve Bank of Philadelphia or Federal Reserve System.
01-1: An Exploration of the Effects of Pessimism and Doubt on Asset Returns by Andrew B. Abel
The subjective distribution of growth rates of aggregate consumption is characterized by pessimism if it is first-order stochastically dominated by the objective distribution. Uniform pessimism is a leftward translation of the objective distribution of the logarithm of the growth rate. The subjective distribution is characterized by doubt if it is a mean-preserving spread of the objective distribution. Pessimism and doubt both reduce the riskfree rate and thus can help resolve the riskfree rate puzzle. Uniform pessimism and doubt both increase the average equity premium and thus can help resolve the equity premium puzzle.
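The three belief distortions can be stated formally. The following is a sketch in our own notation, not taken from the paper: let F be the objective CDF of the consumption growth rate and F-hat the subjective CDF.

```latex
% Pessimism: the subjective distribution is first-order
% stochastically dominated by the objective one.
\hat{F}(x) \ge F(x) \quad \text{for all } x

% Uniform pessimism: a leftward translation (by some \Delta > 0)
% of the objective distribution of the log growth rate g.
\hat{F}(\ln g) = F(\ln g + \Delta)

% Doubt: a mean-preserving spread of the objective distribution
% (equal means, plus the second-order dominance condition).
\int x \, d\hat{F}(x) = \int x \, dF(x),
\qquad
\int_{-\infty}^{y} \bigl[\hat{F}(x) - F(x)\bigr] \, dx \ge 0 \quad \text{for all } y
```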
01-2: Will Bequests Attenuate the Predicted Meltdown in Stock Prices When Baby Boomers Retire? by Andrew B. Abel
Jim Poterba finds that consumers do not spend all of their assets during retirement, and he projects that the demand for assets will remain high when the baby boomers retire. Based on his forecast of continued high demand for capital, Poterba rejects the asset market meltdown hypothesis, which predicts a fall in stock prices when the baby boomers retire.
The author develops a rational expectations general equilibrium model with a bequest motive and an aggregate supply curve for capital. In this model, a baby boom generates an increase in stock prices, and stock prices are rationally anticipated to fall when the baby boomers retire, even though, as emphasized by Poterba, consumers do not spend all of their assets during retirement. This finding contradicts Poterba's conclusion that continued high demand for assets by retired baby boomers will prevent a fall in the price of capital.
Superseded by Working Paper 05-14.
In this study, the authors show that during the postwar era, the United States experienced a decline in the share of urban employment accounted for by the relatively dense metropolitan areas and a corresponding rise in the share of the relatively less dense ones. This trend, which the authors call employment deconcentration, is distinct from the other well-known regional trend, namely, the postwar movement of jobs and people from the frostbelt to the sunbelt. The authors also show that deconcentration has been accompanied by a similar trend within metropolitan areas, wherein the employment share of the denser sections of MSAs has declined and that of the less dense sections has risen. The authors provide a general equilibrium model with density-driven congestion costs to suggest an explanation for employment deconcentration.
01-5: Optimal Monetary Policy by Aubhik Khan, Robert G. King, and Alexander L. Wolman
Optimal monetary policy maximizes welfare, given frictions in the economic environment. Constructing a model with two sets of frictions — the Keynesian friction of costly price adjustment by imperfectly competitive firms and the Monetarist friction of costly exchange of wealth for goods — the authors find optimal monetary policy is governed by two familiar principles.
First, the average level of the nominal interest rate should be sufficiently low, as suggested by Milton Friedman, that there should be deflation on average. Yet, the Keynesian frictions imply that the optimal nominal interest rate is positive.
Second, as various shocks occur to the real and monetary sectors, the price level should be largely stabilized, as suggested by Irving Fisher, albeit around a deflationary trend path. (In modern language, there is only small "base drift" for the price level path.) Since expected inflation is roughly constant through time, the nominal interest rate must therefore vary with the Fisherian determinants of the real interest rate — as there is expected growth or contraction of real economic activity.
01-6/R: Explaining the Dramatic Changes in Performance of U.S. Banks: Technological Change, Deregulation, and Dynamic Changes in Competition by Allen N. Berger and Loretta J. Mester
The authors investigate the effects of technological change, deregulation, and dynamic changes in competition on the performance of U.S. banks. The authors' most striking result is that during 1991–1997, cost productivity worsened while profit productivity improved substantially, particularly for banks engaging in mergers. The data are consistent with the hypothesis that banks tried to maximize profits by raising revenues as well as reducing costs. Banks appeared to provide additional or higher quality services that raised costs but also raised revenues by more than the cost increases. The results suggest that methods that exclude revenues when assessing performance may be misleading.
01-7: Banking and Finance in Argentina in the Period 1900-35 by Leonard Nakamura and Carlos E. J. M. Zarazaga
From 1900–35, Argentina evolved from an economy highly dependent on external, primarily British, finance to one more nearly self-sufficient. The authors examine the failure of domestic finance to adequately fill the void left by the decline of London and the breakdown of the world financial system in the interwar period, when neither the Buenos Aires Bolsa nor the private domestic banks developed rapidly enough to fully replace British investors as efficient channels for financing private investment. One consequence is that Argentine investable funds were increasingly concentrated in a single institution, the Banco de la Nacion Argentina (BNA), creating a lopsided financial structure that was vulnerable to rent seeking and to authoritarian capture. Nevertheless, several measures, including gold reserves, interest rates, money supply, bank credit, and the market capitalization of domestic corporations, attest to the very high level of financial development achieved by Argentina.
Until the end of 1977, the method used to measure changes in the rent of primary residence in the U.S. consumer price index (CPI) tended to omit price changes when units changed tenants or were temporarily vacant. Since such units typically had more rapid increases in rents than average units, omitting them biased inflation estimates downward. Beginning in 1978, the Bureau of Labor Statistics (BLS) implemented a series of methodological changes that reduced this bias. The authors use data from the American Housing Survey to check the success of the corrections. They compare estimates of the historical series adjusted for the BLS changes in methodology with a new hedonic estimate of changes in rental rates. The authors conclude that from 1940 to 1977 the CPI for rent would have been about 60 percent higher if current BLS practices had been used, implying an annual rental inflation rate between 1.3 and 3.5 percentage points higher. Even after the corrections have been made, the authors' hedonic estimates suggest that the current CPI methodology may still understate the rental inflation rate by one-half to one percentage point.
01-9: A Quantitative Analysis of Oil-Price Shocks, Systematic Monetary Policy, and Economic Downturns by Sylvain Leduc and Keith Sill
Are the recessionary consequences of oil-price shocks due to oil-price shocks themselves or to contractionary monetary policies that arise in response to inflation concerns engendered by rising oil prices? Can systematic monetary policy be used to alleviate the consequences of oil shocks on the economy? This paper builds a dynamic general equilibrium model of monopolistic competition in which oil and money matter to study these questions. The economy's response to oil-price shocks is examined under a variety of monetary policy rules in environments with flexible and sticky prices. The authors find that easy-inflation policies amplify the negative output response to positive oil shocks and that systematic monetary policy accounts for up to two-thirds of the fall in output. On the other hand, the authors show that a monetary policy that targets the (overall) price level substantially alleviates the impact of oil-price shocks.
01-10: Forecasting with a Real-Time Data Set for Macroeconomists by Tom Stark and Dean Croushore
This paper discusses how forecasts are affected by the use of real-time data rather than latest-available data. The key issue is this: In the literature on developing forecasting models, new models are put together based on the results they yield using the data set available to the model’s developer. But those are not the data that were available to a forecaster in real time. How much difference does the vintage of the data make for such forecasts? The authors explore this issue with a variety of exercises designed to answer this question. In particular, they find that the use of real-time data matters for some forecasting issues but not for others. It matters for choosing lag length in a univariate context. Preliminary evidence suggests that the span — or number — of forecast observations used to evaluate models may also be critical: the authors find that standard measures of forecast accuracy can be vintage-sensitive when constructed on the short spans (five years of quarterly data) of data sometimes used by researchers for forecast evaluation. The differences between using real-time and latest-available data may depend on what is being used as the “actual” or realization, and the authors explore several alternatives that can be used. Perhaps of most importance, the authors show that measures of forecast error, such as root-mean-squared error and mean absolute error, can be deceptively lower when using latest-available data rather than real-time data. Thus, for purposes such as modeling expectations or evaluating forecast errors of survey data, the use of latest-available data is questionable; comparisons between the forecasts generated from new models and benchmark forecasts, generated in real time, should be based on real-time data.
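The paper's central caution concerns standard error measures such as root-mean-squared error (RMSE) and mean absolute error (MAE), which can look deceptively small when forecasts are scored against heavily revised data. A minimal sketch of that comparison, using hypothetical numbers rather than the authors' data:

```python
import math

def rmse(forecasts, actuals):
    """Root-mean-squared forecast error."""
    n = len(forecasts)
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / n)

def mae(forecasts, actuals):
    """Mean absolute forecast error."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical growth forecasts scored against two candidate "actuals":
forecasts        = [2.0, 2.5, 1.8, 3.0]
real_time        = [1.5, 3.1, 1.0, 2.2]   # first-release (real-time) data
latest_available = [1.9, 2.7, 1.6, 2.8]   # revised, latest-available data

# In this illustrative example, revisions pull the "actuals" toward the
# forecasts, so measured error is smaller against latest-available data.
print("RMSE:", rmse(forecasts, real_time), rmse(forecasts, latest_available))
print("MAE: ", mae(forecasts, real_time), mae(forecasts, latest_available))
```

The choice of realization series thus changes the verdict on forecast accuracy, which is why the authors argue that evaluations of real-time forecasts should use real-time data.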
01-11: A Quantitative Welfare Analysis of the Trade-Off Between the Current Regime and Macroeconomic Stabilization by Luca Dedola and Sylvain Leduc
01-12: Expectations and the Effects of Monetary Policy by Laurence Ball and Dean Croushore
This paper examines the predictive power of shifts in monetary policy, as measured by changes in the real federal funds rate, for output, inflation, and survey expectations of these variables. The authors find that policy shifts have larger effects on actual output than on expected output; thus policy predicts errors in output expectations, a violation of rational expectations. Policy shifts do not predict errors in inflation expectations. The authors explain these results with a model in which agents systematically underestimate the effects of policy on aggregate demand. This model helps to explain the real effects of policy.
To qualify for a patent, an invention must be new, useful, and nonobvious. This paper presents a model of sequential innovation in which industry structure is endogenous and a standard of patentability determines the proportion of all inventions that qualify for protection. There is a unique patentability standard, or inventive step, that maximizes the rate of innovation by maximizing the number of firms engaged in R&D. Surprisingly, this standard is more stringent for industries disposed to innovate rapidly. If a single standard is applied to heterogeneous industries, it will encourage entry, and therefore innovation, in some industries while discouraging it in others. The model suggests a number of important implications for patent policy.
01-14: Knowledge Spillovers and the New Economy of Cities by Gerald Carlino, Satyajit Chatterjee, and Robert Hunt
Superseded by Working Paper 06-14.
This paper argues that the rate of intangible investment — investment in the development and marketing of new products — accelerated in the wake of the electronics revolution in the 1970s. The paper presents preliminary direct and indirect empirical evidence that U.S. private firms currently invest at least $1 trillion annually in intangibles. This rate of investment roughly equals U.S. gross investment in nonresidential tangible assets. It also suggests that the capital stock of intangibles in the U.S. has an equilibrium market value of at least $5 trillion.
01-16: The Pitfalls of Discretionary Monetary Policy by Aubhik Khan, Robert G. King, and Alexander L. Wolman
In a canonical staggered pricing model, monetary discretion leads to multiple private sector equilibria. The basis for multiplicity is a form of policy complementarity. Specifically, prices set in the current period embed expectations about future policy, and actual future policy responds to these same prices. For a range of values of the fundamental state variable — a ratio of predetermined prices — there is complementarity between actual and expected policy, and multiple equilibria occur. Moreover, this multiplicity is not associated with reputational considerations: It occurs in a two-period model.