The views expressed in these papers are solely those of the authors and should not be interpreted as reflecting the views of the Federal Reserve Bank of Philadelphia or Federal Reserve System.
99-1: What Explains the Dramatic Changes in Cost and Profit Performance of the U.S. Banking Industry? by Allen N. Berger and Loretta J. Mester
The authors investigate the sources of recent changes in the performance of U.S. banks using concepts and techniques borrowed from the cross-section efficiency literature. Their most striking result is that during 1991–1997, cost productivity worsened while profit productivity improved substantially, particularly for banks engaging in mergers. The data are consistent with the hypothesis that banks tried to maximize profits by raising revenues as well as reducing costs, and that banks provided additional services or higher service quality that raised costs but also raised revenues by more than the cost increases. The results suggest that methods that exclude revenues may be misleading.
The authors compute the potential economic benefits that would accrue to a typical pre-WWII era U.S. worker from the post-WWII macroeconomic policy regime. The authors assume that workers face undiversifiable income risk but can self-insure by saving in nominal assets. The worker's average utility is computed for two eras: pre-WWII (1875–1941) and post-WWII. In the pre-WWII era, the worker endured business cycles that were large in amplitude and quite volatile, a procyclical aggregate price level with large cyclical amplitude, a high average unemployment rate, and virtually no trend in the aggregate price level. In the post-WWII era, the same worker would have encountered business cycles with smaller amplitude and less volatility, a countercyclical aggregate price level with small cyclical amplitude, a much lower mean unemployment rate, and a positive trend in the aggregate price level. Depending on what is assumed about the effects of macroeconomic policies on the mean and variance of the unemployment rate, the potential gain in the worker's welfare ranges from -0.9 percent of consumption (if policies affected the inflation rate but not the mean or variance of the aggregate unemployment rate) to 4.19 percent of consumption (if policies affected the inflation rate and lowered the mean and variance of the aggregate unemployment rate).
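The welfare comparison above rests on a consumption-equivalent calculation. As a rough illustration only (not the authors' model, which includes income risk and self-insurance through nominal assets), the sketch below computes the consumption-equivalent gain under log utility for two hypothetical consumption paths; the function name and the sample paths are invented for this example.

```python
import math
from statistics import mean

def consumption_equivalent_gain(log_c_pre, log_c_post):
    # With u(c) = log(c), the gain g solves
    # mean(log((1 + g) * c_pre)) = mean(log(c_post)).
    return math.exp(mean(log_c_post) - mean(log_c_pre)) - 1.0

# Hypothetical paths: the post-era consumption process is higher on
# average and less volatile than the pre-era process.
pre  = [math.log(c) for c in (0.90, 1.10, 0.85, 1.15)]
post = [math.log(c) for c in (0.98, 1.04, 0.97, 1.05)]
gain = consumption_equivalent_gain(pre, post)
```

Because log utility is concave, a more volatile path lowers the mean of log consumption, so the gain reflects both the level and the stability of the post-era path.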
U.S. patent law protects only inventions that are nontrivial advances of the prior art. The legal requirement is called nonobviousness. During the 1980s, the courts relaxed the nonobviousness requirement for all inventions, and a new form of intellectual property, with a weaker nonobviousness requirement, was created for semiconductor designs. Supporters of these changes argue that a less stringent nonobviousness requirement encourages private research and development (R&D) by increasing the probability that the resulting discoveries will be protected from imitation. This paper demonstrates that relaxing the standard of nonobviousness creates a tradeoff — raising the probability of obtaining a patent, but decreasing its value. The author shows that weaker nonobviousness requirements can lead to less R&D activity, and this is more likely to occur in industries that rapidly innovate.
This paper presents the concept and uses of a real-time data set that can be used by economists for testing the robustness of published econometric results, for analyzing policy, and for forecasting. The data set consists of vintages, or snapshots, of the major macroeconomic data available at quarterly intervals in real time. The paper illustrates why such data may matter, explains the construction of the data set, examines the properties of several of the variables in the data set across vintages, examines key empirical papers in macroeconomics and investigates their robustness to different vintages, looks at how policy analysis may be affected by data revisions, and shows how forecasts can be affected by data revisions.
99-5: Quantitative Asset Pricing Implications of Endogenous Solvency Constraints by Fernando Alvarez and Urban J. Jermann
The authors study the asset pricing implications of an economy where solvency constraints are determined to efficiently deter agents from defaulting. The authors present a simple example for which efficient allocations and all equilibrium elements are characterized analytically. The main model produces large equity premia and risk premia for long-term bonds with low risk aversion and a plausibly calibrated income process. The authors characterize the deviations from independence of aggregate and individual income uncertainty that produce equity and term premia.
99-6: Exchange Rates, Monetary Policy Regimes, and Beliefs by Keith Sill and Jeff Wrase
The authors investigate an international monetary business-cycle model in which agents face monetary policy processes that incorporate regime shifts. In any given period agents cannot directly observe the policy regime, but instead form beliefs that are updated via Bayesian learning. As a result, expectation adjustment displays inertia that adds persistence to the effects of monetary shocks. Monetary policy processes for the U.S. and an aggregate of OECD countries are estimated using Hamilton's Markov-switching model. The authors then solve and calibrate a version of the model and examine its quantitative properties.
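The Bayesian learning step described above can be sketched as a standard two-regime filtering update: propagate beliefs through the regime transition matrix, then reweight by the likelihood of the observed money growth. The transition matrix, regime means, and observations below are hypothetical, not the authors' estimates.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_beliefs(prior, P, y, mus, sigmas):
    """One Bayesian learning step over two unobserved policy regimes."""
    # Predicted regime probabilities before seeing y.
    predicted = [sum(prior[i] * P[i][j] for i in range(2)) for j in range(2)]
    # Condition on the observed money growth y.
    joint = [predicted[j] * normal_pdf(y, mus[j], sigmas[j]) for j in range(2)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical calibration: a persistent low-growth and high-growth regime.
P = [[0.95, 0.05], [0.10, 0.90]]
beliefs = [0.5, 0.5]
for y in (0.002, 0.003, 0.012):     # observed money growth rates
    beliefs = update_beliefs(beliefs, P, y, mus=(0.002, 0.010), sigmas=(0.003, 0.003))
```

The persistence of the transition matrix is what produces the inertia in expectation adjustment: beliefs move toward the high-growth regime only after the data clearly favor it.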
99-7: A Bayesian VAR Forecasting Model for the Philadelphia Metropolitan Area by Theodore M. Crone and Michael P. McLaughlin
Vector-autoregression (VAR) forecast models have been developed for many state economies, including the three states in the Third Federal Reserve District — Pennsylvania, New Jersey, and Delaware. This paper extends that work by developing a Bayesian VAR forecast model for the Philadelphia metropolitan area and the city of Philadelphia.
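A full Bayesian VAR is beyond the scope of an abstract, but the shrinkage idea behind such models can be illustrated on a single equation: a conjugate normal prior pulls the lag coefficient toward a random walk, Minnesota-style. The function and the toy series below are illustrative assumptions, not the specification used in the paper.

```python
# Minnesota-style shrinkage on one AR(1) equation: a normal prior centered
# on a random walk (coefficient 1.0) is combined with the data likelihood.
def bayes_ar1_coef(y, prior_mean=1.0, prior_var=0.04, sigma2=1.0):
    x, z = y[:-1], y[1:]              # lagged and current values
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, z))
    # Conjugate normal posterior mean with known noise variance sigma2.
    precision = 1.0 / prior_var + sxx / sigma2
    return (prior_mean / prior_var + sxy / sigma2) / precision

# Hypothetical quarterly index levels: with this little data, the estimate
# stays between the random-walk prior and the least-squares value.
b = bayes_ar1_coef([1.0, 2.0, 3.0, 4.0])
```

As the prior variance grows, the posterior mean converges to the ordinary least-squares coefficient; tight priors keep short regional samples from producing erratic forecasts, which is the usual motivation for Bayesian VARs at the metropolitan level.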
99-8: Financial Contracts and the Legal Treatment of Informed Investors by Mitchell Berlin and Loretta J. Mester
The authors explore the economic rationale for equitable subordination, a legal doctrine that permits a firm's claimants to seek to subordinate an informed investor's financial claim in bankruptcy court. Fear of equitable subordination is often cited as a reason that banks in the U.S. are wary of taking an active management role in their borrowing firms. The authors show that an optimally designed menu of claims for a large investor will include features that resemble equitable subordination. The authors' model provides a partial rationale for a financial system in which powerful creditors do not generally hold blended debt and equity claims.
Recent papers have questioned the accuracy of the Bureau of Labor Statistics' methodology for measuring implicit rents for owner-occupied housing. The authors propose cross-checking the BLS statistics by using data on owner-occupied and rental housing from the American Housing Survey. A hedonic approach that explicitly calculates capitalization rates appears to be a feasible one for developing a methodologically consistent measure of the rental cost of owner-occupied housing.
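One minimal version of the hedonic approach is to fit separate log-rent and log-value hedonics and take the ratio of predicted annual rent to predicted value as the implied capitalization rate. The single size regressor and all sample figures below are hypothetical, not AHS data or the authors' specification.

```python
import math

def ols_slope_intercept(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

# Hypothetical samples: unit size (log sq ft) vs. log annual rent for
# rentals, and vs. log market value for owner-occupied units.
rent_size  = [math.log(s) for s in (700, 900, 1100, 1300)]
log_rent   = [math.log(r) for r in (8400, 9600, 11400, 12600)]
value_size = [math.log(s) for s in (800, 1000, 1200, 1400)]
log_value  = [math.log(v) for v in (120000, 150000, 185000, 210000)]

a_r, b_r = ols_slope_intercept(rent_size, log_rent)
a_v, b_v = ols_slope_intercept(value_size, log_value)

def implied_cap_rate(sqft):
    """Predicted annual rent over predicted value for a unit of this size."""
    s = math.log(sqft)
    return math.exp(a_r + b_r * s) / math.exp(a_v + b_v * s)
```

The implicit rent for an owner-occupied unit is then its predicted value times the capitalization rate for units with its characteristics, which is the cross-check the hedonic approach makes possible.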
A key finding to emerge from this study is that the widely studied suburbanization or decentralization of employment and population is only part of the story of postwar urban evolution. Another important part of the story is a postwar trend of relatively faster growth of jobs and people in the smaller and less dense MSAs (deconcentration). The authors find that postwar growth in employment (and to a lesser extent population) has favored metropolitan areas with smaller levels of employment (population) density. These trends are shared by major regions of the country and by manufacturing and nonmanufacturing employment. The fact that employment growth has favored MSAs with smaller levels of employment (or lower levels of employment density) indicates that economic processes favoring convergent (as opposed to parallel) metropolitan growth played an important role in the postwar era.
99-11: Financial Development and Economic Growth by Aubhik Khan
The author develops a theory of financial development based on the costs associated with the provision of external finance. These costs are assumed to arise within an environment where informational asymmetries between borrowers and lenders are costly to resolve. When borrowing is limited, producers with access to financial intermediary loans obtain higher returns to investment than other producers. This creates incentives for others to undertake the technology adoption necessary to access investment loans. Over time, as increasing numbers of producers gain access to external finance, borrowers' net worth rises relative to debt. This reduces the costs of financial intermediation and raises the overall return on investment. The theory is consistent with recent evidence that financial development reduces the costs associated with the provision of external finance and increases the rate of economic growth. Furthermore, the theory predicts that financial development raises the return on loans and reduces the spread between borrowing and lending rates.
99-12: Growth and Risk-Sharing with Private Information by Aubhik Khan and B. Ravikumar
The authors examine the impact of incomplete risk-sharing on growth and welfare. The source of market incompleteness in the economy is private information: a household's idiosyncratic productivity shock is not observable by others. Risk-sharing between households occurs through long-term contracts with intermediaries. The authors find that incomplete risk-sharing tends to reduce the rate of growth relative to the complete risk-sharing benchmark. Numerical examples indicate that the welfare cost and the growth effect of private information are small.
99-13: Exchange Rates and Monetary Policy Regimes in Canada and the U.S. by Keith Sill and Jeffrey Wrase
This paper examines monetary regime switching in Canada and the United States and the implications of regime switching for exchange rates and key nominal and real macroeconomic aggregates for the two countries. Evidence of Markov regime switching in the process governing monetary base growth and in the bilateral exchange rate between the two countries is presented. Given this evidence, a two-country general equilibrium monetary model is constructed to account for observed properties of the U.S.-Canadian dollar exchange rate and for measured effects of monetary policy on key variables. Agents in the model face a monetary policy process with regime switching and form beliefs about regimes and money growth using observations and Bayesian learning. With the driving process for money growth rates parameterized using estimates from U.S. and Canadian data, quantitative implications of the model for behaviors of exchange rates and other key variables are examined. The findings are that inclusion of learning by agents contributes somewhat to the model's ability to account for persistence in effects of money shocks on variables, provided that the shocks themselves are persistent; inclusion of learning contributes little in accounting for business cycle fluctuations and exchange rate variability; inclusion of a nonlinear driving process for money growth rates is important for the model to account for long swings in exchange rates; and inclusion of learning adds only slightly to the ability of the model to account for long swings. The importance of nonlinearities in the driving process and the relative lack of importance of learning are consistent with other findings in the literature on learning effects in the face of regime switches.
No abstract available.
This paper illustrates the use of a real-time data set for forecasting. The data set consists of vintages, or snapshots, of the major macroeconomic data available at quarterly intervals in real time. The paper explains the construction of the data set, examines the properties of several of the variables in the data set across vintages, and shows how forecasts can be affected by data revisions.
99-16: On Exchange Rate Regimes, Exchange Rate Fluctuations, and Fundamentals by Luca Dedola and Sylvain Leduc
The authors develop a two-country, two-sector general equilibrium business cycle model with nominal rigidities featuring deviations from the law of one price. The paper shows that a model with these features can quantitatively account for the empirical fact that, among the statistical properties of most macroeconomic variables, only the volatility of the real and nominal exchange rates changed dramatically after the fall of the Bretton Woods system. In particular, the authors replicate some explicit nonstructural tests proposed in the literature with simulated data from their artificial economy. The authors find that while the variability of observed fundamentals (e.g., output, money supply, and interest rates) is barely affected by the exchange rate regime, that of the exchange rate increases substantially under flexible rates.
99-17: Regime-Switching in Expectations Over the Business Cycle by Gwen Eudey and Roberto Perli
In this paper, the authors argue that a plausible reason why output and other major U.S. macroeconomic time series seem to follow a Markov switching process might be strictly related to expectations. The authors show that a time series of expectations of future output from the Survey of Professional Forecasters is the only one among the many they analyze that has switching properties compatible with those of output. Starting from this empirical evidence the authors present a business cycle model with shocks to expectations (sunspots) that produces time series with the same properties as the U.S. data.
99-18: Competitive Theories for Economies with General Transactions Technology by Satyajit Chatterjee and Dean Corbae
In this paper, the authors describe and compare two approaches to analyzing transactions costs in a general equilibrium setting. In the first approach, which the authors label the transactions costs approach, the commodity space is the same as that used in models without transactions costs. In the second approach, which the authors label the valuation equilibrium approach, the commodity space is chosen so that the exchange problem can be formulated as an instance of the abstract exchange model described in Debreu (1954). The authors argue that the valuation equilibrium approach provides a tractable framework for quantitative studies of the effects of transactions costs on economy-wide resource allocation.
99-19: Using State Indexes to Define Economic Regions in the U.S. by Theodore M. Crone
When regional economists study the interaction of multistate regions in the U.S., they typically use the regional divisions developed by the U.S. Bureau of the Census or the Bureau of Economic Analysis (BEA). The current census divisions were adopted in 1910 and divide the states into nine regional groups for the presentation of data. Since the 1950s, the BEA has grouped the states into eight regions based primarily on cross-sectional similarities in their socioeconomic characteristics. The BEA definition of regions is perhaps the most frequently used grouping of states for economic analysis.
Since many economic studies of regions concentrate on similarities and differences in regional business cycles, it seems appropriate to group states into regions based on some common cyclical behavior. This paper explores the possibility of grouping states into regions based on common movements in state indexes of economic activity. These state indexes are variants of the coincident index developed by James Stock and Mark Watson for the U.S. economy.
The author has applied cluster analysis to the monthly changes in these economic activity indexes to group the states into regions with similar business cycles. He has identified six distinct regions consisting of contiguous states with similar monthly changes in their economic activity indexes.
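The grouping step can be sketched as single-linkage clustering on pairwise correlations of monthly index changes: states whose indexes co-move strongly end up in the same region. The correlation threshold and the three toy series below are illustrative assumptions, not the paper's procedure or data.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cluster_states(changes, threshold=0.7):
    """Single-linkage grouping: states whose monthly index changes are
    correlated above `threshold` fall into the same region."""
    states = list(changes)
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            s = parent[s]
        return s
    for i, a in enumerate(states):
        for b in states[i + 1:]:
            if pearson(changes[a], changes[b]) > threshold:
                parent[find(b)] = find(a)
    regions = {}
    for s in states:
        regions.setdefault(find(s), []).append(s)
    return list(regions.values())

# Hypothetical monthly changes in three state activity indexes: the first
# two series move together, while the third follows a different cycle.
changes = {
    "PA": [0.20, 0.30, -0.10, 0.40, 0.10, -0.20],
    "NJ": [0.25, 0.28, -0.05, 0.35, 0.12, -0.15],
    "TX": [-0.10, 0.40, 0.20, -0.30, 0.50, 0.10],
}
regions = cluster_states(changes)
```

Grouping on co-movement rather than cross-sectional similarity is what distinguishes this approach from the census and BEA regional definitions described above.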
99-20: Schumpeterian Growth and Endogenous Business Cycles by Kerk Phillips and Jeffrey Wrase
This paper contains a dynamic general equilibrium model with an endogenous process for growth and business cycles driven partly by technological discovery and diffusion. The model integrates two branches of the literature. One is literature on Schumpeterian, or "quality ladder," models, in which growth is driven endogenously by attempts to innovate in order to capture monopoly rents and in which the focus is on low-frequency fluctuations in variables. The other is the real business cycle literature, in which the focus is on high-frequency fluctuations driven by exogenous productivity shocks. The model in this paper has Schumpeterian-style low-frequency fluctuations stemming from technological discovery in the form of random successes in endogenous research and development efforts. Diffusion of innovations in applied research into basic know-how, along with random shocks to productivity, drives high-frequency fluctuations. Properties of high- and low-frequency fluctuations in data drawn from simulations of a parameterized version of the model are compared to like properties of data drawn from the postwar U.S. economy. The model accounts for key properties of actual data without heavy reliance on the exogenous, highly persistent, and volatile shocks to productivity typically used in real business cycle analysis.
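A stripped-down sketch of the quality-ladder mechanism: productivity climbs a rung when a random R&D success occurs (the low-frequency component), while transitory shocks add high-frequency noise. All parameter values are invented for illustration, and the sketch omits the model's diffusion channel and general equilibrium structure.

```python
import random

def simulate(T, p_innovate=0.05, step=1.10, sigma=0.01, seed=0):
    """Productivity path with rare quality-ladder jumps plus i.i.d. shocks."""
    rng = random.Random(seed)
    know_how, path = 1.0, []
    for _ in range(T):
        if rng.random() < p_innovate:     # R&D success: climb one rung
            know_how *= step
        shock = rng.gauss(0.0, sigma)     # transitory productivity shock
        path.append(know_how * (1.0 + shock))
    return path

path = simulate(400)
```

In the simulated path, growth comes entirely from the endogenous (if stochastic) innovation arrivals, so the high-frequency shocks can be kept small, mirroring the paper's point that persistent, volatile exogenous productivity shocks are not needed.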
99-21: A Real-Time Data Set for Macroeconomists: Does the Data Vintage Matter? by Dean Croushore and Tom Stark
This paper presents a real-time data set that can be used by economists for testing the robustness of published econometric results, for analyzing policy, and for forecasting. The data set consists of vintages, or snapshots, of the major macroeconomic data available at quarterly intervals in real time. The paper illustrates why such data may matter, explains the construction of the data set, examines the properties of several of the variables in the data set across vintages, and examines key empirical papers in macroeconomics, investigating their robustness to different vintages.
The purpose of this paper is to provide a new framework to analyze the potential role of the federal tax treatment of housing in the patterns of metropolitan development. The framework the author uses to address the issue has a very different focus from that of the basic urban model. Following the work of Voith and Gyourko (1998), the author develops an equilibrium model of two communities, one with fixed boundaries and one without. The author calls the fixed-boundary community the city and the unbounded community the suburb. Individuals in these communities are assumed to have similar systematic tastes over housing and community amenities, but they also have an idiosyncratic preference for either the city or the suburb. For a given individual, the relative attractiveness of the city and the suburbs depends on his or her idiosyncratic taste, the relative amenities of the city and suburbs, and the relative price. Community amenities are endogenously determined and are assumed to depend on the distribution of high- and low-income individuals. High concentrations of low-income residents in a community potentially can adversely affect the attractiveness of the community. Within this framework, the author examines the residential choices of high- and low-income individuals with and without zoning constraints. Given these outcomes, the author evaluates the relative profitability of communities choosing exclusionary zoning or not by comparing the aggregate land values under both regimes.
In this framework, the author shows that housing-related tax incentives are likely to create incentives for suburban communities to enact exclusionary zoning. To the extent that these incentives actually result in more exclusionary zoning, they reinforce the marginal effects on decentralization and sorting that result from the tax code’s effects on individuals’ choices regarding land consumption and residential location. This is an important result because it suggests that the spatial and sorting impact of the tax treatment of housing may be larger than its effects on individuals’ choices of residential location and housing consumption alone. In fact, under reasonable parameterizations, the tax incentives can result in large changes in equilibrium land prices, community choices, and community characteristics.