Presented by Anthony M. Santomero
Federal Reserve Bank of Philadelphia
National Economists Club
April 7, 2005
It is a pleasure to be here today to talk to the National Economists Club. I welcome this opportunity to share my thoughts on the difficult task of conducting monetary policy for the U.S. In particular today, I would like to focus my comments on how policymaking is affected by both the availability and reliability of information on how well the economy is performing. At the outset, I want to emphasize that my remarks reflect my own thoughts on the subject and do not necessarily reflect the views of the Federal Open Market Committee.
We all know that monetary policy responds to economic circumstances and hence to incoming economic data. Therefore, I think it is important every once in a while to take a closer look at what we know about the data we rely on and, simultaneously, what we do not know about the economy from the information we have available.
Please keep in mind, however, that this is more than a philosophical discussion; after all is said and done, we must actually conduct monetary policy. So I would like to focus my remarks on what conducting real-time monetary policy is like in a world with less than perfect information about the economy we are attempting to affect.
Let me begin by reinforcing my view that appropriate monetary policymaking requires attention to long-run goals, not just short-term dynamics. Or to state it another way, our short-run actions must take account of our long-run objectives if we are to be prudent and successful central bankers.
The most important long-run goal of good monetary policy is straightforward enough: a responsible central bank must guarantee price stability. Price stability is crucial to a well-functioning market economy. Prices are signals to market participants. A stable overall price level allows people to see shifts in relative prices clearly and adjust their decisions about spending, saving, working, and investing optimally. Inflation, by contrast, jumbles and distorts price signals and generates suboptimal economic decisions.
For the past 25 years, the Fed has focused on the goal of price stability and has been relatively successful in achieving it. We took the economy from the double-digit inflation of the late 1970s to a current core inflation rate in the 1 to 2 percent range. This is price stability: inflation low enough that it no longer significantly influences economic decisions.
Equally important, as the relatively low level of market interest rates attests, we have succeeded in reducing inflation expectations over the past 15 years. Although some measures of near-term inflation expectations have shown an uptick over the past month, longer-run expectations are that inflation will remain well contained. This stability in long-run expectations is shown in both the 10-year inflation compensation measures based on TIPS (Treasury inflation-protected securities) and in our own Bank's Survey of Professional Forecasters. In our survey, long-term inflation expectations, measured as the average rate of change in the CPI (consumer price index) over the next 10 years, have held steady at 2.5 percent since early 1999.
Maintaining confidence in sustained price stability is crucial to fostering the most productive saving and investment decisions. In addition, it affords the Federal Reserve considerably more latitude to take short-run policy actions to help stabilize economic performance.
As you all know, the Federal Reserve is charged with setting monetary policy so as to meet our dual mandate of maintaining price stability and ensuring maximum sustainable output growth. When the economy is weak, monetary policy generally needs to be accommodative, and when the economy is growing strongly, policy needs to be tighter. In this way policy remains consistent with underlying economic fundamentals. It is entirely appropriate and consistent with our long-term goals for monetary policy to be countercyclical as long as we remain cognizant of the inflationary environment.
But we must all recognize that a central bank's power is limited. One thing we have learned — and it has been an expensive lesson — is that the best the Fed can do is cushion the economy. It cannot in and of itself force stronger growth than the economy is capable of delivering. Trying to push an economy beyond its potential may temporarily accelerate growth, but it also creates imbalances and increases inflationary pressures that must be addressed, and so boom leads to bust.
I believe that the Fed's policy over the past 25 years has demonstrated both its commitment to, and the value of, a stable price environment. Looking ahead, I am confident the Fed will take policy actions consistent with economic fundamentals and keep its focus on the longer-run objectives.
That said, I do not deny that conducting a successful monetary policy presents plenty of real-world challenges. It requires an evaluation of where the economy is, where it is going, and where it should be going. Therefore, the appropriate conduct of real-time monetary policy requires policymakers to gauge how strong or weak the economy is at any moment, what its most likely trajectory appears to be, and how that trajectory aligns with its long-run potential.
This requires a detailed appraisal of data and, importantly, of real-time data on the current state of the economy. Unfortunately, these data often give very noisy signals of what is really going on. This is what I wish to explore in my comments today in some detail.
So where does one start? As a policymaker one must first assess where one wants the economy to go. For the U.S. central bank, as I noted above, the goals of monetary policy have been made explicit in relevant legislation. The Federal Reserve is to maintain price stability and ensure maximum sustainable output growth.
The first challenge is to quantify each of these important objectives: What is the highest growth rate for output that is sustainable? What rate of unemployment represents full utilization of labor? What rate of inflation represents price stability? These are tremendously difficult concepts to pin down. Economists have taken many different approaches to establishing numerical guideposts for economic performance, but, as I will illustrate, these guideposts are very difficult to estimate in practice.
There is general agreement in macroeconomics that the relevant measure of real activity for policymaking is the so-called “output gap” between the level of actual output and an underlying level of potential output. This gap is important in that it not only provides an output objective, but it also provides information about possible future inflationary pressures. If the economy were to grow faster than potential output for a sustained period, inflation would be expected to accelerate. By contrast, economic growth slower than potential would lead to less than full employment.
But what is this level of potential GDP (gross domestic product), and how fast does it grow? Recent theoretical work suggests that this notional benchmark should be the level of output that occurs when all wages and prices are flexible and the economy fully adjusts to balance supply and demand in all markets. This is a reasonable concept, but since not all wages and prices are flexible, this output level cannot be observed directly. So in practice, this level of potential output is impossible to measure.
As a practical alternative, potential output is commonly interpreted to be the trend level of output. Unfortunately, there are many different ways to estimate trend output, each with its own set of issues. Sometimes these estimates have strikingly different implications for monetary policy.
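To make the measurement problem concrete, here is a purely illustrative sketch, with made-up numbers and a deliberately simple log-linear trend rather than any method actually used by the Federal Reserve. It shows how the estimated output gap for a given quarter can change once later data arrive and the trend is re-estimated.

```python
import numpy as np

def log_linear_trend_gap(log_gdp):
    """Fit a linear trend to log output; return the percent gap from that trend."""
    t = np.arange(len(log_gdp))
    slope, intercept = np.polyfit(t, log_gdp, 1)
    trend = intercept + slope * t
    return 100.0 * (log_gdp - trend)

# Purely illustrative series: quarterly log output whose trend growth slows
# partway through the sample, loosely mimicking a productivity slowdown.
rng = np.random.default_rng(0)
n = 60
growth = np.where(np.arange(n) < 40, 0.009, 0.004)
log_gdp = np.cumsum(growth) + 0.005 * rng.standard_normal(n)

# Real-time view: fit the trend using only the first 44 quarters of data.
gap_realtime = log_linear_trend_gap(log_gdp[:44])[-1]
# Retrospective view: re-estimate the same quarter's gap with the full sample.
gap_revised = log_linear_trend_gap(log_gdp)[43]

print(f"Gap for quarter 44, real-time estimate:   {gap_realtime:5.1f} percent")
print(f"Gap for quarter 44, full-sample estimate: {gap_revised:5.1f} percent")
```

The specific magnitudes here mean nothing; the point is simply that the end-of-sample gap estimate depends on how the trend is extrapolated, which is exactly where real-time estimates tend to go wrong.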
One reason that measures of potential GDP are difficult to estimate is that many factors — demographic and technological among them — affect potential output in any given period. So, potential output changes over time and can only be roughly estimated given current conditions.
For example, the “tech boom” of the 1990s, whose effects are still being analyzed today, has played a key role in determining U.S. potential output. However, the exact extent of the upward shift it caused in potential GDP is still uncertain. As we all know, the effect the technological revolution has had on the economy is still widely debated. Different interpretations of its effect on the economy lead to different estimates of potential GDP.
So with these difficulties in mind, how accurate have our estimates of potential GDP been? According to many researchers both inside and outside the Federal Reserve, the track record of contemporaneous measures of potential GDP is not encouraging. Comparing current estimates of the gap for the period from the mid-1960s to the mid-1990s with the estimates available at the time, one Fed researcher, Athanasios Orphanides, finds that the more recent measures of the output gap lie almost uniformly above the contemporaneous estimates: the real-time estimates of potential output over this period were systematically overly optimistic. He points to the late 1960s as a particularly striking example. At the time, the gap between actual GDP and potential GDP was believed to be about zero. With the benefit of hindsight, almost any current estimate would place that gap at nearly five percentage points, with actual output well above potential. Taken at face value, this divergence implies that policymakers did not recognize the considerable upward inflationary pressure to which the economy was subject at the time.
Another construct that has found a place in countercyclical monetary policy is NAIRU, or the non-accelerating inflation rate of unemployment — the unemployment rate at which inflation remains constant. The NAIRU model predicts that when unemployment is below the NAIRU, there is pressure for the inflation rate to rise; on the other hand, when unemployment is above the NAIRU, there is pressure for inflation to fall.
This too is a reasonable concept. Unfortunately, academic research has shown that estimates of the NAIRU are very imprecise, with large standard errors. Work by Jim Stock, Mark Watson, and others suggests that this imprecision is present both in models where the NAIRU is presumed constant and in models that allow it to vary over time. The conclusion is also robust to using alternative series for unemployment and inflation. My own research department estimated a 95 percent confidence interval for the NAIRU over the 1983 to 2004 period that runs from 3.4 percent to 5.9 percent. This is a fairly wide band of uncertainty.
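To fix ideas, the relationship underlying such estimates can be written, in its simplest stylized form (my own illustration, not the specification used in the studies just cited), as an accelerationist Phillips curve:

$$ \pi_t - \pi_{t-1} = -\beta\,(u_t - u^{*}) + \varepsilon_t, \qquad \beta > 0, $$

where $\pi_t$ is inflation, $u_t$ the unemployment rate, and $u^{*}$ the NAIRU. Loosely speaking, the confidence band for $u^{*}$ is wide because the estimated $\beta$ is small and the error term is noisy, so even sizable gaps between unemployment and the NAIRU produce only modest, hard-to-detect movements in inflation.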
The problem is that estimates with this level of imprecision are of limited use when conducting monetary policy. When policymakers are attempting to evaluate whether there is still slack in the labor market, or if any further decrease in unemployment may lead to inflationary pressures, it clearly would be preferable to have more precise estimates of NAIRU. For example, there are substantially different implications of our current 5.2 percent unemployment rate if we believe NAIRU is 3.4 percent or if we believe NAIRU is 5.9 percent or even if it is at the midpoint of 4.7 percent.
The imprecision of our estimates goes beyond real-sector economic statistics. Take, for example, price data. Price stability will be achieved, to paraphrase the Chairman, when inflation ceases to be a factor in the decision-making processes of businesses and individuals. While this is a reasonable definition, it neither provides an exact numerical inflation goal nor states which measure of inflation is most germane.
In terms of the latter, the debate has two parts: first, which price index should be used, and second, which measure of that index best describes inflation in today's economy. The two indices most often cited as relevant measures of inflation are the consumer price index, or CPI, and the chain price index for personal consumption expenditures, or the PCE price index. Which is more useful from a policymaker's perspective?
While the CPI and PCE price index are similar measures, they do vary in several important dimensions. One important difference is the scope of the spending they cover. The CPI is designed to approximate a typical consumer's cost of living and therefore covers direct out-of-pocket expenditures of households. PCE, on the other hand, is a broader index that includes some consumption that is government funded, such as Medicare and Medicaid, and some goods and services that are consumed without any explicit charge to the consumer.
Then there is the issue of whether to use a core measure of the chosen price index, that is, the index excluding the food and energy sectors, or to use the headline number. The argument in favor of a core price index is that food and energy are highly volatile components of either index and that much of this monthly volatility washes out over time. Of course, if the horizon over which one measures inflation is long enough, it should not matter much whether these volatile components are excluded, since their noise dissipates over the long run. Over shorter horizons, however, the core index arguably gives a cleaner reading of underlying price movements. On the other hand, those who argue against core measures point out that systematically excluding these sectors may discard useful information about current and future price movements.
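A stylized way to see the horizon point, under the strong assumption that the excluded components behave like mean-zero, serially uncorrelated noise: if monthly headline inflation is $\pi_t = \pi^{core}_t + v_t$ with $\mathrm{Var}(v_t) = \sigma_v^2$, then averaging over an $h$-month horizon gives

$$ \frac{1}{h}\sum_{i=1}^{h}\pi_{t+i} \;=\; \frac{1}{h}\sum_{i=1}^{h}\pi^{core}_{t+i} \;+\; \bar{v}, \qquad \mathrm{Var}(\bar{v}) = \frac{\sigma_v^2}{h}, $$

so the contribution of the volatile components shrinks as the horizon lengthens, which is why the core-versus-headline choice matters most for short-horizon readings.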
As some of you may know, I am on record as mildly favoring the core PCE deflator as my preferred measure of price inflation. The reason for this is that I believe the PCE deflator is a broader and more appropriate measure of underlying inflation than the CPI. Also, it is a chain-weighted index, and so it takes account of consumers' changing buying patterns as relative prices change. Therefore, to me, it reflects changes in the overall price level more accurately than the CPI, which is based on a fixed basket of goods and services. Using the core PCE also helps reduce the 'noise' in the inflation signal, enhancing its value as a monitoring device.
Listening to this, one might get the feeling that data problems loom so large in my mind that I have very little faith in, or use for, quantitative guideposts to economic performance. That would be taking my comments too far. These guideposts still contain important and relevant information for any policymaker. In fact, acknowledging their strengths and weaknesses enables one to use these admittedly imprecise estimates more effectively.
For example, if we look at the errors in measuring the level of potential output and the output gap, we must recognize that these statistical problems can be large and important. However, if we look at the growth of output relative to trend growth, we may find it a more reliable guidepost for policy evaluation, because the errors in the estimated levels may well offset one another.
This approach to policy suggests that policymakers may be better off looking at the growth rate of output relative to the growth rate of trend output and striving to achieve growth in aggregate demand approximately equal to the expected growth in potential aggregate supply.
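A simple way to see why the errors may offset: suppose the real-time estimate of the (log) output gap is $\hat{g}_t = g_t + e_t$, where $e_t$ is a mismeasurement of the level of potential output that is highly persistent, so that $e_t \approx e_{t-1}$. Then

$$ \Delta\hat{g}_t = \Delta g_t + (e_t - e_{t-1}) \approx \Delta g_t = \Delta y_t - \Delta y^{*}_t, $$

so comparing actual growth with trend growth is largely insulated from a persistent error in the estimated level of potential, even though the level of the gap itself may be badly mismeasured. This is an illustrative argument, not a claim that growth rates are measured without error.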
Thus far, I have been discussing the difficulties policymakers face in determining where the economy should be. The challenge of assessing where the economy actually is strikes me as only slightly less daunting. In truth, current economic conditions are not easy to measure accurately in real time. We receive data only with a lag, and preliminary data are notoriously unreliable. In short, the data about the current state of the economy are constantly changing, and history and recent events have shown that these changes can at times be large.
At the heart of the problem is that data releases on the current state of the economy rest on a combination of sampling, estimation, and imputation. We recognize the first of these (we do not count every item produced or every good sold), but the other issues surrounding timely data release have also proven to be quite important. There is a tradeoff here. In order to be timely, government agencies usually issue preliminary numbers before all the underlying information is available. Consequently, the data available to policymakers on the current state of the economy are often based on estimates and imputations. As more complete information becomes available, agencies regularly revise their data series, and the revisions can be significant.
For example, the Bureau of Economic Analysis (BEA), the government agency that issues the GDP data, releases its first report on the nation's GDP near the end of the month following the end of a quarter. That release is called the advance report. But at the time of the advance report, the BEA does not yet have complete information, so it makes projections about certain components of GDP from incomplete source data. As time goes on, the source data become more complete. But it usually is not until the following year that better information, such as income-tax records and economic census data, is available. So the GDP data undergo a continual process of revision.
We saw this type of revision with the most recent GDP release. The advance release suggested that the economy grew at 3.1 percent in the fourth quarter of last year, but the preliminary report put growth at a much more robust 3.8 percent. The final GDP release, issued just last week, confirmed that number, in spite of widespread speculation that it would be adjusted upward again. It seems that even the day before a release, significant uncertainty remains about the data, not only for the present but for the past as well.
Then, once in a while we make substantial changes in addition to regular revisions of the data. About every five years the government makes major changes, called benchmark revisions, to the data for the national income and product accounts. Benchmark revisions incorporate new sources of data and may also include changes in definitions of variables or changes in methodology. To be sure, these changes are necessary, in part because our economy is constantly changing: Different types of products enter the market and different accounting methods need to be used. But they can be disruptive.
For example, a major alteration in the 1999 benchmark revision changed the way the BEA classifies computer software purchased by business and government. Prior to the revision, such spending was counted as an office expense; since 1999, it has been counted as investment.
The most recent benchmark revision took place in 2003, and it incorporated several new pieces of information and more reliable source data, as well as using a new price index in nonresidential construction that takes account of quality change. Again, these are important changes, but they disrupt what we know or thought we knew.
At the Federal Reserve Bank of Philadelphia we take these data revisions very seriously. We have developed a data set that gives a snapshot of the macroeconomic data available at any given date in the past. We call the information set available at a particular date a “vintage,” and we call the collection of such vintages a “real-time data set.” Using the real-time data set, one can pick a point in history and see exactly what data policymakers had at their disposal at that time.
For example, suppose we were to look at the growth rate of real output for the first quarter of 1977. The first time real output for that quarter was reported, the national income and product accounts showed that real output grew 5.2 percent; that is the reading in our May 1977 vintage of the real-time data set. Over time this estimate was updated and changed as better and more accurate data on that quarter became available. Today, when we look at the national income and product accounts, the growth rate of real output for the first quarter of 1977 is listed as 4.9 percent.
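The idea can be illustrated with a toy version of such a data set, indexing each observation by both the quarter it describes and the vintage in which the estimate appeared. The layout and the lookup function below are my own simplified illustration, not the actual format of our published files; the only figures used are the two readings just cited.

```python
import pandas as pd

# Toy real-time data set: each row carries the quarter being measured and the
# "vintage" (publication date) of the estimate. Only the two readings quoted
# in the text are included; an actual vintage file holds far more.
realtime = pd.DataFrame({
    "obs_quarter": ["1977Q1", "1977Q1"],
    "vintage":     ["1977-05", "2005-04"],
    "growth_pct":  [5.2, 4.9],
})

def as_of(df, obs_quarter, vintage):
    """Return the estimate for obs_quarter as it stood at the given vintage."""
    available = df[(df["obs_quarter"] == obs_quarter) & (df["vintage"] <= vintage)]
    return available.sort_values("vintage")["growth_pct"].iloc[-1]

print(as_of(realtime, "1977Q1", "1977-06"))  # 5.2: what policymakers saw in 1977
print(as_of(realtime, "1977Q1", "2005-04"))  # 4.9: today's estimate of the same quarter
```

Asking for 1977:Q1 as of mid-1977 returns the 5.2 percent figure policymakers actually saw; asking for the same quarter as of today returns 4.9 percent.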
Now that we have established that data revisions occur, the logical next question is how much these revisions matter for our understanding of current economic conditions. Not surprisingly, revisions in any particular quarter can be substantial. In addition, our research suggests that these benchmark revisions can continue for some time and significantly alter our view of the economy over longer periods.
For example, consider the inflation rate from 1975 to 1979 as measured by the PCE deflator. In 1995, the data showed inflation averaging 7.7 percent over that period. But it was subsequently revised down to 7.2 percent in the 1999 benchmark revisions of the data. Similarly, real output growth from 1955 to 1959 was as low as 2.7 percent in the 1995 benchmark vintage of the data, but as high as 3.2 percent in the 1999 benchmark vintage.
In short, our real-time data set indicates that data are revised extensively over time, and subsequent vintages of the data may paint a much different economic picture than earlier vintages. For my purposes today, I would point out that our real-time data set allows us to see exactly what the economy looked like to policymakers at the time they made their decisions. Are there episodes where the data available to policymakers in real time indicated they were in a much different situation than the subsequently revised data show they were? I believe there are.
Consider the situation in early October 1992. Today's data tell us the economy was in pretty good shape in late 1992. Real output grew 4.2 percent in the first quarter, 3.9 percent in the second quarter, and 4.0 percent in the third quarter. But if you read accounts from that time, policymakers were clearly worried about whether the economy was recovering from the recession, and they were contemplating actions to stimulate the economy. Why were policymakers so worried? According to the data available to them, the economy had grown just 2.9 percent in the first quarter and 1.5 percent in the second quarter. Statistics for the third quarter had not yet been released, but forecasts suggested that economic growth had not picked up much from the second quarter's anemic 1.5 percent. In addition, a number of monthly indicators pointed to a decline in the economy. Later, many of these indicators were also revised up significantly. The point is that policymakers assessing their situation in October 1992 saw themselves in a much weaker economic environment than we now know they were.
Let me offer another example with more contemporary relevance. Over the past month we have made public several new variables in the real-time data set. Two variables of particular interest are personal saving and disposable income. They are especially interesting because together they determine the personal saving rate in the national income accounts: the personal saving rate is defined as personal saving divided by disposable personal income.
Recently there has been a lot of talk about today's low personal saving rate in the U.S. Many economists have worried that the low personal saving rate may signal an impending slowdown in consumption growth and presage a decline in aggregate demand. In light of this discussion, it is interesting to ask: how good is our measurement of the current saving rate?
An examination of the historical data and their subsequent revisions shows that the measured saving rate for a given period can look very different today than it did at the time. For example, according to today's statistics, the personal saving rate peaked, on an annual basis, in the early 1980s. At the time, however, the early 1980s did not appear to be a period of high saving. As reported in the second quarter of 1980, the first-quarter 1980 personal saving rate was 3.4 percent, the lowest since 1950 and down from a peak of 9.7 percent in the second quarter of 1975. Today, by contrast, the first-quarter 1980 saving rate is reported to be 9.5 percent, and much of the revision has been fairly recent. Over the course of time, it was revised upward by 6.1 percentage points.
Nor were the saving data for early 1980 exceptional. In fact, the average saving rate between 1965 Q3 and 1999 Q2 has been revised up by 2.8 percentage points over time. The revision process has typically been upward and surprisingly large.
Why such large revisions? Personal saving is the difference between two aggregates, disposable personal income and personal outlays, and these two series are collected from distinct bodies of data. Disposable personal income is the largest component of gross domestic income, which also includes retained corporate income, government income from taxes and other sources, and capital consumption. The income data are collected from payroll records, Internal Revenue Service income tax filings, and corporate profit reports. Personal outlays consist almost entirely of personal consumption expenditures, the largest component of GDP. These data are collected from the revenues of retailers, service providers such as hospitals and hotels, and so forth.
Among the immediately available data, the more complete and reliable figures are on the demand, or product, side; this is the source of GDP. Income-side data are aggregated into gross domestic income, which is conceptually the same as GDP but in practice has differed from it by as much as 2.3 percent, the so-called statistical discrepancy. Typically, income is undercounted. All this suggests that our measure of the saving rate is both somewhat suspect, because of substantial measurement error, and subject to substantial revision. In fact, large variations in personal saving across time have typically been revised away.
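A back-of-the-envelope illustration of why the saving rate is so sensitive to revisions, using made-up round numbers rather than actual BEA figures: because saving is the small difference between two large aggregates, a modest upward revision to measured income produces a large upward revision to the measured saving rate.

```python
# Hypothetical round numbers, not BEA data: saving is the small residual
# between two large, separately measured aggregates.
disposable_income = 100.0
personal_outlays = 98.0

saving_rate = 100 * (disposable_income - personal_outlays) / disposable_income
print(f"Initially reported saving rate: {saving_rate:.1f} percent")   # 2.0 percent

# Suppose later source data (tax records, say) revise income up by just 2 percent.
revised_income = disposable_income * 1.02
revised_rate = 100 * (revised_income - personal_outlays) / revised_income
print(f"Saving rate after the revision: {revised_rate:.1f} percent")  # about 3.9 percent
```

In this hypothetical, a 2 percent revision to income nearly doubles the measured saving rate, from 2.0 percent to about 3.9 percent.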
Will this happen again this time? We do not know for sure, but the contention that the current low estimate of the personal saving rate implies slower consumption growth in the future may be incorrect, since benchmark revisions may well produce a substantial upward revision to the current estimate. This is a good example of the difficulty a policymaker faces when forced to respond to data that have traditionally been revised significantly.
Having established that information about our monetary policy goals is imprecise and that our understanding of even current economic conditions is imperfect, what is a policymaker to do? Put another way, what are the implications of these real-world uncertainties and imperfections in information for the proper conduct of monetary policy?
Here I can offer a few observations. The first of these is that we must remember that we live in a data-rich environment. There are many pieces of economic data that can be examined when making policy decisions, and no one piece of data ought to get too much weight. As my examples indicate, when data are measured imprecisely, putting too much emphasis on any one number can lead to problems.
Second, it should be remembered that some of the imprecision fades with time. As I have said many times before, high frequency data tend to be highly volatile and subject to substantial revision. A policymaker must look at available data in a broad context.
In this most recent business cycle, employment was very slow to return to its pre-recession level; in fact, nonfarm payroll employment only recently reached that level. As a result, a lot of emphasis was placed on the monthly payroll growth numbers. When a good number was reported, people would assert that the labor market had finally returned to solid growth; when a bad number was reported, people grew concerned. The fact is that the standard error for the one-month change in payrolls is nearly 70,000, so making too much of any one monthly number is ill-advised. Meanwhile, monthly employment gains have averaged nearly 180,000 jobs over the past 12 months. Given all the data issues discussed today, it is important not to overemphasize short-term deviations while ignoring long-term trends.
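A rough illustration of what a standard error of that size means for interpreting a single monthly report; the reported change below is hypothetical, and the 70,000 figure is simply the standard error cited above.

```python
# The 70,000 figure is the standard error cited above; the reported monthly
# change is hypothetical.
std_error = 70_000
reported_change = 110_000

# A rough 90 percent confidence interval spans about +/- 1.65 standard errors.
low = reported_change - 1.65 * std_error
high = reported_change + 1.65 * std_error
print(f"Plausible range for the true change: {low:,.0f} to {high:,.0f} jobs")
# Roughly -5,500 to 225,500: consistent with both a weak and a strong month.
```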
Third, it is important not to focus exclusively on quantitative data. Our interpretation of the numbers must be nuanced by real-world experience. As a Reserve Bank president I gather information from around my district and around the country. I believe it is of crucial importance to have ties and open communication with leaders in the worlds of business and finance. We need insight from both Main Street and Wall Street when trying to understand the underlying dynamics of the aggregate economy. I also listen to reports from my board of directors on how they see the economy performing in their sectors. In addition, the Philadelphia Federal Reserve has set up several advisory councils and ad hoc roundtables that meet for the sole purpose of discussing how they see the economy progressing and the current state of price pressures in the economy.
If I see some small signs of inflation coming through in the data and I hear from these contacts that they are raising their prices and they are constantly facing higher input prices, those small signs of inflation would be more of a concern than if my contacts were not reporting evidence of price pressure.
This type of touch and feel of the marketplace is of great import and is one of the benefits of the decentralized structure of the Federal Reserve System. The fact that there are 12 Reserve Banks allows us to gather a large amount of regional intelligence that adds depth to our understanding of current economic conditions.
Beyond all this, the fact that uncertainty surrounds the state of the economy and that new economic information arrives on a nearly continuous basis supports the notion that it makes sense for policymakers to move in a slow and cautious manner.
William Brainard, the well-known Yale economist, made the case for gradualism in a classic article that is now nearly four decades old. He suggested that policymakers should be conservative in the face of incomplete information, meaning that their policy responses should be attenuated. In fact, he argued that policymakers operating in a world of uncertainty should compute the direction and magnitude of their optimal policy response and then do less.
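A compact textbook rendering of Brainard's argument (a stylized sketch, not a quotation from the article): suppose the policy instrument $x$ affects the target $y$ through an uncertain multiplier, $y = b\,x + u$, where $b$ has mean $\bar{b}$ and variance $\sigma_b^2$, and $u$ has mean $\bar{u}$ and is uncorrelated with $b$. Choosing $x$ to minimize the expected squared miss $E[(y - y^{*})^2]$ gives

$$ x^{opt} = \frac{\bar{b}}{\bar{b}^2 + \sigma_b^2}\,\bigl(y^{*} - \bar{u}\bigr), $$

which is smaller in absolute value than the certainty-equivalent response $(y^{*} - \bar{u})/\bar{b}$: the greater the uncertainty about the policy multiplier, the more attenuated the optimal action.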
This type of attenuated policy action has several intuitive benefits. First, it guides the economy in a particular direction while making it less likely that policymakers will overshoot the goal. Second, by moving slowly, policymakers have time to assess the effects of their actions on the economy and to update their views on what further action is needed. As Chairman Greenspan has explained, monetary policymaking is an exercise in risk management. The case for gradualism rests on the assessment that the cost of taking too large an action exceeds the cost of taking too small an action.
However, the story does not end here. While it is true that moving in a gradual manner reduces the chances of overshooting with all its attendant costs, the policymaker cannot afford to be consistently behind the curve. Given that monetary policy affects the economy with long and variable lags, there is a chance that by acting in this attenuated fashion we will undershoot the optimal policy stance. This can be at least as costly as overshooting. Our challenge is to weigh these costs and respond appropriately to the data and attendant risks involved.
Our experience during the most recent business cycle underscores the need to be flexible in choosing the speed with which we respond to unfolding economic developments. This was a cycle noteworthy for the uncertainties surrounding it and the large number of shocks that occurred along the way. In the months following the sharp stock market decline, it was unclear how rapidly economic activity was decelerating. Once it became clear, the Federal Reserve responded aggressively, ultimately cutting the target federal funds rate to a record low 1 percent. On the other side, in light of the uncertain and attenuated pattern of recovery and expansion, the Federal Reserve has taken a gradualist approach to removing the monetary accommodation and returning to a more neutral policy stance.
Because the Fed must respond to incoming information differently in different situations, the Fed must communicate the rationale for its actions as clearly as possible in order to maintain public confidence in its commitment to its long-run goals.
Of course, this openness has been an important aspect of recent monetary policy. The FOMC has been moving toward increased transparency for some time, and its communication with the markets has improved greatly over the past decade. Information about the Fed's policy goals, its assessment of the current economic situation, and its strategic direction are increasingly part of the public record.
Recently, the Federal Reserve has also taken action to expedite the release of the minutes of FOMC meetings. Just this year the FOMC began releasing the minutes of each meeting prior to the next meeting. For example, the minutes of our March 22 meeting will be released next Tuesday. The minutes not only report our decisions concerning immediate action but also convey our sense of the key factors driving near-term economic developments and the strategic tilt to our actions going forward.
The goal of all these steps toward increased transparency is to inform markets about where the FOMC sees the economy today and where it thinks the economy is headed. We hope this information will improve the markets' understanding of our view of the economy and offer insights into the direction of possible future policy actions.
All of these actions are steps in the right direction. It is important for the FOMC to be as open as possible. My hope is that by providing relevant information about our view of the economy and our current areas of concern, our actions will be more transparent and surprises will be the exception rather than the rule. With the benefit of hindsight I think we can say that we have come a long way in this regard, as the list of changes I just offered you suggests.
With that, let me conclude. During the course of my comments I have tried to convey to you some of the challenges we face as monetary policymakers operating in a world of imperfect information.
Given our mission, the lagged effect of our policy actions, and the inevitable imprecision with which an economy as large and complex as ours can be measured, these challenges will not go away. So, we must find ways to meet them.
Some of the problems I have discussed suggest that we often must rely on our theoretical knowledge of economics as we make decisions that affect the economy.
In addition, data gathered from our regional contacts are also of value in this process, even while the national data change shape with the arrival of new information that leads to their revision. There is value in listening and gathering the perspectives of our Reserve Bank boards of directors, advisory councils, and other regular contacts in our Districts.
Another part of the solution is to take care in choosing the pace at which we act and react to incoming data. Gradualism has a role to play in monetary policy, but not at the expense of falling behind the curve.
The last element of the solution I will mention is transparency and increased communication with market participants. Communication is an important part of operating in the real world of imperfect information. The increased transparency that has been the hallmark of the Greenspan Fed is an important part of optimal monetary policy in a world of uncertainty.
Thank you for listening.