Addressing the Uncertainty Behind Economic Policy: Conversation with Charles Evans

Explaining the thinking behind policy for a standing-room-only audience of about 150, the Chicago Federal Reserve Bank president said his analysis is followed by a key caveat: “I could be wrong about that.” Tempering action with such caution is essential when the economic stakes are so high, and Fed officials have to consider the different ways they could err and which types of errors would be most costly for the economy.

For instance, Charles Evans said keeping the funds rate so low for so long to bring down unemployment could have been a mistake if it led to high inflation. “But I could be wrong about that too, and inflation could be too low. [Those are] two big problems. If inflation increases, we know how to deal with that. On the other side, the risk [of deflation] is very different, and I think more deadly.”

Evans’ remarks framed a candid conversation with Lars Peter Hansen that focused on a topic that the 2013 Nobel laureate meditates on frequently: uncertainty. Evans and his fellow policymakers on the Federal Open Market Committee deal with such uncertainty and the very real economic consequences of being wrong every day. Hansen wrestles with uncertainty on a more theoretical basis, studying ways to incorporate it in macroeconomic models like those the Fed relies on.

What if the Federal Reserve could test different policy treatments to address inflation, unemployment, or slow growth? Hansen joked that the statistician in him would like to see more experimentation from the Fed to test counterfactuals; it could produce interesting new data.

Evans pointed out that the Fed can’t exactly split the economy into segments and run experiments. But he agreed that he’d like to see more testing and more competition among different models. He mentioned research he has conducted with Lawrence Christiano and Martin Eichenbaum to document factors that make statistical models more robust.

Researchers can try to encapsulate as much of that data in their models as possible, but no model can be reconciled 100 percent with every bit of real-world economic activity.

The pair also agreed that they’d both like to see more acknowledgement of uncertainty in the policy realm. Hansen asked Evans what prevents this. Evans pointed to the difficulty of forecasting economic outcomes. “As a policymaker, we have to have opinions about how a wide variety of data are going to turn out,” said Evans. “But writing down a coherent model, subject to some discipline, that’s a lot of restrictions to place on the data. The data are not going to agree with all of the predictions of the model.”

Hansen cited editorials in the Wall Street Journal and the New York Times that use overly simplistic data to draw opposing conclusions on the effectiveness of Keynesian stimulus. “There’s scope for the greatest divergence in opinions when historical evidence is weak,” said Hansen. Not acknowledging the unknown “opens the data that’s out there to multiple interpretations, often dramatically different opinions of the same data. In my view, it diminishes the discourse around public policy.”

Evans agreed: “The way I would like debates like this to carry forward would be that someone brings their analysis to the table, and they make predictions, like [the Fed is] willing to do.” He noted that coherent models make very specific predictions for how the economy is expected to evolve over the next few years. Policymakers also make specific forecasts, he added, but “with the acknowledgement that we might have the model wrong in mind.” Robust policy prescriptions, he said, should specify not only how policy should evolve if events work out as expected, but also how it should react if predictions turn out to be wrong.
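The kind of explicit, rule-like prescription Evans describes can be illustrated with the textbook Taylor (1993) rule, which maps inflation and the output gap to a recommended funds rate. The rule and parameter values below are the standard textbook ones, offered only as an illustration; they are not the FOMC’s actual reaction function.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Textbook Taylor (1993) rule, all quantities in percent.

    Prescribed funds rate = equilibrium real rate + inflation
    + 0.5 * (inflation gap) + 0.5 * (output gap).

    Illustrative only: a coherent rule like this makes a very
    specific prediction for policy, which is exactly what exposes
    it to being wrong when the economy surprises.
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# At target inflation (2%) with a closed output gap,
# the rule prescribes a 4% funds rate.
print(taylor_rule(2.0, 0.0))  # → 4.0
```

A robust prescription in Evans’ sense would pair such a rule with a stated contingency: how the rate path changes if inflation or the output gap deviates from forecast.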

One way the Fed has tried to acknowledge and reduce uncertainty is with forward guidance: statements about how the Fed is likely to adjust policy if economic conditions evolve as predicted. Given the data dependence of such statements, Hansen wondered: “Forward guidance [is] not always so clear; it seems to be an agenda for allowing flexibility. Is the point to buy the Fed some wiggle room?”

Evans said that while laying out the thinking of Fed policymakers in advance does help people and firms anticipate shifts in monetary policy, the realities of keeping markets stable and achieving consensus among the 17 members of the FOMC mean that policy must be conditional, since it is shaped by a wide array of information. “I’m amenable to explicit numbers, but it’s hard to do,” said Evans.

Turning to the issue of forecasting, Hansen asked about how the Fed uses financial markets to guide policy or spot asset pricing bubbles and respond to them. Evans said the Fed does use market data, but pointed out that market data tend to be noisy. For instance, Treasury Inflation-Protected Securities can be used to measure inflation compensation demanded by investors. However, changing market conditions can affect the nature of the information conveyed by these prices. Nonetheless, even with significant measurement issues, asset prices can provide useful information regarding investors’ attitudes towards future inflation outcomes.

As for bubbles, Evans agreed with Hansen that they are always easier to spot in hindsight. He cited former Fed chair Ben Bernanke’s analysis suggesting it would be better to clean up after bubbles than to intervene with unpredictable consequences.

Fed regulators must weigh the systemic effects of their actions, including unintended consequences, however imperfectly, said Evans. The role that Hansen and his colleagues studying macrofinance can play? Questioning policymakers’ methods of monitoring and evaluation, and challenging them to do better.

–Mark Riechers and Toni Shears