Modeling to Make the Most of Data

Economic modeling is often driven by discovery or development of new data sets. For many substantive applications, the data does not simply “speak for itself,” making it important to build structural models to interpret the evidence in meaningful ways.

For example, the availability of data from national income and product accounts was an early influence on formal macroeconomic modeling. As other evidence on the economic behavior of individuals and firms became available, builders of dynamic economic models incorporated microeconomic foundations in part to bring a broader array of evidence to bear on macroeconomic policy challenges. Similarly, econometricians built and applied new methods for panel data analysis to better understand the empirical implications of microeconomic data with statistical rigor.

As large cross-sections of financial market returns became easily accessible to researchers, asset pricing theorists built models featuring risk-return tradeoffs that give an economic interpretation to the observable patterns in financial market data. In all of these cases, data availability provoked new modeling efforts, and these efforts were crucial in bringing new evidence to bear on important policy-relevant economic questions.

Rapid growth in computing power and the expansion of electronic marketplaces and information sharing to all corners of life have vastly expanded the data available to individuals, enterprises, and policy makers. Important computational advances in data science have opened the door to analyses of massive new data sets, potentially offering new insights into a variety of questions important in economic analysis. The richness of this new data provides flexible ways to make predictions about individual and market behavior, for example, assessing the credit risk of borrowers taking out loans and anticipating the implications for markets of consumer goods, including housing.

These are topics explored at previous institute events, such as the Macro Financial Modeling 2016 Conference, held on January 27–29, 2016. One example is the work of Matthew Gentzkow and Jesse Shapiro and a related conference on the use of text as data. Another is the construction and use of new measures of policy uncertainty developed by Scott R. Baker, Nicholas Bloom, and Steve Davis.

Just as statisticians have sought to provide rigor to the inferential methods used in data analysis, econometricians now face new challenges in enriching these modeling efforts beyond straightforward data description and prediction. While the data may be incredibly rich along some dimensions, many policy-relevant questions require the ability to transport this richness into other hypothetical settings, as is often required when we wish to know likely responses to new policies or changes in the underlying economic environment. This more subtle but substantively important form of prediction requires both economic and statistical modeling to fully exploit the richness of the data and the power of the computational methods.

The door is wide open for important new advances in economic modeling well suited to truly learn from the new data. By organizing conferences like the September 23–24, 2016 event, the Becker Friedman Institute is nurturing a crucial next step: determining how best to integrate formal economic analysis with these new data and methods to address key policy questions. We seek to foster communication among scholars from computer science, statistics, and economics as they address new research challenges. This conference, organized by Stephane Bonhomme, John Lafferty, and Thibaut Lamadon, will encourage synergistic research efforts across computational statistics and econometrics.

— Lars Peter Hansen, Becker Friedman Institute Director