Robustness and Ambiguity in Continuous Time
We use statistical detection theory in a continuous-time environment to provide a new perspective on calibrating a concern about robustness or an aversion to ambiguity. A decision maker repeatedly confronts uncertainty about state transition dynamics and a prior distribution over unobserved states or parameters. Two continuous-time formulations are counterparts of two discrete-time recursive specifications of Hansen and Sargent (2007). One formulation shares features of the smooth ambiguity model of Klibanoff et al. (2005, 2009). Here our statistical detection calculations guide how to adjust contributions to entropy coming from hidden states as we take a continuous-time limit.
@article{hansensargent:2011robustness, title={Robustness and Ambiguity in Continuous Time}, author={Hansen, Lars Peter and Sargent, Thomas J.}, journal={Journal of Economic Theory}, volume={146}, number={3}, pages={1195--1223}, year={2011}, publisher={Elsevier} }
Risk Price Dynamics
We present a novel approach to depicting asset-pricing dynamics by characterizing shock exposures and prices for alternative investment horizons. We quantify the shock exposures in terms of elasticities that measure the impact of a current shock on future cash flow growth. The elasticities are designed to accommodate nonlinearities in the stochastic evolution modeled as a Markov process. Stochastic growth in the underlying macroeconomy and stochastic discounting in the representation of asset values are central ingredients in our investigation. We provide elasticity calculations in a series of examples featuring consumption externalities, recursive utility, and jump risk.
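As a purely illustrative sketch (the lognormal cash-flow model, shock loadings, and horizons below are our invention, not taken from the paper), a shock-exposure elasticity in a stylized i.i.d.-shock setting can be approximated by Monte Carlo as the ratio E[G_t ε_1]/E[G_t], the risk-adjusted impact of the date-1 shock on horizon-t growth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized lognormal cash-flow growth: log G_t = sum_{j=1}^t (mu + beta_j * eps_j)
# with i.i.d. standard normal shocks eps_j. The horizon-t exposure elasticity with
# respect to the date-1 shock is E[G_t * eps_1] / E[G_t], which here equals beta_1.
mu = 0.01
beta = np.array([0.2, 0.1, 0.05, 0.025])  # hypothetical shock loadings

def exposure_elasticity(t, n_sims=400_000):
    eps = rng.normal(size=(n_sims, t))
    g = np.exp(np.sum(mu + beta[:t] * eps, axis=1))
    return np.mean(g * eps[:, 0]) / np.mean(g)

print(exposure_elasticity(1))  # close to beta[0] = 0.2
print(exposure_elasticity(4))  # the date-1 shock enters growth once, so still ~0.2
```

In the Markov models the paper studies, the elasticity varies with the horizon and the current state; the flat profile here is an artifact of the i.i.d. simplification.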
This paper was originally presented as the Journal of Financial Econometrics Lecture at the June 2009 SoFiE conference.
@article{bhhs:2011, title={Risk-Price Dynamics}, author={Borovi{\v{c}}ka, Jaroslav and Hansen, Lars Peter and Hendricks, Mark and Scheinkman, Jos{\'e} A.}, journal={Journal of Financial Econometrics}, volume={9}, number={1}, pages={3--65}, year={2011}, publisher={Oxford University Press} }
Wanting Robustness in Macroeconomics
Robust control theory is a tool for assessing decision rules when a decision maker distrusts either the specification of transition laws or the distribution of hidden state variables or both. Specification doubts inspire the decision maker to want a decision rule to work well for a set of models surrounding his approximating stochastic model. We relate robust control theory to the so-called multiplier and constraint preferences that have been used to express ambiguity aversion. Detection error probabilities can be used to discipline empirically plausible amounts of robustness. We describe applications to asset pricing uncertainty premia and design of robust macroeconomic policies.
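The detection error probabilities mentioned above can be sketched with a small Monte Carlo experiment; the two Gaussian models and sample sizes below are hypothetical, chosen only to show that nearby models are hard to distinguish in short samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(x, mu, sigma):
    """Gaussian i.i.d. log likelihood of each simulated path (rows of x)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2), axis=-1)

def detection_error_prob(mu_a, mu_b, sigma, T, n_sims=10_000):
    """Average probability that a likelihood-ratio test picks the wrong model."""
    xa = rng.normal(mu_a, sigma, size=(n_sims, T))  # data generated by model A
    p_a = np.mean(log_lik(xa, mu_b, sigma) > log_lik(xa, mu_a, sigma))
    xb = rng.normal(mu_b, sigma, size=(n_sims, T))  # data generated by model B
    p_b = np.mean(log_lik(xb, mu_a, sigma) > log_lik(xb, mu_b, sigma))
    return 0.5 * (p_a + p_b)

# With a short sample the two nearby models are nearly indistinguishable
# (detection error close to 1/2); longer samples drive the error toward zero.
print(detection_error_prob(0.0, 0.1, 1.0, T=50))
print(detection_error_prob(0.0, 0.1, 1.0, T=800))
```

A high detection error probability says a statistician could not easily reject the alternative model, so it is an empirically plausible candidate against which to seek robustness.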
@article{hansens:2000wanting, title={Wanting Robustness in Macroeconomics}, author={Hansen, Lars Peter and Sargent, Thomas J.}, journal={Manuscript, Department of Economics, Stanford University. Website: www.stanford.edu/sargent}, volume={4}, year={2000} }
Robust Hidden Markov LQG Problems
For linear quadratic Gaussian problems, this paper uses two risk-sensitivity operators defined by Hansen and Sargent (2007b) to construct decision rules that are robust to misspecifications of (1) transition dynamics for state variables and (2) a probability density over hidden states induced by Bayes’ law. Duality of risk sensitivity to the multiplier version of min–max expected utility theory of Hansen and Sargent (2001) allows us to compute risk-sensitivity operators by solving two-player zero-sum games. Because the approximating model is a Gaussian probability density over sequences of signals and states, we can exploit a modified certainty equivalence principle to solve four games that differ in continuation value functions and discounting of time t increments to entropy. The different games express different dimensions of concerns about robustness. All four games give rise to time consistent worst-case distributions for observed signals. But in Games I–III, the minimizing players’ worst-case densities over hidden states are time inconsistent, while Game IV is an LQG version of a game of Hansen and Sargent (2005) that builds in time consistency. We show how detection error probabilities can be used to calibrate the risk-sensitivity parameters that govern fear of model misspecification in hidden Markov models.
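To convey the flavor of a risk-sensitivity operator, here is a minimal scalar sketch (all parameter values are hypothetical, and the scalar no-hidden-state setup is our simplification of the paper's hidden Markov problem): the continuation value coefficient P is replaced by the worst-case adjustment D(P) = P + P²C²/(θ − C²P), and θ → ∞ recovers the ordinary Riccati iteration:

```python
import numpy as np

# Scalar LQ problem: x' = A x + B u + C w, period loss Q x^2 + R u^2,
# discount factor beta. A finite risk-sensitivity parameter theta lets a
# minimizing player distort the shock distribution, which shows up as the
# adjustment D(P) applied to the continuation value coefficient P.
A, B, C = 0.9, 1.0, 0.5
Q, R, beta = 1.0, 1.0, 0.95

def solve_riccati(theta, tol=1e-12, max_iter=100_000):
    P = Q
    for _ in range(max_iter):
        D = P + P**2 * C**2 / (theta - C**2 * P)  # worst-case adjustment
        P_new = Q + beta * A**2 * D - (beta * A * B * D)**2 / (R + beta * B**2 * D)
        if abs(P_new - P) < tol:
            break
        P = P_new
    return P_new

P_standard = solve_riccati(theta=np.inf)  # no fear of misspecification
P_robust = solve_riccati(theta=5.0)       # robust decision maker
print(P_standard, P_robust)               # the robust P is larger (more cautious)
```

A larger value-function coefficient under finite θ reflects the extra caution that the worst-case distortion induces; the adjustment is valid only while θ exceeds the breakdown point C²P.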
@article{hms:2010, title={Robust Hidden Markov LQG Problems}, author={Hansen, Lars Peter and Mayer, Ricardo and Sargent, Thomas J.}, journal={Journal of Economic Dynamics and Control}, volume={34}, number={10}, pages={1951--1966}, year={2010}, publisher={Elsevier} }
Modeling and Measuring Systemic Risk
An important challenge worthy of NSF support is to quantify systemic financial risk. There are at least three major components to this challenge: modeling, measurement, and data accessibility. Progress on this challenge will require extending existing research in many directions and will require collaboration between economists, statisticians, decision theorists, sociologists, psychologists, and neuroscientists.
@article{bhkkl:2010, title={Modeling and Measuring Systemic Risk}, author={Brunnermeier, Markus K. and Hansen, Lars Peter and Kashyap, Anil K. and Krishnamurthy, Arvind and Lo, Andrew W.}, year={2010} }
Fragile Beliefs and the Price of Uncertainty
A representative consumer uses Bayes’ law to learn about parameters of several models and to construct probabilities with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over models and parameters. The consumer’s specification doubts induce him to slant probabilities pessimistically. The pessimistic probabilities tilt toward a model that puts long-run risks into consumption growth. That contributes a countercyclical history-dependent component to prices of risk.
@article{hansensargent:2010, title={Fragile Beliefs and the Price of Uncertainty}, author={Hansen, Lars Peter and Sargent, Thomas J.}, journal={Quantitative Economics}, volume={1}, number={1}, pages={129--162}, year={2010}, publisher={Wiley Online Library} }
Pricing Kernels and Stochastic Discount Factors
@article{hansenrenault:2009, title={Pricing Kernels and Stochastic Discount Factors}, author={Hansen, Lars Peter and Renault, Eric}, journal={Encyclopedia of Quantitative Finance}, pages={1--17}, year={2009}, publisher={Wiley} }
Nonlinearity and Temporal Dependence
Nonlinearities in the drift and diffusion coefficients influence temporal dependence in diffusion models. We study this link using three measures of temporal dependence: ρ-mixing, β-mixing and α-mixing. Stationary diffusions that are ρ-mixing have mixing coefficients that decay exponentially to zero. When they fail to be ρ-mixing, they are still β-mixing and α-mixing; but coefficient decay is slower than exponential. For such processes we find transformations of the Markov states that have finite variances but infinite spectral densities at frequency zero. The resulting spectral densities behave like those of stochastic processes with long memory. Finally we show how state-dependent, Poisson sampling alters the temporal dependence.
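As a concrete illustration of exponentially decaying temporal dependence (using an Ornstein-Uhlenbeck process of our choosing, not an example from the paper), the lag-τ autocorrelation of a stationary OU process is exp(−κτ), which a simulated path reproduces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler discretization of the Ornstein-Uhlenbeck diffusion dX = -kappa X dt + sigma dW.
kappa, sigma, dt, n = 0.5, 1.0, 0.01, 400_000
x = np.empty(n)
x[0] = 0.0
shocks = sigma * rng.normal(0.0, np.sqrt(dt), n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - kappa * x[t] * dt + shocks[t]

def autocorr(series, lag):
    return np.corrcoef(series[:-lag], series[lag:])[0, 1]

# For this process the lag-tau autocorrelation is exp(-kappa * tau):
for tau in (1.0, 2.0, 4.0):
    print(tau, autocorr(x, int(tau / dt)), np.exp(-kappa * tau))
```

Diffusions with weaker mean reversion in the tails of the state space instead produce autocorrelations that decay more slowly than any exponential, which is the regime the paper characterizes.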
@article{chc:2010, title={Nonlinearity and Temporal Dependence}, author={Chen, Xiaohong and Hansen, Lars Peter and Carrasco, Marine}, journal={Journal of Econometrics}, volume={155}, number={2}, pages={155--169}, year={2010}, publisher={Elsevier} }
Operator Methods for Continuous-Time Markov Processes
This chapter surveys relevant tools, based on operator methods, to describe the evolution in time of a continuous-time stochastic process over different time horizons. Applications include modeling the long-run stationary distribution of the process, modeling the short- or intermediate-run transition dynamics of the process, estimating parametric models via maximum likelihood, implications of the spectral decomposition of the generator, and various observable implications and tests of the characteristics of the process.
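As a toy illustration of the operator point of view (the three-state chain and its intensity matrix are our invention), the semigroup P(t) = exp(tG) generated by an intensity matrix G delivers both the transition dynamics at any horizon and, in the long-horizon limit, the stationary distribution:

```python
import numpy as np

# Hypothetical 3-state continuous-time Markov chain with generator (intensity
# matrix) G: rows sum to zero, off-diagonal entries are jump intensities.
G = np.array([[-0.3, 0.2, 0.1],
              [0.1, -0.4, 0.3],
              [0.2, 0.2, -0.4]])

def transition_matrix(G, t):
    """P(t) = exp(tG) via eigendecomposition (assumes G is diagonalizable)."""
    w, V = np.linalg.eig(G)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# Short-run dynamics: P(t) is approximately I + tG for small t.
print(transition_matrix(G, 0.01))

# Long run: the rows of P(t) converge to the stationary distribution.
print(transition_matrix(G, 100.0))
```

For diffusions the generator is a second-order differential operator rather than a matrix, but the same semigroup logic, including its spectral decomposition, applies.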
@article{ahs:2008, title={Operator Methods for Continuous-Time Markov Processes}, author={A{\"i}t-Sahalia, Yacine and Hansen, Lars Peter and Scheinkman, Jos{\'e} A.}, journal={Handbook of Financial Econometrics}, volume={1}, pages={1--66}, year={2008} }
Nonlinear Principal Components and Long Run Implications of Multivariate Diffusions
We investigate a method for extracting nonlinear principal components (NPCs). These NPCs maximize variation subject to smoothness and orthogonality constraints; but we allow for a general class of constraints and multivariate probability densities, including densities without compact support and even densities with algebraic tails. We provide primitive sufficient conditions for the existence of these NPCs. By exploiting the theory of continuous-time, reversible Markov diffusion processes, we give a different interpretation of these NPCs and the smoothness constraints. When the diffusion matrix is used to enforce smoothness, the NPCs maximize long-run variation relative to the overall variation subject to orthogonality constraints. Moreover, the NPCs behave as scalar autoregressions with heteroskedastic innovations; this supports semiparametric identification and estimation of a multivariate reversible diffusion process and tests of the overidentifying restrictions implied by such a process from low-frequency data. We also explore implications for stationary, possibly nonreversible diffusion processes. Finally, we suggest a sieve method to estimate the NPCs from discretely-sampled data.
@article{chs:2009, title={Nonlinear Principal Components and Long-Run Implications of Multivariate Diffusions}, author={Chen, Xiaohong and Hansen, Lars Peter and Scheinkman, Jos{\'e}}, journal={The Annals of Statistics}, volume={37}, number={6B}, pages={4279--4312}, year={2009}, publisher={Institute of Mathematical Statistics} }