Research Topic: Risk, Robustness and Ambiguity
We study how a concern for robustness modifies a policymaker’s incentive to experiment. A policymaker has a prior over two submodels of inflation-unemployment dynamics. One submodel implies an exploitable trade-off, the other does not. Bayes’ law gives the policymaker an incentive to experiment. The policymaker fears that both submodels and his prior probability distribution over them are misspecified. We compute decision rules that are robust to misspecifications of each submodel and of the prior distribution over submodels. We compare robust rules to ones that Cogley, Colacito, and Sargent (2007) computed assuming that the models and the prior distribution are correctly specified. We explain how the policymaker’s desires to protect against misspecifications of the submodels, on the one hand, and misspecifications of the prior over them, on the other, have different effects on the decision rule.
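As an illustration of the learning channel this abstract describes, the sketch below (not taken from the paper; the Gaussian submodels, parameter names, and numbers are invented for the example) applies Bayes’ law to the prior weight on one of two submodels after a single observation.

```python
# Minimal sketch, not taken from the paper: Bayes' law for the prior weight on
# submodel 1 when each submodel predicts the next observation with a Gaussian
# density. The names (alpha, mu1, sigma1, ...) and numbers are illustrative.
import math

def gaussian_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def update_model_weight(alpha, y, mu1, sigma1, mu2, sigma2):
    """Posterior probability on submodel 1 after observing y, starting from prior weight alpha."""
    l1 = gaussian_pdf(y, mu1, sigma1)   # likelihood of y under submodel 1
    l2 = gaussian_pdf(y, mu2, sigma2)   # likelihood of y under submodel 2
    return alpha * l1 / (alpha * l1 + (1.0 - alpha) * l2)

# A surprising observation moves the weight toward the submodel that explains it
# better; anticipating such movements is what gives Bayes' law an incentive to experiment.
print(update_model_weight(alpha=0.5, y=1.8, mu1=2.0, sigma1=1.0, mu2=0.0, sigma2=1.0))
```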
Recursive Robust Estimation and Control without Commitment
In a Markov decision problem with hidden state variables, a posterior distribution serves as a state variable and Bayes’ law under an approximating model gives its law of motion. A decision maker expresses fear that his model is misspecified by surrounding it with a set of alternatives that are nearby when measured by their expected log likelihood ratios (entropies). Martingales represent alternative models. A decision maker constructs a sequence of robust decision rules by pretending that a sequence of minimizing players choose increments to martingales and distortions to the prior over the hidden state. A risk sensitivity operator induces robustness to perturbations of the approximating model conditioned on the hidden state. Another risk sensitivity operator induces robustness to the prior distribution over the hidden state. We use these operators to extend the approach of Hansen and Sargent [Discounted linear exponential quadratic Gaussian control, IEEE Trans. Automat. Control 40(5) (1995) 968–971] to problems that contain hidden states.
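The following sketch is not the authors’ code; it is a minimal finite-state illustration of how two exponential-tilting (risk-sensitivity) operators of the kind described above can be composed, one conditioned on the hidden state and one applied to the prior over hidden states. All arrays and penalty parameters are invented for the example.

```python
# Minimal finite-state sketch, not the authors' code: two risk-sensitivity
# (exponential-tilting) operators composed as described above. All arrays and
# the penalty parameters theta1, theta2 are invented for the example.
import numpy as np

def risk_sensitive(values, probs, theta):
    """-theta * log E[exp(-V / theta)]; approaches the ordinary mean as theta grows large."""
    return -theta * np.log(np.dot(probs, np.exp(-np.asarray(values) / theta)))

# Continuation values for two shock realizations, conditioned on each of two hidden states.
V = np.array([[1.0, 3.0],    # hidden state 0
              [0.5, 2.5]])   # hidden state 1
shock_probs = np.array([0.5, 0.5])    # approximating shock distribution, given the hidden state
prior = np.array([0.6, 0.4])          # prior over the hidden state
theta1, theta2 = 2.0, 1.0             # smaller theta = more concern for misspecification

# First operator: robustness to the shock distribution, conditioned on the hidden state.
T1 = np.array([risk_sensitive(V[s], shock_probs, theta1) for s in range(2)])
# Second operator: robustness to the prior over the hidden state.
T2 = risk_sensitive(T1, prior, theta2)
print(T1, T2)
```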
Robust Control and Model Misspecification
A decision maker fears that data are generated by a statistical perturbation of an approximating model that is either a controlled diffusion or a controlled measure over continuous functions of time. A perturbation is constrained in terms of its relative entropy. Several different two-player zero-sum games that yield robust decision rules are related to one another, to the max–min expected utility theory of Gilboa and Schmeidler [Maxmin expected utility with non-unique prior, J. Math. Econ. 18 (1989) 141–153], and to the recursive risk-sensitivity criterion described in discrete time by Hansen and Sargent [Discounted linear exponential quadratic Gaussian control, IEEE Trans. Automat. Control 40 (5) (1995) 968–971]. To represent perturbed models, we use martingales on the probability space associated with the approximating model. Alternative sequential and nonsequential versions of robust control theory imply identical robust decision rules that are dynamically consistent in a useful sense.
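For intuition about the entropy penalty in these games, here is a small finite-state sketch in illustrative notation (not the paper’s): a minimizing player chooses a likelihood-ratio distortion m ≥ 0 with E[m] = 1 to minimize E[mV] + θE[m log m], and the minimizer is an exponential tilt of the approximating probabilities.

```python
# Small finite-state sketch in illustrative notation, not the paper's: choose a
# likelihood-ratio distortion m >= 0 with E[m] = 1 to minimize E[m V] + theta * E[m log m].
# The minimizer is an exponential tilt of the approximating probabilities.
import numpy as np

def worst_case_distortion(values, probs, theta):
    """Likelihood ratio m*(x) proportional to exp(-V(x) / theta), normalized so E[m*] = 1."""
    m = np.exp(-np.asarray(values) / theta)
    return m / np.dot(probs, m)

def relative_entropy(m, probs):
    """E[m log m] under the approximating model."""
    return np.dot(probs, m * np.log(m))

probs = np.array([0.25, 0.5, 0.25])     # approximating model (three states)
V = np.array([2.0, 1.0, -1.0])          # continuation values
m_star = worst_case_distortion(V, probs, theta=1.5)
print(m_star, relative_entropy(m_star, probs))   # the distortion shifts mass toward low-value states
```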
Robust Estimation and Control Under Commitment
In a Markov decision problem with hidden state variables, a decision maker expresses fear that his model is misspecified by surrounding it with a set of alternatives that are nearby as measured by their expected log likelihood ratios (entropies). Sets of martingales represent alternative models. Within a two-player zero-sum game under commitment, a minimizing player chooses a martingale at time 0. Probability distributions that solve distorted filtering problems serve as state variables, much like the posterior in problems without concerns about misspecification. We state conditions under which an equilibrium of the zero-sum game with commitment has a recursive representation that can be cast in terms of two risk-sensitivity operators. We apply our results to a linear quadratic example that makes contact with findings of T. Başar and P. Bernhard [H∞-Optimal Control and Related Minimax Design Problems, second ed., Birkhäuser, Basel, 1995] and P. Whittle [Risk-Sensitive Optimal Control, Wiley, New York, 1990].
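As a rough illustration only (the construction and numbers are invented, not taken from the paper), the sketch below forms an ordinary Bayesian posterior over a two-valued hidden state and then reweights it by an exponential tilt in continuation values, producing a pessimistically distorted posterior of the kind that serves as a state variable in the commitment game.

```python
# Rough illustration only; the construction and numbers are invented, not the
# paper's. Form an ordinary posterior over a two-valued hidden state, then
# reweight it by an exponential tilt in continuation values to produce a
# pessimistically distorted posterior.
import numpy as np

def bayes_posterior(prior, likelihoods):
    """Ordinary Bayes' law for a finite hidden state."""
    post = np.asarray(prior) * np.asarray(likelihoods)
    return post / post.sum()

def tilted_posterior(posterior, values, theta):
    """Reweight a posterior by exp(-V / theta), shifting probability toward low-value states."""
    w = np.asarray(posterior) * np.exp(-np.asarray(values) / theta)
    return w / w.sum()

prior = np.array([0.5, 0.5])            # prior over two hidden states
likelihoods = np.array([0.8, 0.3])      # density of the observed data under each hidden state
V = np.array([1.0, -2.0])               # continuation values by hidden state
post = bayes_posterior(prior, likelihoods)
print(post, tilted_posterior(post, V, theta=1.0))
```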
A Quartet of Semigroups for Model Specification, Robustness, Prices of Risk and Model Detection
A representative agent fears that his model, a continuous time Markov process with jump and diffusion components, is misspecified and therefore uses robust control theory to make decisions. Under the decision maker’s approximating model, cautious behavior puts adjustments for model misspecification into market prices for risk factors. We use a statistical theory of detection to quantify how much model misspecification the decision maker should fear, given his historical data record. A semigroup is a collection of objects connected by something like the law of iterated expectations. The law of iterated expectations defines the semigroup for a Markov process, while similar laws define other semigroups. Related semigroups describe (1) an approximating model; (2) a model misspecification adjustment to the continuation value in the decision maker’s Bellman equation; (3) asset prices; and (4) the behavior of the model detection statistics that we use to calibrate how much robustness the decision maker prefers. Semigroups 2, 3, and 4 establish a tight link between the market price of uncertainty and a bound on the error in statistically discriminating between an approximating and a worst case model.
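The following Monte Carlo sketch uses illustrative parameters (not the paper’s calibration) to compute a detection error probability of the sort used to discipline the preference for robustness: simulate data under the approximating and worst-case models and record how often a likelihood-ratio comparison selects the wrong one.

```python
# Monte Carlo sketch with illustrative parameters, not the paper's calibration:
# a detection error probability for telling an approximating model (mean mu_a)
# from a worst-case model (mean mu_w) with T i.i.d. Gaussian observations.
import numpy as np

rng = np.random.default_rng(0)

def log_lik_ratio(x, mu_a, mu_w, sigma):
    """log likelihood(approximating) - log likelihood(worst case) for an i.i.d. Gaussian sample."""
    return np.sum(((x - mu_w) ** 2 - (x - mu_a) ** 2) / (2.0 * sigma ** 2))

def detection_error_prob(mu_a, mu_w, sigma, T, n_sims=2000):
    errors_a = errors_w = 0
    for _ in range(n_sims):
        xa = rng.normal(mu_a, sigma, T)                          # data generated by the approximating model
        xw = rng.normal(mu_w, sigma, T)                          # data generated by the worst-case model
        errors_a += log_lik_ratio(xa, mu_a, mu_w, sigma) < 0.0   # wrong model wins on approximating data
        errors_w += log_lik_ratio(xw, mu_a, mu_w, sigma) > 0.0   # wrong model wins on worst-case data
    return 0.5 * (errors_a / n_sims + errors_w / n_sims)

# Values near 0.5 mean the two models are statistically hard to distinguish with this sample size.
print(detection_error_prob(mu_a=0.0, mu_w=0.1, sigma=1.0, T=200))
```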
Robust Control and Model Uncertainty
Robust Permanent Income and Pricing
“… I suppose there exists an extremely powerful, and, if I may so speak, malignant being, whose whole endeavours are directed toward deceiving me.” René Descartes, Meditations, II.