Click on the items listed below to see some recent research material.
This early paper gives a continuous-time, stochastic formulation of robust control theory and a characterization of prices. Although the paper was substantially revised and given a new title, the original manuscript remains interesting, is cited in our later work, and provided the impetus for much of it. The published paper is "A Quartet of Semigroups for Model Specification, Robustness, Prices of Risk and Model Detection." View at JSTOR
For each of three types of ambiguity, we compute a robust Ramsey plan and an associated worst-case probability model. Ex post, ambiguity of type I implies endogenously distorted homogeneous beliefs, while ambiguities of types II and III imply distorted heterogeneous beliefs. Martingales characterize alternative probability specifications and clarify distinctions among the three types of ambiguity. We use recursive formulations of Ramsey problems to impose local predictability of commitment multipliers directly. To reduce the dimension of the state in a recursive formulation, we transform the commitment multiplier to accommodate the heterogeneous beliefs that arise with ambiguity of types II and III. Our formulations facilitate comparisons of the consequences of these alternative types of ambiguity.
We use statistical detection theory in a continuous-time environment to provide a new perspective on calibrating a concern about robustness or an aversion to ambiguity. A decision maker repeatedly confronts uncertainty about state transition dynamics and a prior distribution over unobserved states or parameters. Two continuous-time formulations are counterparts of two discrete-time recursive specifications of Hansen and Sargent (2007). One formulation shares features of the smooth ambiguity model of Klibanoff et al. (2005, 2009). Here our statistical detection calculations guide how to adjust contributions to entropy coming from hidden states as we take a continuous-time limit.
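The detection-error logic can be illustrated with a small simulation. The sketch below is not from the paper; it assumes two illustrative Gaussian AR(1) models, an approximating model A and an alternative B, and estimates the probability that a likelihood-ratio comparison confuses them in a finite sample:

```python
import numpy as np

# Minimal sketch: detection error probability between an approximating
# AR(1) model A and an alternative model B with a distorted drift.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

def simulate(rho, sigma, T):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t-1] + sigma * rng.standard_normal()
    return x

def loglik(x, rho, sigma):
    e = x[1:] - rho * x[:-1]
    return -0.5 * np.sum(e**2) / sigma**2 - len(e) * np.log(sigma)

rho_A, rho_B, sigma, T, N = 0.9, 0.85, 0.1, 200, 2000
mistakes_A = mistakes_B = 0
for _ in range(N):
    xA = simulate(rho_A, sigma, T)   # data generated under A
    xB = simulate(rho_B, sigma, T)   # data generated under B
    # a mistake occurs when the wrong model attains the higher likelihood
    mistakes_A += loglik(xA, rho_B, sigma) > loglik(xA, rho_A, sigma)
    mistakes_B += loglik(xB, rho_A, sigma) > loglik(xB, rho_B, sigma)

# detection error probability: average of the two misclassification rates
p = 0.5 * (mistakes_A / N + mistakes_B / N)
print(f"detection error probability ~ {p:.3f}")
```

Models that are harder to tell apart in samples of this length produce detection error probabilities nearer one half, and so rationalize larger amounts of robustness.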
We provide small noise expansions for the value function and decision rule for the recursive risk-sensitive preferences specified by Hansen and Sargent (1995), Hansen, Sargent and Tallarini (1999), and Tallarini (2000). We use the expansions (1) to provide a fast method for approximating solutions of dynamic stochastic problems, and (2) to quantify the effects on decisions of uncertainty and concerns about robustness to misspecification.
Robust control theory is a tool for assessing decision rules when a decision maker distrusts either the specification of transition laws or the distribution of hidden state variables or both. Specification doubts inspire the decision maker to want a decision rule to work well for a set of models surrounding his approximating stochastic model. We relate robust control theory to the so-called multiplier and constraint preferences that have been used to express ambiguity aversion. Detection error probabilities can be used to discipline empirically plausible amounts of robustness. We describe applications to asset-pricing uncertainty premia and to the design of robust macroeconomic policies.
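As a hedged sketch of the two preference orderings mentioned above (the notation is mine, not the paper's): let U denote utility under the approximating model, m >= 0 a likelihood-ratio distortion with E[m] = 1, theta a penalty parameter, and eta a relative-entropy budget. Schematically,

```latex
\text{multiplier:}\quad \min_{m \ge 0,\; E[m]=1}\; E[mU] + \theta\, E[m\log m],
\qquad
\text{constraint:}\quad \min_{\substack{m \ge 0,\; E[m]=1 \\ E[m\log m]\le \eta}}\; E[mU].
```

The multiplier formulation penalizes entropy; the constraint formulation caps it, and the two are linked through the Lagrange multiplier on the entropy constraint.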
A representative consumer uses Bayes’ law to learn about parameters of several models and to construct probabilities with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over models and parameters. The consumer’s specification doubts induce him to slant probabilities pessimistically. The pessimistic probabilities tilt toward a model that puts long-run risks into consumption growth. That contributes a countercyclical history-dependent component to prices of risk.
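A minimal sketch of the pessimistic slanting described above, assuming an exponential-tilting (multiplier-style) adjustment; the function name, the parameter theta, and the numbers are illustrative, not the paper's:

```python
import numpy as np

# Pessimistic tilting: posterior model probabilities are exponentially
# tilted by continuation values, pi_tilted(i) ~ pi(i) * exp(-V_i / theta).
def pessimistic_tilt(pi, V, theta):
    w = np.asarray(pi) * np.exp(-np.asarray(V) / theta)
    return w / w.sum()

pi = np.array([0.6, 0.4])   # Bayesian posterior over two models
V = np.array([1.0, 0.7])    # continuation values under each model
print(pessimistic_tilt(pi, V, theta=0.5))  # shifts weight toward the low-value model
```

The tilt raises the probability of the model with the lower continuation value, which is the sense in which specification doubts slant beliefs pessimistically.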
Reinterpreting most of the market price of risk as a price of model uncertainty eradicates a link between asset prices and measures of the welfare costs of aggregate fluctuations that was proposed by Hansen et al., Tallarini, and Alvarez and Jermann [Lars Peter Hansen, Thomas Sargent, Thomas Tallarini, Robust permanent income and pricing, Rev. Econ. Stud. 66 (1999) 873–907; Thomas D. Tallarini, Risk-sensitive real business cycles, J. Monet. Econ. 45 (3) (2000) 507–532; Fernando Alvarez, Urban J. Jermann, Using asset prices to measure the cost of business cycles, J. Polit. Econ. 112 (6) (2004) 1223–1256]. Prices of model uncertainty contain information about the benefits of removing model uncertainty, not the consumption fluctuations that Lucas [Robert E. Lucas Jr., Models of Business Cycles, Basil Blackwell, Oxford and New York, 1987; Robert E. Lucas Jr., Macroeconomic priorities, American Economic Review, Papers and Proceedings 93 (2003) 1–14] studied. A max–min expected utility theory lets us reinterpret Tallarini's risk-aversion parameter as measuring a representative consumer's doubts about the model specification. We use model detection instead of risk-aversion experiments to calibrate that parameter. Plausible values of detection error probabilities give prices of model uncertainty that approach the Hansen and Jagannathan [Lars Peter Hansen, Ravi Jagannathan, Implications of security market data for models of dynamic economies, J. Polit. Econ. 99 (1991) 225–262] bounds. Fixed detection error probabilities give rise to virtually identical asset prices as well as virtually identical costs of model uncertainty for Tallarini's two models of consumption growth.
We study how a concern for robustness modifies a policy maker’s incentive to experiment. A policy maker has a prior over two submodels of inflation-unemployment dynamics. One submodel implies an exploitable trade-off, the other does not. Bayes’ law gives the policy maker an incentive to experiment. The policy maker fears that both submodels and his prior probability distribution over them are misspecified. We compute decision rules that are robust to misspecifications of each submodel and of the prior distribution over submodels. We compare robust rules to ones that Cogley, Colacito and Sargent (2007) computed assuming that the models and the prior distribution are correctly specified. We explain how the policy maker’s desires to protect against misspecifications of the submodels, on the one hand, and misspecifications of the prior over them, on the other, have different effects on the decision rule.
This essay examines the problem of inference within a rational expectations model from two perspectives: that of an econometrician and that of the economic agents within the model. The assumption of rational expectations has been and remains an important component of quantitative research. It endows economic decision makers with knowledge of the probability law implied by the economic model. As such, it is an equilibrium concept. Imposing rational expectations removed from consideration the need for separately specifying beliefs or subjective components of uncertainty. Thus, it simplified model specification and implied an array of testable implications that differ from those considered previously. It reframed policy analysis by questioning the effectiveness of policy levers that induce outcomes that differ systematically from individual beliefs.
In a Markov decision problem with hidden state variables, a posterior distribution serves as a state variable and Bayes’ law under an approximating model gives its law of motion. A decision maker expresses fear that his model is misspecified by surrounding it with a set of alternatives that are nearby when measured by their expected log likelihood ratios (entropies). Martingales represent alternative models. A decision maker constructs a sequence of robust decision rules by pretending that a sequence of minimizing players choose increments to martingales and distortions to the prior over the hidden state. A risk sensitivity operator induces robustness to perturbations of the approximating model conditioned on the hidden state. Another risk sensitivity operator induces robustness to the prior distribution over the hidden state. We use these operators to extend the approach of Hansen and Sargent [Discounted linear exponential quadratic Gaussian control, IEEE Trans. Automat. Control 40(5) (1995) 968–971] to problems that contain hidden states.
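A minimal finite-state sketch of the two risk sensitivity operators described above, in the spirit of Hansen and Sargent; the matrices, continuation values, and theta parameters are illustrative assumptions rather than the paper's objects:

```python
import numpy as np

def T1(V, P_cond, theta):
    # robustness to the state transition law, state by state:
    # T1(V)(z) = -theta * log E[exp(-V/theta) | z]
    return -theta * np.log(P_cond @ np.exp(-V / theta))

def T2(W, prior, theta):
    # robustness to the prior over the hidden state:
    # T2(W) = -theta * log sum_z prior(z) * exp(-W(z)/theta)
    return -theta * np.log(prior @ np.exp(-W / theta))

V = np.array([1.0, 0.2, -0.5])            # continuation values by state
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])        # approximating transition law
prior = np.array([0.5, 0.3, 0.2])         # distribution over hidden states
print(T2(T1(V, P, theta=5.0), prior, theta=2.0))
```

Smaller values of theta express more distrust; as theta grows without bound each operator converges to the ordinary conditional expectation.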
A decision maker fears that data are generated by a statistical perturbation of an approximating model that is either a controlled diffusion or a controlled measure over continuous functions of time. A perturbation is constrained by relative entropy. Several two-player zero-sum games yield robust decision rules and are related to one another and to the max-min expected utility theory of Gilboa and Schmeidler (1989). Alternative sequential and non-sequential versions of robust control theory imply identical robust decision rules that are dynamically consistent in a useful sense.
This paper studies robust decision problems with hidden state variables. It gives a recursive implementation of the commitment solution with discounting from robust control theory. The recursive implementation shows formally how discounting and commitment are encoded in the robust decision rules. We suggest other recursive formulations of the decision problem that are attractive alternatives to the commitment solution.
A representative agent fears that his model, a continuous-time Markov process with jump and diffusion components, is misspecified and therefore uses robust control theory to make decisions. Under the decision maker’s approximating model, cautious behavior puts adjustments for model misspecification into market prices for risk factors. We use a statistical theory of detection to quantify how much model misspecification the decision maker should fear, given his historical data record. A semigroup is a collection of objects connected by something like the law of iterated expectations. The law of iterated expectations defines the semigroup for a Markov process, while similar laws define other semigroups. Related semigroups describe (1) an approximating model; (2) a model misspecification adjustment to the continuation value in the decision maker’s Bellman equation; (3) asset prices; and (4) the behavior of the model detection statistics that we use to calibrate how much robustness the decision maker prefers. Semigroups 2, 3, and 4 establish a tight link between the market price of uncertainty and a bound on the error in statistically discriminating between an approximating and a worst-case model.
This paper describes links between the max-min expected utility theory of Itzhak Gilboa and David Schmeidler (1989) and the applications of robust-control theory proposed by Evan Anderson et al. (2000) and Paul Dupuis et al. (1998).
Dynamic stochastic equilibrium models of the macro economy are designed to match the macro time series, including impulse response functions. Since these models aim to be structural, they also have implications for asset pricing. To assess these implications, we explore asset pricing counterparts to impulse response functions. We use the resulting dynamic value decomposition (DVD) methods to quantify the exposures of macroeconomic cash flows to shocks over alternative investment horizons and the corresponding prices or compensations that investors must receive because of the exposure to such shocks. We build on the continuous-time methods developed in Hansen and Scheinkman (2010), Borovicka et al. (2011) and Hansen (2011) by constructing discrete-time shock elasticities that measure the sensitivity of cash flows and their prices to economic shocks, including economic shocks featured in the empirical macroeconomics literature. By design, our methods are applicable to economic models that are nonlinear, including models with stochastic volatility. We illustrate our methods by analyzing the asset pricing model of Ai et al. (2010) with tangible and intangible capital.
In this white paper we identify the need for innovative research to improve our ability to quantify systemic financial risk. There are at least three major components to this challenge: modeling, measurement, and data accessibility. Progress on this challenge will require extending existing research in many directions and will require collaboration between economists, statisticians, decision theorists, sociologists, psychologists, and neuroscientists. This paper was submitted as part of the American Economic Association's response to the 2010 National Science Foundation call for white papers "to frame innovative research for the year 2020 and beyond" in the social and behavioral sciences.
Sparked by the recent "great recession" and the role of financial markets, considerable interest exists among researchers within both the academic community and the public sector in modeling and measuring systemic risk. In this essay I draw on experiences with other measurement agendas to place in perspective the challenge of quantifying systemic risk, or more generally, of providing empirical constructs that can enhance our understanding of linkages between financial markets and the macroeconomy.
Recursive utility models of the type introduced by Kreps and Porteus (1978) are used extensively in applied research in macroeconomics and asset pricing in environments with uncertainty. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. Such preferences feature investor concerns about the intertemporal composition of risk. In this paper we study infinite horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and the solution to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events in stochastic consumption growth and preferences induced by recursive utility.
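The Perron-Frobenius connection can be made concrete on a finite-state Markov chain. The sketch below is an illustrative assumption, not the paper's setup: it computes the principal eigenvalue and positive eigenfunction of a growth-adjusted transition matrix.

```python
import numpy as np

# Illustrative two-state chain: transition probabilities, consumption
# growth rates by next-period state, and a curvature parameter gamma.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = np.array([0.02, -0.01])
gamma = 5.0

# Positive matrix M[i, j] = P[i, j] * exp((1 - gamma) * g[j]).
M = P * np.exp((1.0 - gamma) * g)[None, :]

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)          # Perron-Frobenius (dominant) root
eta = np.log(eigvals[k].real)        # principal eigenvalue (log scale)
e = np.abs(eigvecs[:, k].real)       # associated positive eigenfunction
print(eta, e / e.max())
```

The dominant root governs the long-horizon growth rate of expected utility-adjusted payoffs, which is the sense in which the eigenvalue problem summarizes tail behavior of consumption growth.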
I explore methods that characterize model-based valuation of stochastically growing cash flows. Following previous research, I use stochastic discount factors as a convenient device to depict asset values. I extend that literature by focusing on the impact of compounding these discount factors over alternative investment horizons. In modeling cash flows, I also incorporate stochastic growth factors. I explore dynamic value decomposition (DVD) methods that capture the concurrent compounding of stochastic growth and discount factors in determining risk-adjusted values. These methods are supported by factorizations that extract martingale components of stochastic growth and discount factors. These components reveal which ingredients of a model have long-term implications for valuation. The resulting martingales imply convenient changes in measure that are distinct from those used in mathematical finance, and they provide the foundations for analyzing model-based implications for the term structure of risk prices. As an illustration of the methods, I re-examine some recent preference-based models. I also use the martingale extraction to revisit the value implications of some benchmark models with market restrictions and heterogeneous consumers.
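Schematically, the factorization referred to above takes a multiplicative form; the notation below is a hedged paraphrase in the style of such results, not a quotation from the paper. For a stochastic discount factor process S driven by a Markov state X,

```latex
S_t \;=\; \exp(\eta t)\,\widehat{M}_t\,\frac{e(X_0)}{e(X_t)},
```

where eta is a principal eigenvalue, e is a positive eigenfunction, and the martingale component induces the change of measure that isolates the ingredients of the model with long-term consequences for valuation.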
I explore the equilibrium value implications of economic models that incorporate responses to a stochastic environment with growth. I propose dynamic valuation decompositions (DVD's) designed to distinguish components of an underlying economic model that influence values over long investment horizons from components that impact only the short run. A DVD represents the values of stochastically growing claims to consumption payoffs or cash flows using a stochastic discount process that both discounts the future and adjusts for risk. It is enabled by constructing operators indexed by the elapsed time between the trading date and the date of the future realization of the payoff. Thus formulated, methods from applied mathematics permit me to characterize valuation behavior and the term structure of risk prices in a revealing manner. I apply this approach to investigate how investor beliefs and the associated uncertainty are reflected in current-period values and risk-price elasticities.
We characterize the compensation demanded by investors in equilibrium for incremental exposure to growth-rate risk. Given an underlying Markov diffusion that governs the state variables in the economy, the economic model implies a stochastic discount factor process S. We also consider a reference growth process G that may represent the growth in the payoff of a single asset or of the macroeconomy. Both S and G are modeled conveniently as multiplicative functionals of a multidimensional Brownian motion. We consider the pricing implications of a parametrized family of growth processes G^ε, with G^0 = G, as ε is made small. This parametrization defines a direction of growth-rate risk exposure that is priced using the stochastic discount factor S. By changing the investment horizon, we trace a term structure of risk prices that shows how the valuation of risky cash flows depends on the investment horizon. Using methods of Hansen and Scheinkman (Econometrica 77:177–234, 2009), we characterize the limiting behavior of the risk prices as the investment horizon is made arbitrarily long.
We present a novel approach to depicting asset-pricing dynamics by characterizing shock exposures and prices for alternative investment horizons. We quantify the shock exposures in terms of elasticities that measure the impact of a current shock on future cash flow growth. The elasticities are designed to accommodate nonlinearities in the stochastic evolution modeled as a Markov process. Stochastic growth in the underlying macroeconomy and stochastic discounting in the representation of asset values are central ingredients in our investigation. We provide elasticity calculations in a series of examples featuring consumption externalities, recursive utility, and jump risk. (JEL: C52, E44, G12)
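A minimal simulation sketch of a shock-exposure elasticity, under an assumed AR(1) growth specification with illustrative parameters (not a model from the paper): it measures the response of log E[G_t] to a small perturbation of the date-1 shock, traced over horizons.

```python
import numpy as np

rho, b, sigma = 0.8, 0.05, 0.01   # state persistence, loading, shock scale

def log_expected_growth(t, w1):
    # log G_t accumulates b * x_{s-1} + sigma * w_s, with x an AR(1)
    # driven by the same shocks; simulate to stay agnostic about
    # nonlinearity, perturbing the date-1 shock by w1.
    rng = np.random.default_rng(1)        # common random numbers
    n = 100_000
    logG = np.zeros(n)
    x = np.zeros(n)
    for s in range(1, t + 1):
        w = rng.standard_normal(n)
        if s == 1:
            w = w + w1
        logG += b * x + sigma * w
        x = rho * x + w
    return np.log(np.mean(np.exp(logG)))

for t in (1, 5, 25):
    eps = 1e-3
    elasticity = (log_expected_growth(t, eps) - log_expected_growth(t, 0.0)) / eps
    print(t, elasticity)
```

Plotting the finite-difference elasticity against the horizon t traces the exposure counterpart of an impulse response function; a price elasticity would repeat the calculation with the cash flow scaled by the stochastic discount factor.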
In this entry we characterize pricing kernels or stochastic discount factors that are used to represent valuation operators in dynamic stochastic economies. A kernel is a commonly used mathematical term for an object that represents an operator. The term stochastic discount factor extends concepts from economics and finance to include adjustments for risk. As we will see, there is a tight connection between the two terms. The terms pricing kernel and stochastic discount factor are often used interchangeably. After deriving convenient representations for prices, we provide several examples of stochastic discount factors and discuss econometric methods for estimation and testing of asset pricing models that restrict the stochastic discount factors.
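As a schematic statement of the representation being discussed (standard notation, mine rather than the entry's): the date-t price P_t of a claim to a payoff X_{t+1} satisfies

```latex
P_t \;=\; E\!\left[\left(\frac{S_{t+1}}{S_t}\right) X_{t+1} \;\middle|\; \mathcal{F}_t\right],
```

so the ratio S_{t+1}/S_t serves as a one-period stochastic discount factor that discounts for time and adjusts for risk; iterating the relation builds the multi-period valuation operators.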
We build a family of valuation operators indexed by the increment of time between the payoff date and the current period value. These operators are necessarily related by what is known as a semigroup.
· "Consumption Strikes Back?: Measuring Long Run Risk," with J.C. Heaton and N. Li, Journal of Political Economy, Volume 116, Issue 2, April 2008, 260-302. View at JSTOR
We study structural models of stochastic discount factors and explore alternative methods of estimating such models using data on macroeconomic risk and asset returns. Particular attention is devoted to recursive utility models in which risk aversion can be modified without altering intertemporal substitution. We characterize the impact of changing the intertemporal substitution and risk aversion parameters on equilibrium short-run and long-run risk prices and on equilibrium wealth.
· "Intertemporal Substitution and Risk Aversion," with J. Heaton, J. Lee, N. Roussanov, Handbook of Econometrics, Volume 6, Part 1, 2007, pp. 3967-4056. View at ScienceDirect
We characterize and measure a long-term risk-return trade-off for the valuation of cash flows exposed to fluctuations in macroeconomic growth. This trade-off features risk prices of cash flows that are realized far into the future but continue to be reflected in asset values. We apply this analysis to claims on aggregate cash flows and to cash flows from value and growth portfolios by imputing values to the long-run dynamic responses of cash flows to macroeconomic shocks. We explore the sensitivity of our results to features of the economic valuation model and of the model cash flow dynamics.
· "Intangible Risk?" with J.C. Heaton and N. Li, Measuring Capital in the New Economy (NBER Books), Corrado, Haltiwanger and Sichel, eds., 2005, 111-152.
This early paper investigates a method for extracting nonlinear principal components. These components maximize variation subject to smoothness and orthogonality constraints; but we allow for a general class of densities and constraints, including densities without compact support and even densities with algebraic tails. We also characterize the limiting behavior of the associated eigenvalues, the objects used to quantify the incremental importance of the principal components. A major portion of this paper was published in the Annals of Statistics under the title "Nonlinear Principal Components and Long Run Implications of Multivariate Diffusions."
Nonlinearities in the drift and diffusion coefficients influence temporal dependence in diffusion models. We study this link using three measures of temporal dependence: ρ-mixing, β-mixing and α-mixing. Stationary diffusions that are ρ-mixing have mixing coefficients that decay exponentially to zero. When they fail to be ρ-mixing, they are still β-mixing and α-mixing; but coefficient decay is slower than exponential. For such processes we find transformations of the Markov states that have finite variances but infinite spectral densities at frequency zero. The resulting spectral densities behave like those of stochastic processes with long memory. Finally, we show how state-dependent, Poisson sampling alters the temporal dependence.
This chapter surveys relevant tools, based on operator methods, for describing the evolution in time of continuous-time stochastic processes over different time horizons. Applications include modeling the long-run stationary distribution of the process, modeling the short- or intermediate-run transition dynamics of the process, estimating parametric models via maximum likelihood, implications of the spectral decomposition of the generator, and various observable implications and tests of the characteristics of the process.
We investigate a method for extracting nonlinear principal components (NPCs). These NPCs maximize variation subject to smoothness and orthogonality constraints; but we allow for a general class of constraints and multivariate probability densities, including densities without compact support and even densities with algebraic tails. We provide primitive sufficient conditions for the existence of these NPCs. By exploiting the theory of continuous-time, reversible Markov diffusion processes, we give a different interpretation of these NPCs and the smoothness constraints. When the diffusion matrix is used to enforce smoothness, the NPCs maximize long-run variation relative to the overall variation subject to orthogonality constraints. Moreover, the NPCs behave as scalar autoregressions with heteroskedastic innovations; this supports semiparametric identification and estimation of a multivariate reversible diffusion process and tests of the overidentifying restrictions implied by such a process from low-frequency data. We also explore implications for stationary, possibly nonreversible diffusion processes. Finally, we suggest a sieve method to estimate the NPCs from discretely-sampled data.
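A minimal finite-dimensional sketch of the NPC extraction, assuming a toy Gaussian density and a small polynomial basis (my illustrative choices, not the paper's): maximizing variation subject to a quadratic smoothness form reduces to a generalized eigenvalue problem.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))   # draws from an assumed density

def basis(X):
    # quadratic polynomial basis, roughly mean-zero under the density
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([x, y, x * y, x**2 - 1, y**2 - 1])

def basis_grad(X):
    # gradients of each basis function with respect to (x, y)
    x, y = X[:, 0], X[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    gx = np.column_stack([ones, zeros, y, 2 * x, zeros])
    gy = np.column_stack([zeros, ones, x, zeros, 2 * y])
    return gx, gy

B = basis(X)
gx, gy = basis_grad(X)
V = B.T @ B / len(X)                      # variation (covariance) form
S = (gx.T @ gx + gy.T @ gy) / len(X)      # smoothness (Dirichlet) form

# NPC directions maximize variation per unit of smoothness penalty:
# solve V v = lambda S v and take the leading eigenvector.
vals, vecs = eigh(V, S)
print(vals[-3:], vecs[:, -1])
```

Lower-order NPCs follow from the remaining eigenvectors, which are automatically orthogonal in the metric defined by the smoothness form.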
We create an analytical structure that reveals the long-run risk-return relationship for nonlinear continuous-time Markov environments. We do so by studying an eigenvalue problem associated with a positive eigenfunction for a conveniently chosen family of valuation operators. The members of this family are indexed by the elapsed time between payoff and valuation dates, and they are necessarily related via a mathematical structure called a semigroup. We represent the semigroup using a positive process with three components: an exponential term constructed from the eigenvalue, a martingale, and a transient eigenfunction term. The eigenvalue encodes the risk adjustment, the martingale alters the probability measure to capture long-run approximation, and the eigenfunction gives the long-run dependence on the Markov state. We discuss sufficient conditions for the existence and uniqueness of the relevant eigenvalue and eigenfunction. By showing how changes in the stochastic growth components of cash flows induce changes in the corresponding eigenvalues and eigenfunctions, we reveal a long-run risk-return trade-off.
This paper shows how to nonparametrically identify scalar stationary diffusions from discrete-time data. The local evolution of the diffusion is characterized by a drift and a diffusion coefficient, along with the specification of boundary behavior. We recover this local evolution from two objects that can be inferred directly from discrete-time data: the stationary density and a conveniently chosen eigenvalue-eigenfunction pair of the conditional expectation operator over a unit interval of time. This construction also lends itself to a spectral characterization of the over-identifying restrictions implied by a scalar diffusion model of a discrete-time Markov process.
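A hedged sketch of the construction (a standard form for such recovery results; the notation is mine): let q be the stationary density on (l, r), and let (λ, φ) satisfy E[φ(X_{t+1}) | X_t = x] = e^{-λ} φ(x), so that the generator 𝒜 = μ d/dx + (σ²/2) d²/dx² obeys 𝒜φ = -λφ. Writing the generator in divergence form and integrating from the boundary gives

```latex
\left(\sigma^2 q\,\phi'\right)'(x) = -2\lambda\, q(x)\,\phi(x)
\;\;\Longrightarrow\;\;
\sigma^2(x) = \frac{-2\lambda \int_l^x \phi(u)\,q(u)\,du}{q(x)\,\phi'(x)},
\qquad
\mu(x) = \frac{\left(\sigma^2 q\right)'(x)}{2\,q(x)},
```

so the density and one eigenpair pin down both local coefficients.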
In this article we characterize and estimate the process for short-term interest rates using federal funds interest rate data. We presume we are observing a discrete-time sample of a stationary scalar diffusion. We concentrate on a class of models in which the local volatility elasticity is constant and the drift has a flexible specification. To accommodate missing observations and to break the link between "economic time" and calendar time, we model the sampling scheme as an increasing process that is not directly observed. We propose and implement two methods for estimation. We find evidence for a volatility elasticity between one-and-one-half and two. When interest rates are high, local mean reversion is small and the mechanism that induces stationarity is the increased volatility of the diffusion process.
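A minimal sketch of the model class just described, under illustrative parameter values (not estimates from the paper): a scalar diffusion with constant volatility elasticity γ and a flexible drift, simulated with an Euler scheme.

```python
import numpy as np

# dr = mu(r) dt + sigma * r**gamma dW, Euler discretization.
rng = np.random.default_rng(0)
sigma, gamma, dt, T = 0.6, 1.5, 1 / 250, 10_000

def mu(r):
    # flexible drift with weak pull toward 5% (illustrative choice)
    return 0.5 * (0.05 - r) * r

r = np.empty(T)
r[0] = 0.05
for t in range(1, T):
    r[t] = r[t-1] + mu(r[t-1]) * dt \
         + sigma * r[t-1]**gamma * np.sqrt(dt) * rng.standard_normal()
    r[t] = max(r[t], 1e-6)   # keep the simulated state positive

print(r.mean(), r.std())
```

With γ > 1 the volatility rises sharply with the level of rates, which is the channel the abstract describes: at high rates, volatility rather than drift pulls the process back toward the center of its stationary distribution.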
We develop and apply bootstrap methods for diffusion models when fitted to the long run as characterized by the stationary distribution of the data. To obtain bootstrap refinements to statistical inference, we simulate candidate diffusion processes. We use these bootstrap methods to assess measurements of local mean reversion or “pull” to the center of the distribution for short-term interest rates. We also use them to evaluate the fit of the model to the empirical density.
Continuous-time Markov processes can be characterized conveniently by their infinitesimal generators. For such processes there exist forward and reverse-time generators. We show how to use those generators to construct moment conditions implied by stationary Markov processes. Generalized method of moments estimators and tests can be constructed using these moment conditions. The resulting econometric methods are designed to be applied to discrete-time data obtained by sampling continuous-time Markov processes.
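A minimal sketch of a forward-generator moment condition, checked on an Ornstein-Uhlenbeck process with illustrative parameters (my choice of example, not the paper's): stationarity implies E[𝒜φ(X)] = 0 for suitable test functions φ, where 𝒜φ = μφ' + (σ²/2)φ''.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, xbar, sigma = 1.0, 0.0, 1.0
# exact stationary distribution of the OU process: N(xbar, sigma^2/(2 kappa))
x = rng.normal(xbar, sigma / np.sqrt(2 * kappa), size=100_000)

def generator(phi1, phi2, x):
    # A phi = mu(x) phi'(x) + 0.5 sigma^2 phi''(x), with mu(x) = -kappa (x - xbar)
    mu = -kappa * (x - xbar)
    return mu * phi1(x) + 0.5 * sigma**2 * phi2(x)

# test functions phi(x) = x^2 and phi(x) = x^3; both sample moments are near zero
print(np.mean(generator(lambda v: 2 * v, lambda v: 2 + 0 * v, x)))
print(np.mean(generator(lambda v: 3 * v**2, lambda v: 6 * v, x)))
```

Stacking such moments over a family of test functions yields the GMM objective; reverse-time generators supply additional conditions with power against non-reversible alternatives.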
Description: These are unpublished proofs for my paper "Large Sample Properties of Generalized Method of Moments Estimators", Econometrica, Volume 50, Number 4, July 1982, pp. 1029-1054. View at JSTOR
We develop methods for testing the hypothesis that an econometric model is underidentified and for inferring the nature of the failed identification. By adopting a generalized method-of-moments perspective, we feature directly the structural relations, and we allow for nonlinearity in the econometric specification. We establish the link between a test for overidentification and our proposed test for underidentification. If, after attempting to replicate the structural relation, we find substantial evidence against the overidentifying restrictions of an augmented model, this is evidence against underidentification of the original model.
Description: GMM entry for The New Palgrave Dictionary of Economics, Second Edition, 2008.
It gives a perspective on the time series formulation and application of Generalized Method of Moments estimation. This file corresponds to the original paper that appeared later in the Encyclopedia, somewhat modified and under the new title 'Method of Moments.' The full reference for the published version is: International Encyclopedia of the Social and Behavioral Sciences, N. J. Smelser and P. B. Baltes (editors), Pergamon: Oxford, 2000.