## Working Papers

**Dynamic Information Design with Diminishing Sensitivity Over News** (with Jetlir Duraj)

[abstract] [download pdf] [arXiv]

A benevolent sender communicates non-instrumental information over time to a Bayesian receiver who experiences gain-loss utility over changes in beliefs (“news utility”). We show how to inductively compute the optimal dynamic information structure for arbitrary news-utility functions. With diminishing sensitivity over the magnitude of news, unlike in piecewise-linear news-utility models, one-shot resolution of uncertainty is strictly suboptimal under commonly used functional forms. We identify additional conditions that imply the sender optimally releases good news in small pieces but bad news in one clump. By contrast, information structures that deliver bad news gradually are never optimal. When the sender lacks commitment power, good-news messages confront a credibility problem given the receiver’s diminishing sensitivity. Without loss aversion, the babbling equilibrium is essentially unique. More loss-averse receivers may enjoy higher equilibrium news-utility, contrary to the commitment case.

**Mislearning from Censored Data: The Gambler’s Fallacy in Optimal-Stopping Problems**

[abstract] [download pdf] [online appendix] [arXiv]

I study endogenous learning dynamics for people who expect systematic reversals from random sequences (the “gambler’s fallacy”). Biased agents face an optimal-stopping problem, such as managers conducting sequential interviews. They are uncertain about the underlying distribution (e.g., the talent distribution in the labor pool) and must learn its parameters from their predecessors’ experiences. Agents stop when early draws are deemed “good enough,” so predecessors’ histories contain negative streaks but not positive streaks. Since biased agents understate the likelihood of consecutive below-average draws, society converges to over-pessimistic beliefs about the distribution’s mean. When early agents lower their acceptance thresholds out of pessimism, later agents become more surprised by the lack of positive reversals in their predecessors’ histories, leading to still more pessimistic inferences and lower acceptance thresholds, a positive-feedback cycle. Agents who are additionally uncertain about the distribution’s variance come to believe in fictitious variation (exaggerated variance), to an extent that depends on the severity of the data censoring.
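A stylized sketch of the censoring mechanism (not the paper’s model; the normal distribution and zero threshold are illustrative assumptions): each history ends at the first “good enough” draw, so pooled histories record streaks of below-average draws but can never record streaks of above-average draws.

```python
import random

random.seed(0)

def run_agent(threshold=0.0, mu=0.0, sigma=1.0):
    """Draw i.i.d. normal values, stopping at the first draw >= threshold."""
    history = []
    while True:
        x = random.gauss(mu, sigma)
        history.append(x)
        if x >= threshold:
            return history

histories = [run_agent() for _ in range(10000)]

# A history with two consecutive above-threshold draws is impossible:
# the first above-threshold draw always ends the history.
bad_streaks = sum(
    any(a < 0 and b < 0 for a, b in zip(h, h[1:])) for h in histories
)
good_streaks = sum(
    any(a >= 0 and b >= 0 for a, b in zip(h, h[1:])) for h in histories
)
print(bad_streaks > 0)    # True: negative streaks survive the censoring
print(good_streaks == 0)  # True: stopping truncates every positive streak
```

An observer who understates the likelihood of consecutive below-average draws reads this asymmetric record as evidence of a low mean.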

**Network Structure and Naive Sequential Learning** (with Krishna Dasaratha)

Conditionally accepted at *Theoretical Economics*

[abstract] [download pdf] [slides]
[arXiv]

We study a sequential-learning model featuring a network of naive agents with Gaussian information structures. Agents apply a heuristic rule to aggregate predecessors’ actions. They weigh these actions according to the strengths of their social connections to different predecessors. We show this rule arises endogenously when agents wrongly believe others act solely on private information and thus neglect redundancies among observations. We provide a simple linear formula expressing agents’ actions in terms of network paths and use this formula to characterize the set of networks where naive agents eventually learn correctly. This characterization implies that, on all networks where later agents observe more than one neighbor, there exist disproportionately influential early agents who can cause herding on incorrect actions. Going beyond existing social-learning results, we compute the probability of such mislearning exactly. This allows us to compare likelihoods of incorrect herding, and hence expected welfare losses, across network structures. The probability of mislearning increases when link densities are higher and when networks are more integrated. In partially segregated networks, divergent early signals can lead to persistent disagreement between groups.
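A toy illustration of early agents’ disproportionate influence under redundancy neglect (a sketch in the spirit of the mechanism, not the paper’s path formula): on a complete observation network, suppose each naive agent averages her own signal with all predecessors’ actions as if those actions were independent signals. The weight her action places on the first agent’s signal then never decays, whereas under correct updating it would shrink like \(1/n\).

```python
# Weight that each naive agent's action places on agent 1's private signal,
# when agent n averages her own signal and all n - 1 predecessors' actions,
# treating the actions as conditionally independent signals.
n_agents = 12
weights_on_s1 = [1.0]  # agent 1's action is just her own signal
for n in range(2, n_agents + 1):
    # own signal contributes zero weight on s1 for n > 1
    weights_on_s1.append(sum(weights_on_s1) / n)
print([round(w, 3) for w in weights_on_s1])
# [1.0, 0.5, 0.5, ...]: the first signal's influence never washes out
```

A bad realization of the first signal therefore keeps distorting all later actions, which is the flavor of incorrect herding the characterization formalizes.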

**An Experiment on Network Density and Sequential Learning** (with Krishna Dasaratha)

[abstract] [download pdf] [pre-registration] [arXiv]

We conduct a sequential social learning experiment where subjects guess a hidden state after observing private signals and the guesses of a subset of their predecessors. A network determines the observable predecessors, and we compare subjects’ accuracy on sparse and dense networks. Later agents’ accuracy gains from social learning are twice as large in the sparse treatment compared to the dense treatment. Models of naive inference where agents ignore correlation between observations predict this comparative static in network density, while the result is difficult to reconcile with rational-learning models.

**Player-Compatible Equilibrium** (with Drew Fudenberg)

[abstract] [download pdf] [arXiv]

*Player-Compatible Equilibrium* (PCE) imposes cross-player restrictions on the magnitudes of the players’ “trembles” onto different strategies. These restrictions capture the idea that trembles correspond to deliberate experiments by agents who are unsure of the prevailing distribution of play. PCE selects intuitive equilibria in a number of examples where trembling-hand perfect equilibrium (Selten, 1975) and proper equilibrium (Myerson, 1978) have no bite. We show that rational learning and some near-optimal heuristics imply our compatibility restrictions in a steady-state setting.

**Payoff Information and Learning in Signaling Games** (with Drew Fudenberg)

[abstract] [download pdf] [arXiv]

We show how to add the assumption that players know their opponents' payoff functions to the theory of learning in games, and use it to derive restrictions on signaling-game play in the spirit of divine equilibrium. In our learning model, agents are born into player roles and play the game against a random opponent each period. Inexperienced agents are uncertain about the prevailing distribution of opponents' play, and update their beliefs based on their observations. Long-lived and patient senders experiment with every signal that they think might yield an improvement over their myopically best play. We show that divine equilibrium (Banks and Sobel, 1987) is nested between “rationality-compatible” equilibrium, which corresponds to an upper bound on the set of possible learning outcomes, and “uniform rationality-compatible” equilibrium, which provides a lower bound.

## Published Papers

**Learning and Type Compatibility in Signaling Games** (with Drew Fudenberg)

*Econometrica* 86(4):1215-1255, July 2018

[abstract] [download pdf] [online appendix] [publisher’s DOI] [arXiv]

Which equilibria will arise in signaling games depends on how the receiver interprets deviations from the path of play. We develop a micro-foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises through non-equilibrium learning by populations of patient and long-lived senders and receivers. In our model, young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals as experiments to learn about the behavior of the population of receivers. Differences in the payoff functions of the types of senders generate different incentives for these experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint on the receiver’s off-path beliefs based on “type compatibility” and hence a learning-based equilibrium selection.

**Bayesian Posteriors for Arbitrarily Rare Events** (with Drew Fudenberg and Lorens Imhof)

*Proceedings of the National Academy of Sciences* 114(19):4925-4929, May 2017

[abstract] [download pdf] [publisher’s DOI] [arXiv]

We study how much data a Bayesian observer needs to correctly infer the relative likelihoods of two events when both events are arbitrarily rare. Each period, either a blue die or a red die is tossed. The two dice land on side 1 with unknown probabilities \(p_1\) and \(q_1\), which can be arbitrarily low. Given a data-generating process where \(p_1 \ge c q_1\), we are interested in how much data is required to guarantee that with high probability the observer's Bayesian posterior mean for \(p_1\) exceeds \((1-\delta)c\) times that for \(q_1\). If the prior densities for the two dice are positive on the interior of the parameter space and behave like power functions at the boundary, then for every \(\epsilon >0\), there exists a finite \(N\) so that the observer obtains such an inference after \(n\) periods with probability at least \(1-\epsilon\) whenever \(n p_1 \ge N\). The condition on \(n\) and \(p_1\) is the best possible. The result can fail if one of the prior densities converges to zero exponentially fast at the boundary.
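A back-of-the-envelope simulation of the dice setup (the parameter values are illustrative, and the uniform priors are one convenient choice satisfying the power-function boundary condition, not taken from the paper): with Beta(1,1) priors, the posterior mean after \(n\) tosses with \(k\) ones is \((k+1)/(n+2)\), so the ratio of posterior means can be computed directly.

```python
import random

random.seed(1)

p1, q1 = 0.004, 0.001   # rare events with true ratio p1 / q1 = 4
n = 20000               # chosen so that n * p1 = 80 is comfortably large

blue_ones = sum(random.random() < p1 for _ in range(n))
red_ones = sum(random.random() < q1 for _ in range(n))

# Posterior means under uniform (Beta(1,1)) priors.
post_p1 = (blue_ones + 1) / (n + 2)
post_q1 = (red_ones + 1) / (n + 2)
print(post_p1 / post_q1)  # typically near the true ratio of 4
```

With \(n p_1\) this large, the posterior-mean ratio concentrates near \(c\); shrinking \(n\) so that \(n p_1\) is small makes the inference unreliable, consistent with the condition on \(n p_1\) being the binding one.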

**Differentially Private and Incentive Compatible Recommendation System for the Adoption of Network Goods** (with Xiaosheng Mu)

*Proceedings of the Fifteenth ACM Conference on Economics and Computation* (EC’14):949-966, June 2014

[abstract] [download pdf] [slides]
[publisher’s DOI]

We study the problem of designing a recommendation system for network goods under the constraint of differential privacy. Agents living on a graph face the introduction of a new good and undergo two stages of adoption. The first stage consists of private, random adoptions. In the second stage, remaining non-adopters decide whether to adopt with the help of a recommendation system \(\mathcal{A}\). The good exhibits network complementarity, making it socially desirable for \(\mathcal{A}\) to reveal the adoption status of neighboring agents. The designer’s problem, however, is to find the socially optimal \(\mathcal{A}\) that preserves privacy. We derive feasibility conditions for this problem and characterize the optimal solution.