Monday, April 22, 2013

Reinhart/Rogoff and policy outcomes: Let's be careful about drawing causal inferences

By now everyone knows about the Great Reinhart and Rogoff Implosion of 2013. If you don't, read this. What has most amused me is that many journalists seem to think that the R&R 90% threshold had a nontrivial, causal impact on US fiscal policy. Consider the following headlines:
One journalist, Tim Fernholz, set out to answer the question, "How influential was the Rogoff-Reinhart study warning that high debt kills growth?" (here). But he does not answer that question, at least if "influence" is meant to refer to policy outcomes. The question he does answer is, "Are there examples of policymakers citing R&R in support of fiscal restraint?" The answer to that question, as Fernholz shows, is "yes." Fair enough.

If we operate from the assumption that policymakers are highly amenable to evidence, and if we further assume that a descriptive (not causal) result from a single empirical study is enough to drive a policymaker's decisions, then maybe a story about policymakers discussing R&R is enough to show that it influenced actual policies. But I think those are bad assumptions. Here's my Twitter conversation with Fernholz:


First off, he's a nice guy--some journalists don't respond to critical Twitter questions from nobodies. As I understand it, Fernholz believes that R&R90% affected the opinions of marginal, outcome-influencing lawmakers enough to change their votes. In other words, there is a counterfactual world in which R&R had done the work acceptably and fiscal policy outcomes in the United States are more accommodative.

Call me skeptical. Chalk it up to my cynicism about the degree to which politicians care about evidence. Even so, if they cared that much about evidence, they might have asked someone whether the R&R90% result could reasonably be considered a causal point estimate. I think any responsible "wonk" would have told them they needed more data and theory before changing their minds. In any case, the fact that a few policymakers, like Paul Ryan, mentioned R&R90% in support of their policy preferences is not sufficient evidence that the result drove policy. It's just as likely--some might say more likely--that policymakers have preexisting policy preferences, which they justify by grabbing whatever research results seem to support them.

This is slightly ironic, since a reasonable criticism of R&R90%, even before we knew the result was bogus, was that it came with no causal story and no attempt to tease one out of the data. We have to be careful about touting descriptive results as causal. Let's apply that same standard to our attempts to blame Reinhart and Rogoff for whatever fiscal policy outcomes we don't like.
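For what it's worth, here is a minimal sketch of the descriptive-versus-causal point. It uses simulated data with made-up numbers, not Reinhart and Rogoff's dataset or method; it just shows that a world in which slow growth drives up debt will still produce the descriptive pattern "high-debt observations grow more slowly," even though debt has no causal effect on growth at all:

```python
# A minimal sketch, not R&R's actual data or method: all numbers are invented.
# Causation here runs only from growth to debt (slow growth -> debt piles up),
# yet a naive descriptive comparison still "shows" that high debt means low growth.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated country-year observations

growth = rng.normal(2.0, 2.0, n)                    # GDP growth, unaffected by debt
debt = 90.0 - 10.0 * growth + rng.normal(0, 20, n)  # debt/GDP driven by growth

high = debt > 90
print(f"mean growth when debt/GDP >  90%: {growth[high].mean():.2f}")
print(f"mean growth when debt/GDP <= 90%: {growth[~high].mean():.2f}")
# The gap between the two groups is purely an artifact of reverse causality,
# so reading it as "crossing 90% debt kills growth" would be a mistake.
```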

Bottom line: If this matters, I think a really useful thing for a journalist to do would be to back up assertions of a causal chain from R&R90% to US fiscal policy outcomes with a story about the lawmakers who held the marginal votes and changed them because of the research. Fernholz seems to have a head start, since he already knows of some legislators who were swayed by the R&R claims.


2 comments:

  1. Since I can understand this post, I figure I should comment. I definitely lean heavily your way, even though I don't share the same degree of cynicism towards politics as you. It just beggars belief to think that this one study was really influencing anybody in a more than de minimis way.

    Interesting stuff!

  2. Thanks Austin. I'm entirely open to the possibility that my skepticism is unwarranted. If so, I think it would make for a great story if someone wants to go write it. A few examples of marginal voters in Congress who were swayed by what they thought to be strong evidence against their priors would go a long way to convince me that this study really did have an outsize influence on policy. If it's really true that some seemingly minor spreadsheet or methodology errors made fiscal policy tighter than it would have otherwise been, I'd say that's a pretty big story!
