Monday, October 29, 2012

Hurricanes and modeling

[Image: hurricane forecast graphic. Source: The Weather Channel]
I recently had a conversation with a businessperson who got me thinking about models. We first discussed the election; he predicted a Romney win based on some things he's seen locally (he lives in an important swing state) and his own instincts. He sees a lot of Romney signs in neighborhoods, and most of the people he works with like Romney. I expressed skepticism, noting that most of the models I follow are predicting an Obama win, including in that particular state. Knowing of economists' fondness for models, he said something like, "yeah well I think you place too much trust in models." This also reflected a broader attitude that we often see in politically vocal businesspeople: they know what they need to know about the world because they have deep, hands-on experience with the nuts and bolts of how it works. How could I possibly know more about politics in his state than he does? After all, he lives there.

Later in the conversation, we were talking about the ongoing energy revolution in the United States. He has some expertise in both the chemical properties of several energy resources and the industrial uses to which they can be put. He made a suggestion about optimal trade policy with regard to energy exports; the suggestion itself is less important than his implicit definition of optimal policy. For him, an optimal policy is one that benefits the specific industries with which he is familiar. It was something more general than, but related to, the old sayings about how what's good for GM is good for the country, or what's good for the manufacturing industry is good for America, or whatever. This kind of producer-biased sentiment tends to drive the bulk of our national economic policy.

This conversation occurred against the backdrop of the approaching Hurricane Sandy. The hurricane has more or less followed the path predicted by most of the models employed by NOAA and other modelers. I wonder how many people would reject the models' predictions, preferring instead to rely on their own gut instinct and personal experience with wind and rain to forecast the path of the storm. Most of us would find that silly, even though we know that the hurricane models have been wrong before (or have had to make serious forecast revisions at the last minute). Most of us don't think having lived through lots of wind and rain qualifies a person to make hurricane predictions.

The huge benefit of models is that they allow us to summarize complexity in a useful way. They won't always be right. But if they're built in plausible ways, they can tell us a lot of things that we can't learn from simple experience. In the case of Sandy, the big payoff of the models was that they told us the storm would make a sharp left turn, hit the coast of the mid-Atlantic/Northeast, and run into independent weather patterns moving east across the continent--creating a sort of "perfect storm" that is more dangerous than the basic classification of the hurricane itself would suggest. The model doesn't do anything to reduce the strength of the storm, but its predictions will probably save millions of dollars and hundreds of lives.

Of course, weather modeling is very advanced compared with other forms of predictive modeling. But I think it's still useful for showing the benefits of modeling in complex systems generally. Electoral outcomes are pretty complex--the Electoral College determines the outcome of the presidential race, and it depends on a lot of state-by-state factors. A good model incorporates not only economic fundamentals and polls but also the mechanisms through which those factors have driven outcomes in the past. I told my friend explicitly, "the models aren't always right, but they incorporate far more relevant information than your gut instinct." In fact, most of the information he uses to make his prediction is probably also employed in poll-based models, along with a lot of other information he isn't using.
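To make that contrast concrete, here is a minimal sketch of how a poll-based forecast aggregates state-level information into an overall prediction. This is not Nate Silver's model or anyone else's; the states, margins, electoral vote counts, and error assumption below are all invented for illustration.

```python
import random

# Toy poll aggregation, purely for illustration: three hypothetical states,
# made-up poll margins (candidate A minus candidate B, in percentage points),
# and made-up electoral vote counts. Not any real forecaster's model.
states = {
    "State 1": {"ev": 18, "margin": 2.0},
    "State 2": {"ev": 29, "margin": 0.5},
    "State 3": {"ev": 13, "margin": 1.0},
}
POLL_ERROR_SD = 3.0   # assumed standard deviation of state polling error
N_SIMS = 10_000

def simulate_once():
    """Electoral votes won by candidate A in one simulated draw of polling error."""
    ev = 0
    for info in states.values():
        realized_margin = info["margin"] + random.gauss(0, POLL_ERROR_SD)
        if realized_margin > 0:
            ev += info["ev"]
    return ev

# Share of simulations in which candidate A wins a majority of these electoral votes.
total_ev = sum(s["ev"] for s in states.values())
a_wins = sum(simulate_once() > total_ev / 2 for _ in range(N_SIMS))
print(f"Candidate A win probability (toy): {a_wins / N_SIMS:.2f}")
```

The point of the sketch is only that the model systematically combines every state's polls, plus an explicit allowance for polling error, rather than extrapolating from the yard signs on one commute.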

The other conversation--the one about natural gas--also reminded me of the usefulness of models. To economists, optimal policy is rarely defined in terms of benefits to producers. The purpose of production is to consume. We usually define optimal policy in terms of a welfare measure that captures something like consumer surplus or happiness. It's hard--really hard--to determine the effects of a producer-side policy on that kind of welfare criterion. It's easy enough to see how a policy that helps a specific industry benefits firm owners in that industry. It's even reasonably easy to see how it benefits employees in that industry. But it's hard to see how it affects people who have nothing to do with that industry but nevertheless fall under the stewardship of the relevant policymakers. What will it do to wages? Returns to certain kinds of capital? The price of complements and substitutes to the outputs of the targeted industries? How will it affect markets for other energy resources? Conceptually, this isn't so different from the difficulty of knowing what the effects of jet streams or continental weather patterns will be on Sandy. It's hard to keep track of so many considerations without a model.
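As a toy illustration of the gap between "good for the industry" and "good on a welfare criterion," here is a small accounting exercise with made-up linear demand and supply curves and a hypothetical per-unit producer subsidy. None of the numbers correspond to any real energy market or policy proposal.

```python
# Toy welfare accounting under linear demand P = a - b*Q and supply P = c + d*Q.
# All parameters are invented for illustration.
a, b = 100.0, 1.0   # demand intercept and slope
c, d = 20.0, 1.0    # supply intercept and slope

def surplus(subsidy):
    """Consumer surplus, producer surplus, and fiscal cost given a per-unit producer subsidy."""
    q = (a - c + subsidy) / (b + d)      # equilibrium quantity
    p_consumer = a - b * q               # price consumers pay
    p_producer = p_consumer + subsidy    # price producers receive
    cs = 0.5 * (a - p_consumer) * q      # area under demand, above consumer price
    ps = 0.5 * (p_producer - c) * q      # area above supply, below producer price
    cost = subsidy * q                   # taxpayer cost of the subsidy
    return cs, ps, cost

for s in (0.0, 10.0):
    cs, ps, cost = surplus(s)
    print(f"subsidy={s:5.1f}  CS={cs:7.1f}  PS={ps:7.1f}  total welfare={cs + ps - cost:7.1f}")
# In this toy example, producer surplus rises with the subsidy but total
# welfare falls: a policy that looks "good for the industry" need not be
# the welfare-optimal one.
```

The exercise is deliberately crude, but it shows why a model forces you to account for consumers and taxpayers, not just the industry you happen to know well.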

Models help us get outside our own experience. They help us see that opinion may be more diverse than the handful of yard signs we see on our commute might suggest. They help us see that policies have effects on more than just the specific fields in which we work. They help us see that hurricanes might not behave how our intuition says they behave.

Models help us identify some of the variety of mechanisms linking complex systems. Models are best used with good judgment, but in many cases judgment alone is unlikely to adequately account for the wide range of things that matter. I'm not suggesting that any model is better than no model. I'm not saying Nate Silver is as good at predicting elections as NOAA is at predicting hurricanes.* I'm certainly not saying that economic modeling is an exact science. I'm simply saying that it's foolish to write off the implications of tested models whenever they contradict our own opinions.


*In his new book, Silver devotes an entire chapter to weather forecasting. He notes that hurricane predictions "have become 350 percent more accurate in the past 25 years" (page 141) and that "the science of weather forecasting is a success story" (page 127).