An excerpt from my forthcoming book, "The Signal and the Noise," was published this week in The New York Times Magazine. You can find an online version of the excerpt here.
The book takes a comprehensive look at prediction across 13 fields, ranging from sports betting to earthquake forecasting. Since 2009, I have been traveling the country to meet with experts and practitioners in each of these fields in an effort to uncover common bonds. The book asks an ambitious question: What makes predictions succeed or fail?
It was enlightening to speak with men and women at the forefront of science and technology. But I found that despite their best efforts, their predictions have often gone poorly:
[If] prediction is the truest way to put our information to the test, we have not scored well. In November 2007, economists in the Survey of Professional Forecasters - examining some 45,000 economic-data series - foresaw less than a 1-in-500 chance of an economic meltdown as severe as the one that would begin one month later. Attempts to predict earthquakes have continued to envisage disasters that never happened and failed to prepare us for those, like the 2011 disaster in Japan, that did.
The discipline of meteorology is an exception. Weather forecasts are much better than they were 10 or 20 years ago.
A quarter-century ago, for instance, the average error in a hurricane forecast, made three days in advance of landfall, was about 350 miles. That meant that if you had a hurricane sitting in the Gulf of Mexico, it might just as easily hit Houston or Tallahassee, Fla. - essentially the entire Gulf Coast was in play, making evacuation and planning all but impossible.
Today, although there are storms like Hurricane Isaac that are tricky for forecasters, the average miss is much less: only about 100 miles.
The article explores how weather forecasters have managed to achieve this, and what we might learn from them.
It is not a simple story, exactly. The book, like this blog, is detail-oriented. In fact, one of the arguments that it advances is that we are sometimes too willing to take elegantly written narratives as substitutes for a more uncertain truth.
But there is a healthy balance between computer modeling and human judgment in weather forecasting that is lacking in many other disciplines. Usually, we either take the output from poorly designed models too credulously (as in the case of the models that asserted mortgage-backed securities were incredibly safe investments), or we value our own subjective judgment much too highly (as in the case of baseball in the pre-'Moneyball' era).
Weather forecasters, however, have an unusually good sense of the strengths and weaknesses of these approaches:

But there are literally countless other areas in which weather models fail in more subtle ways and rely on human correction. Perhaps the computer tends to be too conservative on forecasting nighttime rainfalls in Seattle when there's a low-pressure system in Puget Sound. Perhaps it doesn't know that the fog in Acadia National Park in Maine will clear up by sunrise if the wind is blowing in one direction but can linger until midmorning if it's coming from another. These are the sorts of distinctions that forecasters glean over time as they learn to work around potential flaws in the computer's forecasting model, in the way that a skilled pool player can adjust to the dead spots on the table at his local bar.
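To make that idea a little more concrete, here is a minimal sketch of the statistical version of what a forecaster does when correcting a systematic model bias. This is my own illustration, not anything from the book or from an actual forecasting office, and all of the numbers and condition names in it are hypothetical:

```python
# Illustrative sketch: if a model's rainfall forecasts have run consistently
# low under a particular condition, fold the historical average error back
# into new forecasts. All data below are made up for the example.

def average_bias(forecasts, observations):
    """Mean of (observed - forecast) over past cases: the model's systematic error."""
    errors = [obs - fc for fc, obs in zip(forecasts, observations)]
    return sum(errors) / len(errors)

# Hypothetical history: model rainfall forecasts (inches) vs. what actually
# fell on past nights with a low-pressure system over Puget Sound.
past_forecasts = [0.10, 0.20, 0.15, 0.05, 0.25]
past_observed = [0.30, 0.35, 0.30, 0.20, 0.45]

bias = average_bias(past_forecasts, past_observed)  # model runs about 0.17" low

# Tonight's raw model output, nudged by the learned bias -- the statistical
# analog of a forecaster's "the computer is too conservative here" judgment.
raw_forecast = 0.12
adjusted_forecast = raw_forecast + bias
print(f"raw: {raw_forecast:.2f} in, adjusted: {adjusted_forecast:.2f} in")
```

Real operational post-processing is more elaborate than a single average (the Model Output Statistics approach, for instance, regresses past model output against observations across many predictors), but the underlying principle is the same: learn the model's habits from its track record and adjust accordingly.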
I hope that you will consider reading the article and the book.