For months, just like a good portion of the country, I closely followed the 2012 presidential campaign. Up to the very end, many pollsters and political pundits were saying that the election was too close to call. But whenever friends and colleagues asked my opinion, I invariably told them to go look at FiveThirtyEight.com, the political polling website and blog created by Nate Silver.
Silver launched FiveThirtyEight in March of 2008. He gained national attention when he correctly predicted the results of the 2008 Democratic Party presidential primaries. His final forecasts for the 2008 presidential election predicted the winner in 49 of the 50 states, as well as the winner of every Senate race. FiveThirtyEight has been affiliated with The New York Times since August of 2010.
Political forecasting attracts a variety of people. Many pollsters seem to still be using methodologies that worked well a decade or two ago but look woefully behind the times in the emerging world of big data and advanced analytics. Most political pundits seem to inhabit a kind of magical-realist world of their own: if you say what you want to be true often enough and loudly enough, it will eventually come to pass.
Silver, on the other hand, views information-based predictions, including political forecasting, as a scientific discipline. You use all available information; you analyze and extract insights from that information using sophisticated models and algorithms; you apply human judgment to make predictions based on those insights; and you keep evaluating and adjusting your models and predictions based on how well they perform in the real world.
But equally important, you must be aware not only of the possibilities but also of the limitations and pitfalls inherent in any such predictions. In political elections, as with any highly complex, fast-changing, chaotic and unpredictable system, predictions can only be expressed in terms of probability distributions, and they are generally quite volatile as new information and unanticipated events are factored in. Moreover, predictions are based on models reflecting your views of how the future is likely to evolve. Different models applied to the same data can lead to widely different predictions.
Silver started tracking the 2012 election in June. In his initial forecast he estimated that Obama would win the election with 291.3 electoral votes, compared to 246.7 for Mitt Romney, which gave the President a 61.8 percent chance of reelection. As he explained in his blog, there are wide error margins inherent in such early forecasts, so they should be taken only as a rough guide, not unlike forecasting the path of a hurricane 5 to 7 days in advance. You can give an estimate, but the cone of uncertainty is very wide.
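It is worth pausing on how fractional figures like 291.3 electoral votes and a 61.8 percent chance can coexist. One standard way to turn per-state win probabilities into numbers like these is Monte Carlo simulation: simulate the election many times and tabulate the electoral votes. The minimal Python sketch below illustrates the idea; the state probabilities, electoral-vote counts and safe-vote total are made up for illustration, and this is not Silver's actual model.

```python
import random

# Hypothetical inputs: each swing state's electoral votes and the
# incumbent's probability of winning it. Illustrative numbers only.
SWING_STATES = {
    "Ohio": (18, 0.60),
    "Florida": (29, 0.50),
    "Virginia": (13, 0.55),
    "Colorado": (9, 0.57),
}
SAFE_VOTES = 217  # electoral votes assumed safe for the incumbent (made up)

def simulate_once():
    """Simulate one election; return the incumbent's electoral-vote total."""
    total = SAFE_VOTES
    for votes, p_win in SWING_STATES.values():
        if random.random() < p_win:
            total += votes
    return total

TRIALS = 100_000
results = [simulate_once() for _ in range(TRIALS)]
print(f"Expected electoral votes: {sum(results) / TRIALS:.1f}")
print(f"P(win, i.e. 270 votes or more): {sum(ev >= 270 for ev in results) / TRIALS:.1%}")
```

The expected electoral-vote count is an average over thousands of simulated elections, which is why it need not be a whole number, while the win probability is simply the fraction of simulations that reach 270 votes.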
Around mid-September, Silver’s estimates showed Obama with a 75 percent chance of winning the Electoral College and a 3-point lead in the national popular vote. He observed that in baseball, as in any other game, the probability that the team that is ahead will win keeps increasing the closer you get to the end of the game. The same is true of elections:
“Each day that Mr. Romney fails to make gains in the polls will count as an opportunity lost for him. And with each passing day, the model will become slightly more confident that a small lead in the polls will translate into an Electoral College victory for Mr. Obama, since the error in the polls becomes smaller as we get closer to Nov. 6.”
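The mechanism in that quote can be made concrete: if the error of the polling average is roughly normal, the probability that a fixed lead holds is a simple function of the lead divided by that error, so a shrinking error mechanically raises the model’s confidence. A toy Python calculation, with entirely made-up error values:

```python
from math import erf, sqrt

def prob_lead_holds(lead_pts: float, sigma_pts: float) -> float:
    """P(true margin > 0) when the polled lead carries Normal(0, sigma) error."""
    return 0.5 * (1 + erf(lead_pts / (sigma_pts * sqrt(2))))

# The same 3-point lead becomes more meaningful as polling error shrinks
# toward Election Day; these sigma values are illustrative, not Silver's.
for days_out, sigma in [(60, 5.0), (30, 4.0), (14, 3.0), (1, 2.0)]:
    print(f"{days_out:>3} days out (sigma = {sigma}): "
          f"P(lead holds) = {prob_lead_holds(3.0, sigma):.0%}")
```

Nothing about the lead itself changes here; only the remaining uncertainty does, and that alone pushes the probability upward.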
In early October, FiveThirtyEight gave Obama an 85 percent probability of winning the Electoral College. Two weeks later, the President was still favored to win, but the probability was considerably smaller at 65 percent, following his poor performance in the first presidential debate. But the night before the election, FiveThirtyEight forecast that Obama had over a 90 percent chance of winning the election and was more than two points ahead in the popular vote.
As it turned out, Silver correctly predicted the winner in all 50 states, including all nine highly contested swing states. He also correctly predicted the winner in 31 of the 33 Senate races. As this article evaluating his predictions pointed out: “Forty-eight out of 50 states actually fell within his margin of error, giving him a success rate of 96 percent. And assuming that his projected margin of error figures represent 95 percent confidence intervals, which it is likely they did, Silver performed just about exactly as well as he would expect to over 50 trials. Wizard, indeed.”
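That last claim is easy to verify. If each state call independently falls inside a 95 percent confidence interval, the expected number of hits over 50 “trials” is 47.5, so 48 is almost exactly what the model itself would predict. A quick binomial check in Python, assuming independence across states for simplicity:

```python
from math import comb

p, n = 0.95, 50  # 95% intervals, 50 states
print(f"Expected states within interval: {p * n}")  # 47.5

# Probability of 48 or more of the 50 states landing inside the interval:
p_48_plus = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(48, n + 1))
print(f"P(48 or more within interval) = {p_48_plus:.2f}")  # roughly 0.54
```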
How does he do it? The answer, I believe, is that Nate Silver brings a scientific point of view to the Wild West world of forecasting political elections. This is evident throughout his recently published book, The Signal and the Noise: Why So Many Predictions Fail - but Some Don’t. In the book, Silver not only explains his own approach to information-based predictions, but also examines the growing field of predictions and why so many fail in spite of, or perhaps because of, the vast quantities of information we now have available. He writes in the introductory chapter:
“The exponential growth in information is sometimes seen as a cure-all, as computers were in the 1970s. Chris Anderson, the editor of Wired magazine, wrote in 2008 that the sheer volume of data would obviate the need for theory, and even the scientific method. This is an emphatically pro-science and pro-technology book, and I think of it as a very optimistic one. But it argues that these views are badly mistaken. The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning. . . Data-driven predictions can succeed - but they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.”
Silver learned his craft in the new field of sabermetrics, the use of statistics in baseball to project a player’s performance and career. Sabermetrics was popularized by Michael Lewis in Moneyball, his bestselling book, later turned into a film, about Billy Beane, the Oakland Athletics general manager who used such statistical techniques to make his small-market team highly competitive against teams with much larger budgets.
Sabermetrics combined Silver’s love of statistics and baseball. He developed a sophisticated statistical system, PECOTA, for forecasting the future performance of baseball players by comparing the characteristics of the player being evaluated to the characteristics of all past and present players, and then looking at the actual career paths of those players with the most similar characteristics. This enables the system to assign a probability to different career trajectories for the player being evaluated.
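PECOTA’s actual attributes and similarity scores are proprietary, but the core idea as described above is a nearest-neighbor search. The Python sketch below illustrates it with a handful of made-up fields and weights; the real system compares far richer player profiles.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    age: int
    batting_avg: float
    walk_rate: float
    # The real system compares many more attributes; these are illustrative.

def distance(a: Player, b: Player) -> float:
    """Crude weighted distance between two players (smaller = more similar)."""
    return (abs(a.age - b.age) / 10.0
            + abs(a.batting_avg - b.batting_avg) / 0.050
            + abs(a.walk_rate - b.walk_rate) / 0.030)

def comparables(target: Player, history: list[Player], k: int = 3) -> list[Player]:
    """The k historical players most similar to the target; their actual
    career paths then define a distribution over the target's trajectories."""
    return sorted(history, key=lambda p: distance(target, p))[:k]

history = [
    Player("Veteran A", 27, 0.280, 0.090),
    Player("Veteran B", 26, 0.265, 0.110),
    Player("Veteran C", 31, 0.300, 0.070),
]
prospect = Player("Prospect", 26, 0.270, 0.100)
print([p.name for p in comparables(prospect, history, k=2)])
```

Once the most comparable players are found, their real careers serve as the sample from which probabilities for the prospect’s possible trajectories are read off.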
Sabermetrics is now used by every team in baseball. But, Silver asks in his book, can statistics alone tell you everything you want to know about a player? When Moneyball first came out, many viewed it as a story about the conflict between the traditional approach of the scouts and the new approaches being introduced by the statheads. Is there still a valuable role for the scouts, the professional talent evaluators who learn about players firsthand by traveling to watch them play and meeting them in person?
Billy Beane attributes the success of the Oakland A’s not just to their statistical aptitude but to their careful scouting of amateur players. In fact, their scouting budget is now much higher than it has ever been, as a way of complementing their statistical analysis. Scouting is particularly valuable when evaluating young amateur players who are still learning the game. This is when a good scout can spot the intangibles that may make a player stand out, such as mental makeup and overall attitude toward the game.
“The key to making a good forecast,” writes Silver, “is not in limiting yourself to quantitative information. Rather, it’s having a good process for weighing the information appropriately. This is the essence of Beane’s philosophy: collect as much information as possible, but then be as rigorous and disciplined as possible when analyzing it.”
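One textbook way to “weigh the information appropriately” is to combine independent estimates in proportion to their reliability, trusting the less noisy source more. The inverse-variance sketch below illustrates that principle; the numbers, and the framing of a scout’s judgment as a numeric estimate with an error bar, are purely illustrative assumptions, not Beane’s or Silver’s actual procedure.

```python
def combine(estimates):
    """Inverse-variance weighting of independent (estimate, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * est for w, (est, _) in zip(weights, estimates)) / sum(weights)
    return value, 1.0 / sum(weights)

# Illustrative: a statistical projection of a batting average and a scout's
# assessment of the same quantity, each with its own (made-up) uncertainty.
stat_projection = (0.285, 0.0004)   # tighter variance: trusted more
scout_assessment = (0.270, 0.0016)  # looser variance: trusted less
est, var = combine([stat_projection, scout_assessment])
print(f"Combined estimate: {est:.3f} (variance {var:.5f})")
```

The combined estimate lands closer to the more reliable source, and its variance is smaller than either input’s, which is the payoff of using both kinds of information rather than discarding one.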
The Signal and the Noise describes a number of areas where information-based predictions have been successful, including baseball, political elections and the forecasting of hurricanes. Good hurricane forecasting has life-and-death consequences. Twenty-five years ago, for example, you could only predict a hurricane’s landfall 72 hours in advance to within 350 miles, far too large an area to evacuate if necessary. Forecasters can now come within a much more manageable 100 miles, and the forecasts are getting better every year. As we just saw with Hurricane Sandy, such accurate predictions truly help save lives.
“But,” writes Silver, “these cases of progress in forecasting must be weighed against a series of failures.” These include our inability to see the September 11 attacks coming, as well as our inability to predict the recent global financial crisis. “There are entire disciplines in which predictions have been failing, often at great cost to society.”
As one reviewer observed, given his accomplishments and renown, it’s slightly heartbreaking that instead of a “geek-conquers-world” book about “his rise to statistical godliness”, Silver wrote a book examining the state of predictions in a variety of fields, with a handful of successes amidst a large number of failures. “As science, this investigation is wholly satisfying. As a literary proposition, it’s a bit disappointing. It’s always more gripping to read about how we might achieve the improbable than why we can’t.”
But good scientists and engineers are fatalistic by nature, always worrying about what can go wrong with their predictions and designs. We have gotten pretty good at predictions and designs in mature fields like physics and civil engineering, but we are just learning how to do so in sociotechnical systems, that is, systems involving people and organizations. Such systems generally exhibit a level of complexity that is often beyond our ability to understand and control. Not only do we have to deal with very tough mathematical problems, but also with the even more complex issues involved in human and organizational behavior. Our instincts will often lead us to see patterns where there might be none. We have to work very hard to become aware of, and try to overcome, our biases.
“Prediction is difficult for us for the same reason that it is so important: it is where objective and subjective reality intersect,” writes Nate Silver in the concluding paragraphs of his book. “Distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference.”