Posted by Curt on 18 May, 2016 at 4:00 pm.


Jazz Shaw:

While FiveThirtyEight has once again run up a remarkable record of success in predicting primary races this year, one thing that Nate Silver and his team of magical statistics elves got wrong for quite a while was the rise of Donald Trump. Nate gave Trump a 2% chance of winning the nomination last August and that figure had only risen to 12 or 13 percent by late January just before the voting began. This week he looks back at how they missed the rise of The Donald and what, if anything, they might do better next time.

First, he explains that before any voting had actually taken place, their model relied a lot less on math and more on what amounted to punditry.

Unlike virtually every other forecast we publish at FiveThirtyEight — including the primary and caucus projections I just mentioned — our early estimates of Trump’s chances weren’t based on a statistical model. Instead, they were what we sometimes called “subjective odds” — which is to say, educated guesses. In other words, we were basically acting like pundits, but attaching numbers to our estimates. And we succumbed to some of the same biases that pundits often suffer, such as not changing our minds quickly enough in the face of new evidence. Without a model as a fortification, we found ourselves rambling around the countryside like all the other pundit-barbarians, randomly setting fire to things.

Beautifully written and it’s not terribly hard for us laymen to follow. But what about the future? While I still disagree with at least part of the fundamental premise, Nate makes some observations about the scientific method which are sound advice for everyone. In particular, note the part about how an honest scientist should always fail forward when their model falls apart and their forecast fails to materialize.

This instinct to be accountable for one’s predictions is good since the conceit of “data journalism,” at least as I see it, is to apply the scientific method to the news. That means observing the world, formulating hypotheses about it, and making those hypotheses falsifiable. (Falsifiability is one of the big reasons we make predictions.) When those hypotheses fail, you should re-evaluate the evidence before moving on to the next subject. The distinguishing feature of the scientific method is not that it always gets the answer right, but that it fails forward by learning from its mistakes.

If we were only talking about raw science in the field of physics, chemistry or one of the other mainstays of academic pursuit, I’d agree with pretty much everything Nate is saying. I’m a big fan of the scientific method myself. But we run into huge problems when we try to transfer these principles from the field of physics over to the wilderness of anthropology and behavioral science. If you fill a beaker with a liter of tap water and begin reducing the temperature it will begin to freeze at around zero degrees centigrade. If you repeat the experiment but mix in half a kilogram of salt, the temperature where it freezes will go down. You can repeat the experiment over and over, trusting in the same results. If it fails to happen, you can do precisely what Nate Silver suggests and go back to study what caused the anomaly. Perhaps someone swapped out your tap water with ethyl alcohol or your assistant ran out of salt and replaced your supply with something else. In any event, an answer will be found and you can do better when you run the test again.
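For the curious, the salt-water effect described above even has a textbook formula behind it — freezing-point depression, ΔT = i·Kf·m. Here’s a rough sketch of that ideal calculation; note it assumes a dilute solution, so at half a kilogram of salt it badly overshoots (real brine can’t freeze below roughly −21 °C, the eutectic point):

```python
def freezing_point_c(salt_kg, water_kg=1.0):
    """Ideal freezing point of salt water in degrees C (Blagden's law).

    Only accurate for dilute solutions; heavy brines deviate sharply.
    """
    KF_WATER = 1.86          # cryoscopic constant of water, degC * kg/mol
    MOLAR_MASS_NACL = 58.44  # grams per mole of table salt
    VANT_HOFF_NACL = 2       # NaCl dissociates into two ions, Na+ and Cl-
    moles_salt = salt_kg * 1000 / MOLAR_MASS_NACL
    molality = moles_salt / water_kg          # moles of solute per kg of water
    return 0.0 - VANT_HOFF_NACL * KF_WATER * molality

print(round(freezing_point_c(0.0), 1))  # pure tap water: 0.0
print(round(freezing_point_c(0.5), 1))  # half a kilo of salt: -31.8 (ideal-law overshoot)
```

The point stands either way: the beaker gives the same answer every run, which is exactly what voters refuse to do.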

People, however, don’t follow those sorts of rules. We’re not all grains of salt or drops of water and some of us will freeze or boil at the most unexpected of times. Even worse, we’re frequently not as consistent as we might like to think. Our views can shift over time and some of us will occasionally break out some downright psychotic moves which are beyond prediction.

So how does Nate get it right so often? It’s because he’s taking some of the mystery out of the process and basing predictions on people telling him what they plan to do. It’s a fairly good gauge to be sure… too good if you ask me. (More on that below.) But when you allow your own opinions and bias to take the place of what the people on the phone are telling you or how they filled out their ballots at their polling place, you can miss a lot. Further, no matter how good a track record a pollster accumulates, there’s always going to be a low turnout election or a sudden explosion in the news cycle that can send your best analysis pear shaped.
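Even before bias or a news-cycle surprise enters the picture, asking people what they plan to do carries built-in sampling error. A back-of-the-envelope sketch of the standard 95% margin-of-error calculation (the poll size and result here are hypothetical):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical primary poll of 800 likely voters showing a candidate at 40%:
moe = margin_of_error(0.40, 800)
print(f"+/- {moe * 100:.1f} points")  # +/- 3.4 points
```

And that plus-or-minus figure only covers random sampling noise — it says nothing about who actually turns out, which is exactly the failure mode described above.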

All of the polling science aside, though, one lesson we can take from this is my long-standing argument that polling is ruining politics. It could be explained in yet another emotional diatribe, but since we’re on Nate’s science kick at the moment, think of it as an example of the observer effect: the act of observing a phenomenon can change its outcome. For you quantum theory enthusiasts, the most telling example of this is the double-slit experiment. (If you’ve never seen that one, definitely check it out and watch this video. It can blow your mind.) But studying elections is a far more clear cut example of the phenomenon than observing electrons and arguing over whether light is acting like a particle or a wave.

Read more
