Misusing Probability

Nate Silver has been getting some flak for his declaration that Donald Trump had a 2% chance of winning the GOP nomination.

One thing to note in Silver's defense: a "2% chance" is not "no chance": 2% chance events do happen! But the real problem lies elsewhere: Silver thinks we can assign "objective" probabilities to one-off events. To assign a probability to any potential happening in an "objective" way, however, we have to abstract away from the particular circumstances of time and place everything that cannot be reduced to a number, so that we can "objectively" place that potential happening in a class with other, past happenings taken to be "identical" to it in all relevant features except those differing numbers. (For instance, we have to turn each presidential candidate into a point in a vector space, where the "factors" we choose to include in our analysis are the dimensions of the space, and candidates differ only by their positions in that space.)
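To make that abstraction concrete, here is a minimal sketch; the factor names and numbers are invented purely for illustration, not drawn from any real analysis:

```python
import math

# Purely illustrative: the factors and the numbers are invented, not real data.
# Once the abstraction is made, a candidate simply *is* a point in this space.
FACTORS = ("endorsement_points_pct", "iowa_vote_pct", "media_mentions_thousands")

candidates = {
    "Candidate A": (4.0, 22.0, 90.0),
    "Candidate B": (3.5, 21.0, 12.0),
    "Candidate C": (60.0, 25.0, 40.0),
}

def distance(a, b):
    """Within the abstraction, 'similarity' is nothing but distance between points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Whatever else distinguishes these candidates in the concrete situation has been
# abstracted away; here they are merely nearer to or farther from one another.
for name, point in candidates.items():
    print(name, round(distance(candidates["Candidate A"], point), 1))
```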

This works fine if we are dealing with the spin of a roulette wheel. Of course, if we could precisely measure the exact play of forces at work, we could predict exactly where the ball would come to rest on each spin. But given that this is unachievable, we are justified in treating those forces as "random factors": since we know of no super roulette-wheel spinner who can twist his wrist precisely enough to deliver the ball to a chosen number, and no super wheel designer who can manufacture the minor variations that must exist between different wheels and balls so as to produce whatever pattern a client desires, we can reasonably say that if there are thirty-six numbers on the wheel, the odds of a bet on any one number paying off are 1 in 36.
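A minimal sketch of why the frequency treatment is reasonable in this case, assuming a hypothetical 36-pocket wheel whose unmeasured forces are modeled as a uniform random draw:

```python
import random

# A hypothetical 36-pocket wheel: the unmeasurable play of forces is modeled
# as a uniform random draw, which is exactly the abstraction being defended here.
POCKETS = 36
TARGET = 7  # the number we bet on; any choice works the same way
random.seed(0)

hits = 0
for spins in range(1, 1_000_001):
    if random.randrange(1, POCKETS + 1) == TARGET:
        hits += 1
    if spins in (1_000, 100_000, 1_000_000):
        print(f"{spins:>9,} spins: observed frequency {hits / spins:.5f} "
              f"vs. class probability {1 / POCKETS:.5f}")
```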

But look what happens when we try to apply this method to, say, an election, as Silver does:

"But some candidates with parallels to Trump have done perfectly well in Iowa and New Hampshire. In fact, there’s been about one such Republican, on average, in every contested election cycle. Below, I’ve listed past Republican candidates who (i) had less than 5 percent of the party’s endorsement points as of the date of the Iowa caucuses, meaning they had very little support from the party establishment, but (ii) won at least 20 percent of the vote in Iowa anyway."

If we are really going to have an objective measure of probability for "Trump winning the GOP nomination," we have to assume:

1) The other "such Republicans" being placed in the class of "outsider candidates" with Trump are, for all intents and purposes, as identical to Trump as each gambler betting at the roulette wheel is identical to every other one. But this assumption is wildly false: presidential aspirants differ from each other vastly more than do different gamblers at a roulette wheel.

In particular, here we would have to assume that the facts that Trump:
a) is a billionaire;
b) has yuuuge name recognition; and
c) has almost 40 years of experience manipulating the media...
still leave him as identical to all the other outsider candidates as each gambler is to all the others placing their chips at the roulette table.

2) That each "contested election cycle" is, for all intents and purposes, identical to every other "contested election cycle."

In particular, here we would have to assume that the 2016 contested election cycle differs from previous ones by no more than a spin of the roulette wheel in 2016 differs from a spin in 1996. The failed Iraq war, the recent struggles of the middle class, the foibles of the surveillance state, the repeated betrayal of the Republican base by their candidates... all of those are no more than minuscule and ignorable bumps on the roulette wheel.

3) That having "less than 5 percent of the party’s endorsement points as of the date of the Iowa caucuses" has the same influence on voters in every election cycle. So we have to abstract away the very sense of betrayal by the party elites that led so many GOP voters to choose Trump and ignore party endorsements.

A statistical analysis of a concrete situation is always an abstraction, one that strips away anything preventing us from including the concrete situation in an abstract class of situations about which we can reason probabilistically. In the case of a roulette wheel, such an abstraction does not falsify the concrete situation very much, and thus can be used with a fair amount of confidence. But in the case of a presidential campaign, the distortions needed to fit the actual situation into a probabilistic analysis are tremendous: we must abstract away the actual candidates and turn them into "typical candidates," abstract away the actual circumstances and turn them into "typical circumstances," and abstract away the actual electorate and turn it into a "typical electorate."

And I don't object to anyone who wants to perform all three of the above magical feats: I just don't want anyone to be duped into thinking that such a magician has arrived at some "objective" probability of what the actual outcome will be.

And, of course, my analysis here is not original. For instance, see Ludwig von Mises:

'Case probability is a particular feature of our dealing with problems of human action. Here any reference to frequency is inappropriate, as our statements always deal with unique events which as such--i.e., with regard to the problem in question--are not members of any class. We can form a class "American presidential elections." This class concept may prove useful or even necessary for various kinds of reasoning, as, for instance, for a treatment of the matter from the viewpoint of constitutional law. But if we are dealing with the election of 1944--either, before the election, with its future outcome or, after the election, with an analysis of the factors which determined the outcome--we are grappling with an individual, unique, and nonrepeatable case. The case is characterized by its unique merits, it is a class by itself. All the marks which make it permissible to subsume it under any class are irrelevant for the problem in question.'


PS: And note Silver could have chosen completely different abstractions to focus his probabilistic analysis upon: why not classify candidates by "net worth" and "years of media exposure" instead of "outsider status" and "party endorsements"? I bet the analysis would have come out a bit differently!
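A toy illustration of that point, with invented records: swapping the classifying factors changes which past candidates count as being "like Trump" at all, and hence what frequency-based estimate falls out.

```python
# Invented records again; the only point is that the membership of "candidates
# like Trump" changes with the factors we happen to pick for the abstraction.
candidates = [
    {"name": "A", "outsider": True,  "net_worth_bn": 0.001, "media_years": 5},
    {"name": "B", "outsider": True,  "net_worth_bn": 0.3,   "media_years": 12},
    {"name": "C", "outsider": False, "net_worth_bn": 4.5,   "media_years": 38},
]

by_outsider_status  = [c["name"] for c in candidates if c["outsider"]]
by_wealth_and_media = [c["name"] for c in candidates
                       if c["net_worth_bn"] >= 1 and c["media_years"] >= 30]

print(by_outsider_status)    # ['A', 'B']
print(by_wealth_and_media)   # ['C']
# Different choice of factors, different reference class, different "probability."
```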

Comments

  1. I think the statistical abstraction issues you list are moot so long as the predictor can prove to be well-calibrated: 5% of their 5%-assigned-probability events happen, 10% of their 10%-assigned, and so on. That would establish that they're appropriately abstracting over the relevant classes.

    The problem, though, is that Silver has failed at that for this cycle. While I don't have the figures for nominations, I know that his primary predictions have given Bernie Sanders several "1% chance to win" forecasts in primaries that Sanders went on to win; he's won well over 1% of those!

    1. Here's the thing, Silas: I DON'T object to someone trying to do what Silver is doing. But even if the predictor appears well calibrated for a while, we have strong reasons to believe, with all of these vital factors left out, that this is a transient state of affairs. As shown by Silver's mistakes this cycle!

  2. Being a billionaire may be an important factor, but it worked relatively poorly for the political career of the Rockefellers.

  3. Great post! Am I right to say that you differ from Mises in that Mises thinks it's illegitimate to use typical probabilistic analysis for case probability, while you think it's flawed but could conceivably be useful if done right?

