Caplan Notes Flaw in Bayesian Updating Theory

In his objection to the Austrian embrace of radical uncertainty, which Bob mentioned below, Bryan Caplan writes:

"But if people really assigned p=0 to an event, then the arrival of counter-evidence should make them think that they are delusional, not that a p=0 event has occurred."

What Caplan is relying on here is the theory of "Bayesian updating," which supposedly describes how rational thinkers change the probability they assign to a theory in the light of new evidence. I don't want to go into the details here -- they are unimportant for my point -- but the basic idea is that you "start out" by assigning some "prior probabilities" to various theories about some phenomenon, or outcomes of some event, and then multiply each "prior" by a factor based on how likely the new evidence is under that theory.
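The mechanics can be sketched in a few lines. This is a minimal illustration, not anything from Caplan's post; the theory names and numbers are made up for the example:

```python
# A minimal sketch of Bayesian updating over two rival theories.
# All names and numbers here are illustrative.

def update(priors, likelihoods):
    """Multiply each prior by the likelihood of the evidence under that
    theory, then renormalize so the posteriors sum to 1."""
    unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

priors = {"theory_A": 0.7, "theory_B": 0.3}
# Suppose the new evidence is four times as likely under theory B:
likelihoods = {"theory_A": 0.1, "theory_B": 0.4}
posteriors = update(priors, likelihoods)
# Theory B now ends up more probable than theory A, despite its lower prior.
```

The "factor" in the text is the likelihood term: evidence that is more probable under one theory shifts weight toward that theory.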

For instance, suppose you are a late-19th-century physicist evaluating how likely it is that Newtonian mechanics is the true description of matter in motion. At that time, there would have been physicists who assigned p=1 to its being true, and p=0 to its being false. At the very least, many physicists would have assigned p=0 to something as weird as quantum mechanics being true!

Now, as the years pass, you are presented with startling new evidence about black-body radiation, the photoelectric effect, and so on, and with a startling new theory in addition. According to the theory of Bayesian updating, the "rational" response is just to think you must be delusional in believing you have heard this new data! You had assigned the alternative theory a prior of 0, and no factor by which the new evidence tells you to multiply that prior can ever move it off zero.
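The zero-prior trap is easy to demonstrate numerically. A hedged sketch, with made-up likelihood values -- the point is only that zero times anything is zero:

```python
# Sketch: once a theory's prior is exactly zero, no evidence can revive it.
# The likelihood numbers below are purely illustrative.

def update(priors, likelihoods):
    # posterior is proportional to prior times likelihood of the evidence
    unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

priors = {"newtonian": 1.0, "quantum": 0.0}
# Evidence (the black-body spectrum, say) is vastly more likely under
# quantum mechanics -- yet this cannot matter:
likelihoods = {"newtonian": 1e-12, "quantum": 0.99}
posteriors = update(priors, likelihoods)
# The quantum posterior is still exactly 0.0, and Newtonian mechanics
# keeps probability 1.0, no matter how damning the evidence.
```

However extreme the likelihood ratio, the update rule can never promote a theory whose prior was set to zero.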

Of course, that is not what real scientists did at all. Instead, they assigned whole new "priors" -- they thought, "Mon Dieu, I had never considered the possibility of this theory or this evidence, and therefore I was in a state of 'radical uncertainty,' and ought to re-think everything." But allowing that maneuver thwarts the whole motive for Bayesian updating, which is to turn rational choice between theories into a formal, mechanical procedure.

Thank God scientists are not "rational" Bayesian updaters, and thank Bryan Caplan for providing a good example of why it's good that they are not.

Comments

  1. I don't understand what it means to assign probabilities like this to begin with. Don't you need a repeating (or at least repeatable) process for probabilities to be meaningful?

    Yes, I can say things like "I think Obama has a 60% chance of winning," but isn't this just a way of communicating subjective strength of uncertainty? It's not like we can ever re-run history and see whether, in fact, out of 100 elections Obama wins 60 of them. How can we speak scientifically of probabilities if we never have a way of determining whether the probability assigned was "right"?

    These probabilities seem to me like saying "I like vanilla ice cream twice as much as chocolate". Just because you can say that, and it communicates something to someone else, does not mean that we should take it too literally.

  2. Yes, Jacob, you have hit upon another problem with this Bayesian updating model, and that is why I put "start out" in quotes -- somehow, there is supposed to exist some virginal moment when one is "assigning" probabilities to outcomes or theories in the absence of any prior evidence, and, of course, such a moment can never exist.

    Instead, what really occurs is a continual updating of one's "priors" which were never really "prior" in the first place, but always were formed on the basis of existing evidence.

  3. Just curious, do you guys follow the Overcoming Bias blog? Eliezer Yudkowsky writes a lot about Bayesian reasoning; it'd be fun to watch him and Gene argue.

  4. Anonymous 5:27 PM

    Gene,
    Kind of a mistake here. Scientists see the data and they get a POSTERIOR, not a prior. Priors and posteriors are probability measures on parameter (or model) space. So, with the new data, the posterior would assign a much lower probability to the Newtonian model continuing to be true. And this new posterior is used for all future predictions, which would give a much higher probability of these quantum effects.

    If you're looking for densities on event space, usually they'd be predictive densities, prior predictive or posterior predictive. Given the prior belief that there is a high probability of Newtonian mechanics being true, all of the quantum phenomena will be judged very improbable; that part is true.

    I don't see the problem.

    Best to leave these things to experts or try to read from them directly. Have you read any books on the philosophy of probability? I think you'll see that Bayesian theory, either subjectivist (a la de Finetti) or objectivist (a la Jeffreys and Jaynes), is relatively sound when confronted with these sorts of minor attacks.

  5. I don't get what you think I'm mistaken about. (Oh, and I read "directly" about Bayesian updating and studied it with experts at LSE -- one of the top rational choice schools in the world.) Should the "posterior" be arrived at through Bayesian updating or not? If it is, and the prior assigned to quantum mechanics was 0, then the posterior will be 0 also. Do you disagree with that?

    Maybe what you're getting at is that I "misused" prior as coming in after the data -- but that was my point -- the scientists DON'T do Bayesian updating, and instead, effectively, set new priors.

  6. Oh, and anonymous, it's really best you leave this sort of thing to the experts, OK?

  7. Gene said:

    [I]t's really best you leave this sort of thing to the experts, OK?

    My prior assumption was that I was cockier than you.

    I may have to update.


