When Has a Prediction Market Failed?
Scott Sumner, in an aside to his case for a GDP prediction market, writes the following concerning Intrade's market on the Supreme Court's Obamacare decision:
The market didn't "fail" at all, the 80% forecast was probably the optimal forecast... Sure, there was always some uncertainty, that's what 80% means. That's why the market didn't price in a 100% chance of the law being overturned... Consider the following analogy: Two prediction markets are set up to predict the toss of the coin before the next Super Bowl. One says 50% odds of heads and the other says 58% odds of heads. Then the coin is tossed, and it's heads. Which market "failed?" I'd say the market with the 58% forecast. They made a bad forecast and simply got lucky.

This raises an interesting issue in probability theory, related to the Mises brothers' concerns about case probability: For a unique event that will never be repeated, what, exactly, does it mean to have an "optimal" forecast? Sumner's analogy is no help here: we can say that the 50% forecast for the coin toss was optimal because events very similar to that coin toss will be repeated again and again, and, in the long run, the 50% forecast will prove much more accurate than the 58% forecast. This obviously won't work for determining whether Intrade's 80% forecast of Obamacare being struck down was optimal: nothing remotely similar enough to permit a frequentist interpretation of probability will ever occur again.
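The long-run comparison in the coin-toss case can be made concrete with a small sketch. Brier scoring (squared error of the forecast against the 0/1 outcome) is my choice of accuracy measure here, not something Sumner specifies:

```python
import random

def brier(q, outcome):
    """Squared error of forecast q against an outcome in {0, 1}."""
    return (q - outcome) ** 2

def expected_brier(q, p):
    """Expected Brier score of a constant forecast q for a Bernoulli(p) event."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# Closed form, fair coin (p = 0.5): the 50% forecast scores 0.25,
# the 58% forecast scores worse (about 0.2564).
print(expected_brier(0.50, 0.5))
print(expected_brier(0.58, 0.5))

# Long-run repetition is what reveals this: average the score over many tosses.
random.seed(0)
tosses = [random.random() < 0.5 for _ in range(100_000)]
mean_50 = sum(brier(0.50, t) for t in tosses) / len(tosses)
mean_58 = sum(brier(0.58, t) for t in tosses) / len(tosses)
assert mean_50 < mean_58
```

Any proper scoring rule would tell the same story; the point is only that repetition is what lets the 50% forecast demonstrate its superiority, and repetition is exactly what a one-off event like the Obamacare ruling disallows.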
It also won't work to see how well Intrade predictions fare over a large set of such one-off predictions and then simply assume that this aggregate accuracy applies to individual forecasts: if we found that, on average, Intrade predicts pretty well, that is consistent with any amount of sub-optimality in individual predictions, so long as the times the odds are too low are balanced by the times they are too high. (Even that, of course, assumes that it means something for a prediction of a unique event to be optimal.)
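The balancing worry can be illustrated with a hypothetical (the numbers here are mine, purely for illustration): a forecaster who always says 70%, facing events whose true chances are 80% half the time and 60% the other half, looks perfectly calibrated in the aggregate while being wrong about every single event.

```python
import random

random.seed(1)

# Hypothetical forecaster: always reports 70%. Half the events truly have
# an 80% chance, half truly have a 60% chance -- so the stated odds are too
# low half the time and too high the other half.
N = 100_000
true_probs = [0.8 if i % 2 == 0 else 0.6 for i in range(N)]
outcomes = [random.random() < p for p in true_probs]

# Aggregate check: among events all forecast at 70%, how often did "yes" occur?
hit_rate = sum(outcomes) / N
print(f"forecast 0.70, realized frequency {hit_rate:.3f}")  # ~0.700

# The forecaster passes the aggregate test, yet every individual forecast
# missed the true probability by 10 percentage points.
```

So even a long track record of good average performance cannot, by itself, certify that any particular one-off forecast was optimal.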
A bookie solves this puzzle quite simply, though in a fashion that begs the question for Sumner's purposes: the "optimal" forecast is the one that keeps him in the position of a risk-free collector of the vigorish. He has no concern at all for which odds are "optimal" in the sense of giving the best possible prediction of what will really happen.
Sumner approaches the problem by giving some reasons to think 80% was not an unreasonable guess. But that backs not a claim of optimality, only the much weaker claim that the guess was not outlandish.
So is there a meaning to the claim that some prediction of some unique event is optimal? And if there is, is there a way to demonstrate that optimality?
This essay might interest you: http://www.phil.cam.ac.uk/~swb24/PAPERS/Allsoulsnight.htm
In fact, it inspired me!