"Winner's Curse" in Publishing?

This is the kind of thing Gene would complain about, so I'm posting it here rather than at Free Advice. In this post, Tyler Cowen summarizes an article that claims:

In economic theory the winner’s curse refers to the idea that someone who places the winning bid in an auction may have paid too much... The same thing may be happening in scientific publishing, according to a new analysis. With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false. This would produce a distorted picture of scientific knowledge, with less dramatic (but more accurate) results either relegated to obscure journals or left unpublished.


By itself, this is just goofy. There are probably a good seven things wrong with this theory. (I confess I haven't read the paper, so maybe they address all seven.) Here are some big problems, some of which I thought of myself and others that people at Marginal Revolution (MR) came up with:

* As with other nifty things, like the criterion of falsifiability, the idea above falls apart if you apply it to itself. Or, as I put it in the comments: "I have tried for years to get a paper published that says the refereeing process tends to select papers with true hypotheses, but no journal was interested."

* This really isn't the winner's curse. Even if it's true that "the most interesting papers are probably wrong"--which is how either Steve Landsburg or Bryan Caplan (it was one of the two, but I can't remember which) phrased it several years ago--that's not what the winner's curse actually describes. As an MR comment explained: "[T]he winner's curse applied to papers would be that published papers present more information than was necessary to get published, IE they could have published 2 papers instead of the one. Therefore, I refuse to read the paper because I disagree with the conclusion from the snippet Tyler posted... Why does the extra information have to be incorrect? Doesn't the Journal check the validity of the information before it is published? (And I'm not talking about the economist.)"

* The comment quoted above actually made two points; the second is the one I'll address here: even granting the tendency for referees to be more interested in "unexpected" results, why can't they be aware of that tendency and scrutinize those papers more carefully? In the actual winner's curse literature, for example, rational bidders in Nash equilibrium don't overbid: they are aware of the forces that would lead a group of naive bidders to consistently fall prey to the winner's curse, and they adjust for it (see the bidding sketch at the end of this post). So do economics referees not understand game theory? Maybe they don't, but the point is, you can't build a model of the refereeing process that yields this perverse outcome unless you plug in that the referees (in the model) are morons. So yeah, no kidding: if the referees are morons, then we can't trust the papers they publish.

* Finally, here's a great point someone brought up (note that he begins by quoting from the article):

"Dr Ioannidis based his earlier argument about incorrect research partly on a study of 49 papers in leading journals that had been cited by more than 1,000 other scientists. They were, in other words, well-regarded research. But he found that, within only a few years, almost a third of the papers had been refuted by other studies. For the idea of the winner’s curse to hold, papers published in less-well-known journals should be more reliable; but that has not yet been established."

This sounds to me like:

1) We don't know how likely papers in other journals are to be refuted, so we don't have a sense that the ones in highly regarded journals are *unusually* likely to be refuted;

2) Even if they are unusually likely to be refuted, I'd like to see an argument that it's because they're *worse*, not because the extra attention paid to them due to their place of publication has led to more efforts to replicate/refute their results.
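To put some made-up numbers on point 2, here's a quick back-of-the-envelope simulation (my own sketch, not anything from Ioannidis's study or the article; every parameter is invented). Both groups of papers are wrong at exactly the same underlying rate; the only difference is that the top-journal papers attract far more replication attempts, so far more of their errors actually get caught:

```python
import random

# My own hypothetical illustration of the "extra attention" point -- not from
# Ioannidis's study. Papers in both groups are false at the same underlying
# rate; they differ only in how often anyone bothers to try to replicate them.

random.seed(1)

def observed_refutation_rate(n_papers=5_000, p_false=0.30,
                             p_replication_attempt=0.9,
                             p_refute_if_false_and_checked=0.8):
    """Fraction of papers that end up publicly refuted."""
    refuted = 0
    for _ in range(n_papers):
        is_false = random.random() < p_false
        checked = random.random() < p_replication_attempt
        if is_false and checked and random.random() < p_refute_if_false_and_checked:
            refuted += 1
    return refuted / n_papers

top = observed_refutation_rate(p_replication_attempt=0.9)      # heavily scrutinized
obscure = observed_refutation_rate(p_replication_attempt=0.1)  # mostly ignored
print("Same 30% underlying error rate in both groups:")
print(f"  observed refutation rate, top journals:     {top:.1%}")
print(f"  observed refutation rate, obscure journals: {obscure:.1%}")
```

With these invented parameters the top journals look far worse on observed refutations, even though their papers are, by construction, no less reliable than anyone else's.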
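And since the third bullet above leans on the mechanics of the winner's curse, here is the bidding sketch I had in mind (again, my own illustration with made-up parameters, not something lifted from the auction literature). Everyone gets a noisy signal of the item's common value; naive bidders who bid their raw signal win exactly when their signal is the most optimistic, so they overpay on average, while bidders who shade their bids to account for that selection effect don't:

```python
import random
import statistics

# My own hypothetical common-value auction sketch -- parameters are invented.
# The winning bidder is the one with the highest (most optimistic) signal, so
# bidding your raw signal means you systematically overpay: the winner's curse.

random.seed(0)

def average_winner_profit(n_auctions=10_000, n_bidders=10, true_value=100.0,
                          noise_sd=20.0, shade=0.0):
    """Average profit of the winning bidder; `shade` is subtracted from each bid."""
    profits = []
    for _ in range(n_auctions):
        signals = [random.gauss(true_value, noise_sd) for _ in range(n_bidders)]
        winning_bid = max(s - shade for s in signals)
        profits.append(true_value - winning_bid)
    return statistics.mean(profits)

print("Naive bidders (bid = signal):  avg winner profit =",
      round(average_winner_profit(shade=0.0), 2))   # clearly negative
print("Bidders who shade their bids:  avg winner profit =",
      round(average_winner_profit(shade=30.0), 2))  # roughly break-even
```

The shading amount here is hand-picked rather than derived; in an actual Nash equilibrium it would come out of the signal distribution and the number of bidders. But the qualitative point stands: bidders who understand the selection effect adjust for it, which is exactly what I'm suggesting referees could do.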
