Why So Many Statistical Studies Are Worthless


The findings of statistical studies are usually deemed "significant" when, assuming no real connection exists, there is less than a 5% probability that findings like them would arise by mere chance in the selection of a sample to study.

Keep that in mind, and let's first consider just sociologists: the American Sociological Association claims 21,000 members in its various sub-groups. Let us guess (the exact numbers don't matter for my point) that each member undertakes two statistical studies per year, and that half of those studies examine relationships where no real connection in fact exists. Since the 5% threshold means one in twenty of those null studies will come up "significant" by luck alone, this group will produce over a thousand studies per year (5% of 21,000) that appear to show a significant correlation between different phenomena, but in which the significance was really only the result of the luck of the draw in picking a sample to examine.
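
A quick simulation makes that arithmetic concrete. The sketch below is my own illustration, not from the post: the study count and the 5% threshold are the guesses above, and the rest is made up for the example.

```python
import random

random.seed(0)

# Assumed numbers from the guess above: 21,000 studies per year that test
# relationships where no real connection in fact exists.
NULL_STUDIES = 21_000
ALPHA = 0.05  # the conventional 5% significance threshold

# Each null study has a 5% chance of coming up "significant" by luck alone.
false_positives = sum(random.random() < ALPHA for _ in range(NULL_STUDIES))

print(f"Expected false positives: {ALPHA * NULL_STUDIES:.0f}")  # 1050
print(f"Simulated false positives: {false_positives}")          # about 1050
```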

Next let us turn our attention to the bias in academic journals toward results that are positive (no one cares much about studies showing no connection between sunspots and detergent purchases) and surprising (no one cares much about studies showing that rude people are annoying). This bias means that these false positives, being both positive and often surprising, have a significantly greater likelihood of being published than do the other 20,000-odd null studies the sociologists produce, the ones that correctly found nothing. And we can further add in the not-so-subtle pressure on academics to "publish or perish," which can consciously or unconsciously push them to manipulate their studies until they produce a publishable result.
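
Here is a toy model of that filter, again my own sketch with assumed publication probabilities: flukes get submitted and accepted far more often than honest null results, so the published record over-represents them.

```python
import random

random.seed(0)

N_NULL = 21_000            # null studies per year, per the guess above
ALPHA = 0.05               # 5% significance threshold
P_PUB_SURPRISING = 0.50    # assumed: a surprising "finding" often gets published
P_PUB_NULL_RESULT = 0.02   # assumed: a no-connection result rarely does

published_flukes = published_nulls = 0
for _ in range(N_NULL):
    fluke = random.random() < ALPHA  # "significant" by luck alone
    if random.random() < (P_PUB_SURPRISING if fluke else P_PUB_NULL_RESULT):
        if fluke:
            published_flukes += 1
        else:
            published_nulls += 1

total = published_flukes + published_nulls
print(f"Published studies of nonexistent relationships: {total}")
print(f"Share reporting a spurious connection: {published_flukes / total:.0%}")
```

With these made-up rates, over half of the published studies of nonexistent relationships report a connection, even though only one in twenty of the underlying studies did.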

Now think about this: let's say I do a study that does find a "significant" relationship between sunspots and detergent purchases. That's pretty darned surprising. I now have two choices. I can spend a couple more years doing further studies to see if my result holds up; given that it most likely won't, I wind up with nothing whatsoever to publish after three years of work. Or I can just pump out a paper on my initial study, put another publication on my CV, and move on to something else. Hmm, if I am trying to achieve tenure, which to choose, which to choose...?
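
Why the result "most likely won't" hold up can be put in numbers with a back-of-the-envelope Bayes calculation. Every figure here is an assumption of mine, the prior especially, since a sunspot-detergent link is wildly implausible to begin with.

```python
# All numbers are illustrative assumptions, not from the post.
prior = 0.001   # assumed prior probability the sunspot-detergent link is real
alpha = 0.05    # false-positive rate at the 5% threshold
power = 0.80    # assumed chance a study detects a real effect

# Bayes' rule: how likely is the link real, given one "significant" study?
p_real = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(link is real | significant result) = {p_real:.1%}")   # about 1.6%

# So a follow-up study will probably come up empty:
p_replicates = p_real * power + (1 - p_real) * alpha
print(f"P(follow-up is also significant) = {p_replicates:.1%}")  # about 6%
```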

Finally, throw in economists, and medical researchers, and political scientists, and psychologists, and education researchers, and anthropologists: it should be clear that the "literature" of statistical studies is awash in nonsense. One can mine it to prove pretty much anything one wants to prove. Sure, there will be some gems in the mud. But we all have limited time. That is why I suggest it is only worth paying attention to studies that find the opposite of what the researcher set out to prove. In those cases, the researcher is likely to double- and triple-check his results, and if he does finally publish the work, we can have some confidence that there really is a significant finding.
