I got a clearer handle on exactly what was wrong with Bryan Caplan's post on Bayesian updating, and, indeed, with the general application of Bayesian updating to scientific theories, while reading The Drunkard's Walk. On pages 110-111 Mlodinow discusses the original example Bayes used to illustrate his theory. You have a table onto which you can roll balls in such a way that each ball has an equal probability of arriving at any point on the surface. You roll out one ball. Now your job is to determine how far that ball lies from the left-right axis. You do that by rolling more balls, seeing whether they lie to the right or to the left of the original ball, and using Bayesian updating to refine your idea of where the original ball lies.
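Under Bayes's own setup this procedure has a simple form. Here is a minimal sketch (the names, the 0.3 position, and the toss count are my own, chosen for illustration): if the first ball sits a fraction x of the way in from the left edge, each new ball lands to its left with probability x, and a uniform prior over x updates to a Beta distribution whose mean we can report as our estimate.

```python
import random

def posterior_mean(n_left, n_total):
    # Under a uniform prior, observing n_left of n_total balls to the left
    # gives a Beta(n_left + 1, n_total - n_left + 1) posterior for x;
    # this returns its mean.
    return (n_left + 1) / (n_total + 2)

random.seed(0)
x_true = 0.3      # hypothetical true position of the first ball, as a fraction from the left
n_tosses = 1000
# Each subsequent toss lands left of the original ball with probability x_true.
n_left = sum(random.random() < x_true for _ in range(n_tosses))
estimate = posterior_mean(n_left, n_tosses)
```

With enough tosses the estimate settles near the true position, as long as the table really is unbiased.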
This works fine if your initial model of the situation you face is accurate; in this case, if the table is unbiased and each ball toss is random. But it just doesn't apply to situations where what you ought to be doing is throwing out your model! For instance, let's say the first ball obviously lands far to the left side of the table, and yet every subsequent toss lands to the left of that spot. That doesn't mean you should decide a position that's clearly on the far left of the table is really on the far right! No, you should abandon your assumption that the table is unbiased. But you can't use Bayesian updating to do that -- at least not with your original updating scheme -- since that assumption was used to set up the updating scheme in the first place.
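To see the failure numerically, feed the same updating rule the tilted-table data (a toy calculation of my own, not from the post): if every one of 100 follow-up balls rolls left of the original, the scheme concludes the original ball sits almost at the far right edge, because the unbiased table is baked into the model rather than being a parameter of it.

```python
def posterior_mean(n_left, n_total):
    # Same rule as before: Beta posterior mean under a uniform prior.
    return (n_left + 1) / (n_total + 2)

# A tilted table sends all 100 follow-up balls to the left of the original,
# even though we can see the original sitting near the far left.
estimate = posterior_mean(100, 100)
# The updating scheme reports the ball is nearly at the far right
# (estimate = 101/102, about 0.99) -- no amount of further data within
# this scheme can instead revise the "unbiased table" assumption.
```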
So, within a well-understood context, Bayesian updating is a good way to determine the values of variable parameters of the context. But it's not useful when you are trying to figure out just what your context is -- and that, contra Caplan, is what the entrepreneur is always trying to do.