The following example describes the issue at hand fairly well.

Suppose I ask you to show me a coin from your pocket and then ask you to flip it once. If it comes up heads, are you now willing to stake \$3 for a chance to win \$4 if at least 500 of the next 1,000 flips are heads? Or do you still think the coin is fair and expect this bet, on average, would lose you \$1?

This illustrates the difference between an uninformed prior and a strong prior. If you began with a very weak prior– that is, prior to flipping the coin you believed the coin's bias was equally likely to be anywhere from always heads to always tails– then you might take the bet. If, on the other hand, your experience is that you do not frequently wind up with heavily biased coins in your own pocket, then you might not think much of that first coin flip.
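Under the fair-coin belief, the \$1 average loss can be checked directly. A minimal Python sketch, using only the stake and payout figures from the example above (winning returns the \$3 stake plus \$1; losing forfeits the stake):

```python
from math import comb

# Probability that a fair coin gives at least 500 heads in 1,000 flips.
n = 1000
p_win = sum(comb(n, k) for k in range(500, n + 1)) / 2**n

# Stake $3 to win $4: net +$1 on a win, -$3 on a loss.
expected_value = p_win * 1 + (1 - p_win) * (-3)

print(round(p_win, 4))           # ≈ 0.5126
print(round(expected_value, 2))  # ≈ -0.95: roughly the $1 average loss
```

The win probability is just over one half (the symmetric tail plus half the mass at exactly 500), which is nowhere near the 3-in-7 break-even odds the bet requires.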

I think Gelman’s point is that however useful you may find it to employ an uninformative prior to communicate science, that does not mean your personal prior is necessarily uninformed. Therefore you might not make personal decisions based on a posterior derived from an uninformed prior– or, presumably, any prior much different from your own.
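The gap between an uninformed prior and a strong personal one can be made concrete with a conjugate Beta-Binomial sketch. The numbers here are illustrative: the Beta(100, 100) prior is my own stand-in for a strong belief that pocket coins are roughly fair, not anything from the original posts.

```python
# Beta(alpha, beta) prior on the coin's heads probability,
# updated on observed flips via the standard conjugate rule.
def posterior_mean(alpha, beta, heads, tails):
    return (alpha + heads) / (alpha + beta + heads + tails)

# One observed head moves the uniform (weak) prior a long way...
weak = posterior_mean(1, 1, heads=1, tails=0)      # 2/3 ≈ 0.667

# ...but barely dents a strong prior concentrated near fairness.
strong = posterior_mean(100, 100, heads=1, tails=0)  # 101/201 ≈ 0.502
```

The same single data point yields very different posteriors, which is exactly why a decision based on someone else's uninformed-prior posterior may not match your own.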

I am of two minds about the implications for reporting. My instinct is that biases should be made clear, and so perhaps personal priors should be used in place of uninformed ones. On the other hand, a strong prior may greatly reduce the power of the study. If I believe that 999 of 1,000 coins yield tails 99 times out of 100, then a single flip of heads will do little to convince me that the coin is not biased toward tails. Is it really useful for me to report that I believe the coin is almost certainly biased toward tails? Is that science or opinion? It seems, therefore, that the likelihood– rather than the posterior– is the important scientific result of the study. In this case, the likelihood is identical to the posterior derived from the uniform prior, so there is no choice to make between the two.
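To see how little that single head accomplishes against the strong prior, here is the two-hypothesis Bayes update spelled out. One assumption not stated in the text: the remaining 1-in-1,000 coin is taken to be fair.

```python
# Two-hypothesis Bayes update for the tails-biased-coin belief above.
prior_biased = 0.999     # 999 of 1,000 coins are tails-heavy
p_heads_biased = 0.01    # biased coins land heads 1 time in 100
p_heads_fair = 0.5       # assumption: the remaining coin is fair

# Observe a single head and apply Bayes' rule.
numerator = prior_biased * p_heads_biased
evidence = numerator + (1 - prior_biased) * p_heads_fair
posterior_biased = numerator / evidence

print(round(posterior_biased, 3))  # ≈ 0.952: one head barely dents the prior
```

Even though a head is fifty times likelier under the fair-coin hypothesis, the prior is so lopsided that the posterior still assigns about 95% to bias.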

To take an example from my own research, consider the audit of the April 14, 2013 Venezuelan election. There, a very extensive audit– 53 percent of more than 39,000 voting machines– turned up zero discrepancies between the numbers on the machines and paper ballots counted by hand. What conclusions may be drawn from this *result*? If you have uninformed priors, this seems overwhelmingly to suggest that the election was free of any meaningful audit-detectable fraud. If you believe strongly that the election results were fraudulent in a manner detectable by the audit, then it seems more sensible that the audit *itself* was a fraud. That is, your conclusion respecting the election result depends on your priors regarding possible fraud in the audit as well as the election. Scientifically, the important conclusion is that the audit *results* were not consistent with detectable fraud.^{1}
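The way the conclusion hinges on one's prior about the audit itself can be sketched with a toy Bayes calculation. All numbers here are hypothetical, chosen only to show the structure of the argument; none are drawn from the Venezuelan data.

```python
# Posterior belief in audit-detectable election fraud after observing a
# zero-discrepancy audit, as a function of two priors (both hypothetical).
def p_fraud_given_clean_audit(prior_fraud, p_audit_rigged):
    # Likelihood of a clean-looking audit under each scenario:
    #   honest audit + detectable fraud -> essentially impossible (0.0)
    #   rigged audit                    -> certain to look clean (1.0)
    #   no detectable fraud             -> certain to look clean (1.0)
    p_clean_if_fraud = p_audit_rigged * 1.0 + (1 - p_audit_rigged) * 0.0
    numerator = prior_fraud * p_clean_if_fraud
    return numerator / (numerator + (1 - prior_fraud) * 1.0)

# A reader who trusts the audit is fully persuaded by the clean result...
print(p_fraud_given_clean_audit(prior_fraud=0.5, p_audit_rigged=0.0))  # 0.0

# ...while one certain the audit was rigged learns nothing from it.
print(p_fraud_given_clean_audit(prior_fraud=0.5, p_audit_rigged=1.0))  # 0.5
```

The likelihood of the audit result is the same for both readers; only their priors about the audit differ, and with them the posterior on the election.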

Coming back to the original posts, then, it seems to me that Mayo missed Gelman’s point that priors do matter. I think Gelman is suggesting that relatively uninformed priors are reasonable as the basis of scientific reporting; in going beyond a study, the reader must construct their own posteriors. To whatever extent possible, apply Bayesian inference with your own priors rather than allowing someone else to insert theirs.

**1** To the point that a full audit of all voting machines would change nobody’s conclusions respecting the *election* results.^{↩}
