> In this particular battle between physicists and economists, I’m taking the economists’ side. https://t.co/fFHGB4owld
>
> — Andrew Gelman (@StatModeling) December 19, 2020

I've been thinking about a particular game of chance. Here is how the game is described in the article to which Gelman links:

> Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon, the expected utility is positive, so one might assume that taking the bet is a no-brainer.
>
> Yet in real life, people routinely decline the bet. Paradoxes like these are often used to highlight irrationality or human bias in decision making. But to Peters, it’s simply because people understand it’s a bad deal.

I do not agree. My problem with this “paradox” is that the game (as described) is not entirely clear. What is missing from this description?
- First, it starts the player with a \$100 bankroll. From where did this \$100 appear? Does the player buy in for \$100? Or do they buy in for only \$20?
- Second, the description reads “every time” the player flips heads, but how many times must the player flip? Is the player free to cash out at any time, or does the game never end? If the latter, then of course it is a bad deal: the player never has a chance to cash out winnings!
- Third, what happens when the game is over? Can the player buy in to play again?
Let us investigate some possible rules and evaluate the “deal.”
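Before examining the rules, it helps to see why the game feels paradoxical at all. Each flip multiplies the bankroll by 1.5 or 0.6 with equal probability, so the *average* multiplier per flip is 0.5 × 1.5 + 0.5 × 0.6 = 1.05, yet the *typical* path (half heads, half tails) multiplies the bankroll by (1.5 × 0.6)^(1/2) ≈ 0.949 per flip. A minimal simulation sketch makes this concrete; the player count, flip count, and random seed below are my own choices for illustration, not part of the article's description:

```python
import random
import statistics

def play(flips, bankroll=100.0, rng=random):
    """One player's game: multiply the bankroll by 1.5 on heads, 0.6 on tails."""
    for _ in range(flips):
        bankroll *= 1.5 if rng.random() < 0.5 else 0.6
    return bankroll

# Simulate many independent players, each flipping 100 times.
rng = random.Random(1)
finals = [play(flips=100, rng=rng) for _ in range(10_000)]

# The mean is pulled up by a handful of enormous lucky runs;
# the median shows what a typical player actually ends up with.
print(f"mean final bankroll:   {statistics.mean(finals):,.2f}")
print(f"median final bankroll: {statistics.median(finals):.6f}")
```

Run it and the mean across players lands well above the $100 buy-in while the median is a tiny fraction of a cent: the positive "expected utility" is real, but it is carried by a vanishingly small set of players, which is exactly why the rules about cashing out matter so much.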