User:IssaRice/Self-sampling assumption with large reference class in sleeping beauty


"Unlike SIA, SSA is dependent on the choice of reference class. If the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the sleeping beauty problem, is ≈ 1/3, similar to SIA." [1]

I was confused about this point for a long time, and think I've finally figured it out, so here is my current understanding:

Let's say there are $N$ observers in the world who don't take part in the sleeping beauty problem. (They exist prior to, and independently of, the experiment, so there are the same $N$ observers in each of the heads and tails worlds.)

The probability of being the beauty in the heads world is then $\frac{1}{2} \cdot \frac{1}{N+1}$ (the factor $\frac{1}{2}$ is the prior probability of heads, and under SSA you are equally likely to be any of the $N+1$ observers in that world), the probability of being the first beauty in the tails world is $\frac{1}{2} \cdot \frac{1}{N+2}$, and the probability of being the second beauty in the tails world is again $\frac{1}{2} \cdot \frac{1}{N+2}$.

Now, when beauty finds out she is in the sleeping beauty experiment, we must renormalize these probabilities. We must have $c\left(\frac{1}{2(N+1)} + \frac{1}{2(N+2)} + \frac{1}{2(N+2)}\right) = 1$, where $c$ is the normalizing constant. The sum in parentheses is $\frac{(N+2) + 2(N+1)}{2(N+1)(N+2)} = \frac{3N+4}{2(N+1)(N+2)}$, so solving for $c$, we get $c = \frac{2(N+1)(N+2)}{3N+4}$.

The probability of heads is thus $c \cdot \frac{1}{2(N+1)} = \frac{N+2}{3N+4}$, which tends to 1/3 as $N$ goes to infinity.
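To double-check the algebra, here is a small numerical sketch (my own, not from the original page; the function name and the use of exact rational arithmetic are just for illustration) that renormalizes the three SSA probabilities directly and compares the result against the closed form $\frac{N+2}{3N+4}$:

```python
from fractions import Fraction

def ssa_prob_heads(N):
    """SSA probability of heads once Beauty learns she is in the experiment,
    with N outside observers included in the reference class."""
    # Unnormalized probabilities of being each of the three Beauty-observers.
    p_heads_beauty = Fraction(1, 2) * Fraction(1, N + 1)  # the beauty, heads world
    p_tails_first  = Fraction(1, 2) * Fraction(1, N + 2)  # first awakening, tails world
    p_tails_second = Fraction(1, 2) * Fraction(1, N + 2)  # second awakening, tails world
    total = p_heads_beauty + p_tails_first + p_tails_second
    return p_heads_beauty / total  # renormalize on "I am one of the Beauties"

for N in [0, 1, 10, 1000, 10**6]:
    assert ssa_prob_heads(N) == Fraction(N + 2, 3 * N + 4)
    print(N, float(ssa_prob_heads(N)))
```

At $N = 0$ this reproduces the usual SSA (halfer) answer of 1/2, and it approaches 1/3 as $N$ grows.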


Alternatively, you can simplify the algebra by assuming $N$ is large and making the approximations $\frac{1}{N+1} \approx \frac{1}{N}$ and $\frac{1}{N+2} \approx \frac{1}{N}$.
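Spelling that out (my own working, following the same steps as above): each of the three unnormalized probabilities becomes approximately $\frac{1}{2N}$, so the normalization is trivial and

$$P(\text{heads}) \approx \frac{\frac{1}{2N}}{\frac{1}{2N} + \frac{1}{2N} + \frac{1}{2N}} = \frac{1}{3}.$$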

We can do the same calculation using odds. We start with 1:1 odds for heads:tails since it's a fair coinflip. Then we update based on the likelihood ratio $\frac{1}{N+1} : \frac{2}{N+2}$, which is the fraction of observers in the reference class in each world who are Beauties. So we end up with $(N+2) : 2(N+1)$ as the final odds, which means a probability of heads of $\frac{N+2}{3N+4} \to \frac{1}{3}$ as $N \to \infty$.
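Written out as an odds-form Bayes update (my own restatement of the same calculation), where "Beauty" denotes the evidence that I am one of the observers taking part in the experiment:

$$\frac{P(\text{heads} \mid \text{Beauty})}{P(\text{tails} \mid \text{Beauty})} = \frac{P(\text{heads})}{P(\text{tails})} \cdot \frac{P(\text{Beauty} \mid \text{heads})}{P(\text{Beauty} \mid \text{tails})} = \frac{1}{1} \cdot \frac{1/(N+1)}{2/(N+2)} = \frac{N+2}{2(N+1)},$$

which gives $P(\text{heads} \mid \text{Beauty}) = \frac{N+2}{3N+4}$ as before.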