User:IssaRice/Aumann's agreement theorem



Information partitions

Join and meet

Common knowledge

Statement of theorem

Hal Finney's example

Let Alice and Bob be two agents (agent 1 and agent 2, respectively). Each rolls a die and knows their own roll. In addition to this, each knows whether the other rolled something in the range 1–3 or in the range 4–6. As an example, suppose Alice rolls a 2 and Bob rolls a 3. Then Alice knows that the outcome is one of (2,1), (2,2), or (2,3), and Bob knows that the outcome is one of (1,3), (2,3), or (3,3).

The outcome grid below marks the states each agent considers possible in this example (rows are Alice's roll, columns are Bob's roll; A = Alice's information set at (2,3), B = Bob's):

     1    2    3    4    5    6
1              B
2    A    A    A,B
3              B
4
5
6
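
A minimal sketch of this setup in Python, assuming a uniform prior over the 36 outcomes (the function names here are mine, for illustration):

    # Sample space: all 36 outcomes (Alice's roll, Bob's roll).
    OMEGA = [(a, b) for a in range(1, 7) for b in range(1, 7)]

    def half(x):
        # Which half of the range a roll is in: 1-3 -> 0, 4-6 -> 1.
        return (x - 1) // 3

    def alice_info(state):
        # Alice knows her own roll exactly, and only which half Bob's roll is in.
        a, b = state
        return frozenset(w for w in OMEGA if w[0] == a and half(w[1]) == half(b))

    def bob_info(state):
        # Bob knows his own roll exactly, and only which half Alice's roll is in.
        a, b = state
        return frozenset(w for w in OMEGA if w[1] == b and half(w[0]) == half(a))

    print(sorted(alice_info((2, 3))))  # [(2, 1), (2, 2), (2, 3)]
    print(sorted(bob_info((2, 3))))    # [(1, 3), (2, 3), (3, 3)]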

One of the assumptions in the agreement theorem is that E, the event that agent 1's posterior for A is q_1 and agent 2's posterior for A is q_2 (defined precisely below), is common knowledge. This seems like a pretty strange requirement, since it seems like the posterior probability of A can never change no matter what else the agents condition on in addition to E. For example, what if we bring in agent 3 and make the posteriors common knowledge again?

What if we take the setup above and say that agent 1 knows agent 2's posterior q_2?

In the form of the theorem above, we can change A to be any subset of Ω, and q_1 and q_2 to be any numbers in [0, 1]. We can also set the state of the world ω to be any element of Ω. The agreement theorem says that as we vary these parameters, if we ever find that E is common knowledge at ω, then we must have q_1 = q_2.

Define E = {ω′ ∈ Ω : P(A | P_1(ω′)) = q_1 and P(A | P_2(ω′)) = q_2}, where P_i(ω′) is the cell of agent i's information partition that contains ω′.
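
Continuing the sketch above, the posteriors and the common-knowledge test can be computed directly; a cell of the meet is found by flood fill, closing a state under reachability through either agent's information sets:

    from fractions import Fraction

    def posterior(A, info_set):
        # P(A | info_set) under the uniform prior.
        return Fraction(len(A & info_set), len(info_set))

    def meet_cell(state):
        # The cell of the meet of the two partitions that contains `state`.
        cell, frontier = set(), {state}
        while frontier:
            w = frontier.pop()
            cell.add(w)
            for info in (alice_info(w), bob_info(w)):
                frontier |= info - cell
        return frozenset(cell)

    def is_common_knowledge(E, state):
        # Aumann's criterion: E is common knowledge at `state` iff E contains
        # the entire meet cell containing `state`.
        return meet_cell(state) <= E

    print(sorted(meet_cell((2, 3))))  # the quadrant {1,2,3} x {1,2,3}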

Some example parameter settings, with example events A that produce the stated posteriors:

- A = {1,2,3} × {1,2,3} (both rolls land in 1–3), ω = (2,3), q_1 = 1, q_2 = 1. Given these parameters, E contains the meet cell {1,2,3} × {1,2,3}, so E is common knowledge. This satisfies the requirement of the agreement theorem, and indeed 1 = 1.

- A = {(x,x) : x = 1, ..., 6} (the two rolls are equal), ω = (2,3), q_1 = 1/3, q_2 = 1/3. Given these parameters, E again contains the meet cell, so E is common knowledge. This satisfies the requirement of the agreement theorem, and indeed 1/3 = 1/3.

- A = {(2,3)}, ω = (2,3), q_1 = 1/3, q_2 = 1/3. Given these parameters, E = {(2,3)}, which is not a superset of the meet cell {1,2,3} × {1,2,3}, so E is not common knowledge. Nonetheless 1/3 = 1/3. (Is this a case of mutual knowledge that is not common knowledge?)
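
These cases can be checked mechanically, continuing the Python sketch above. The events A below are the illustrative choices from the list, not necessarily the ones originally intended:

    def E_event(A, q1, q2):
        # E = the event that agent 1's posterior for A is q1 and agent 2's is q2.
        return frozenset(w for w in OMEGA
                         if posterior(A, alice_info(w)) == q1
                         and posterior(A, bob_info(w)) == q2)

    omega = (2, 3)
    cases = [
        (frozenset((a, b) for a in (1, 2, 3) for b in (1, 2, 3)),
         Fraction(1), Fraction(1)),                              # first case
        (frozenset((x, x) for x in range(1, 7)),
         Fraction(1, 3), Fraction(1, 3)),                        # second case
        (frozenset([(2, 3)]), Fraction(1, 3), Fraction(1, 3)),   # third case
    ]
    for A, q1, q2 in cases:
        assert posterior(A, alice_info(omega)) == q1
        assert posterior(A, bob_info(omega)) == q2
        print(is_common_knowledge(E_event(A, q1, q2), omega))
    # prints: True, True, False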

Agent 1 knows he rolled a 2 and that agent 2 rolled something between 1 and 3. Now consider that agent 1 is additionally told that agent 2 did not roll a 1. Agent 1's posterior probability of the event (e.g. A = {(2,3)} from the last example) is now 1/2. How does this affect the agreement theorem? It seems like agent 1's information partition changes...
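
A quick check of this refinement, continuing the sketch above (taking A = {(2,3)} for concreteness; the "rolls are equal" event from the second case also gives 1/2):

    # Agent 1 is told that agent 2 did not roll a 1, so the states he
    # considers possible shrink from {(2,1), (2,2), (2,3)} to {(2,2), (2,3)}.
    refined = frozenset(w for w in alice_info((2, 3)) if w[1] != 1)
    print(sorted(refined))                          # [(2, 2), (2, 3)]
    print(posterior(frozenset([(2, 3)]), refined))  # 1/2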

Aumann's coin flip example
