User:IssaRice/Aumann's agreement theorem

From Machinelearning


Information partitions

Actually, can't the information states overlap? What if, when Bob rolls a 1, Alice is told "1 or 3", and when Bob rolls a 2, Alice is told "2 or 3" (with the rest of the situation remaining the same)? Then it seems like Alice's information sets include {1,3} and {2,3}, which overlap.

Similarly, in Geanakoplos's example on p. 261, the agent is conveniently told whether the number is even or odd, giving a partition. But what if the agent is told whether the number is in 1–7 or in 2–8? It seems like this information-partition setup restricts the examples we can formalize, and the examples people consider seem to be conveniently formalizable.

In this case, if the true number happens to be in 2–7, then the agent is told both "1–7" and "2–8", so he can deduce it is in 2–7. Similarly, if he is told only "1–7", then he can tell the number is 1, for if it weren't 1, he would also have been told "2–8". So the information partition ends up being { {1}, {2,…,7}, {8} }. So a rational agent can turn the revelation into a partition by deducing how the clue was communicated. A general version of this is probably some basic theorem in game theory or a related field, but I am highly ignorant.

Actually, the above operation seems to "refine" the "partition" until it becomes an actual partition: we take the coarsest partition of the state space in which every original information set is a union of cells.
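This refinement operation can be sketched directly: group states by exactly which clues apply to them. (A sketch only; the function and variable names are mine, not from any standard library.)

```python
def induced_partition(states, clues):
    """Group states by the exact set of clues that apply to them.

    `clues` is a list of sets of states.  Two states land in the same
    cell exactly when every clue contains both or neither of them,
    which is what a rational agent can deduce from knowing how the
    clues are announced.
    """
    cells = {}
    for s in states:
        signature = frozenset(i for i, clue in enumerate(clues) if s in clue)
        cells.setdefault(signature, set()).add(s)
    return sorted(map(sorted, cells.values()))

# The 1-8 example: the agent hears "1-7" when the number is in 1..7
# and "2-8" when it is in 2..8; the overlap 2..7 becomes its own cell.
print(induced_partition(range(1, 9), [set(range(1, 8)), set(range(2, 9))]))
# → [[1], [2, 3, 4, 5, 6, 7], [8]]
```

This matches the deduced partition { {1}, {2,…,7}, {8} } above.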

Join and meet

Common knowledge

Statement of theorem

Hal Finney's example

Let Alice and Bob be two agents. Each rolls a die, and knows what they rolled. In addition to this, each knows whether the other rolled something in the range 1–3 versus 4–6. As an example, suppose Alice rolls a 2 and Bob rolls a 3. Then Alice knows that the outcome is one of (2,1), (2,2), or (2,3), and Bob knows that the outcome is one of (1,3), (2,3), or (3,3).

Given the above description, what are the possible states of knowledge of Alice?

In the possible worlds notation, if we let P_A be Alice's information partition and P_B be Bob's information partition, we would write:

P_A = { {a} × {1,2,3} : a ∈ {1,…,6} } ∪ { {a} × {4,5,6} : a ∈ {1,…,6} }
P_B = { {1,2,3} × {b} : b ∈ {1,…,6} } ∪ { {4,5,6} × {b} : b ∈ {1,…,6} }

where a state of the world ω = (a, b) records Alice's roll a and Bob's roll b, and Ω = {1,…,6} × {1,…,6}.
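The two agents' information cells can be sketched by brute force (the helper names here are my own):

```python
from itertools import product

# States of the world: (Alice's roll, Bob's roll).
OMEGA = list(product(range(1, 7), range(1, 7)))

def alice_cell(omega):
    """Alice knows her own roll and whether Bob's roll is in 1-3 or 4-6."""
    a, b = omega
    return {(a, x) for x in ((1, 2, 3) if b <= 3 else (4, 5, 6))}

def bob_cell(omega):
    """Bob knows his own roll and whether Alice's roll is in 1-3 or 4-6."""
    a, b = omega
    return {(x, b) for x in ((1, 2, 3) if a <= 3 else (4, 5, 6))}

print(sorted(alice_cell((2, 3))))  # → [(2, 1), (2, 2), (2, 3)]
print(sorted(bob_cell((2, 3))))    # → [(1, 3), (2, 3), (3, 3)]
```

This reproduces the cells described in Hal Finney's example: at (2,3), Alice considers (2,1), (2,2), (2,3) possible, and Bob considers (1,3), (2,3), (3,3) possible.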

Now let E ⊆ Ω be an arbitrary event, and let q_A, q_B ∈ [0,1] be constants. We now define the event

A(E, q_A, q_B) = {ω ∈ Ω : P(E | P_A(ω)) = q_A and P(E | P_B(ω)) = q_B},

where P_A(ω) denotes the cell of P_A containing ω.

One of the assumptions in the agreement theorem is that A(E, q_A, q_B) is common knowledge. This seems like a pretty strange requirement, since it seems like the posterior probability of E can never change no matter what else the agents condition on in addition to A(E, q_A, q_B). For example, what if we bring in agent 3 and make the posteriors common knowledge again?

What if we take and say that agent 1 knows ?

In the form of A(E, q_A, q_B) above, we can change E to be any subset of Ω and q_A, q_B to be any numbers in [0,1]. We can also set the state of the world to be any ω ∈ Ω. The agreement theorem says that as we vary these parameters, if we ever find that A(E, q_A, q_B) is common knowledge at ω, then we must have q_A = q_B.

Define M to be the cell of the meet of P_A and P_B containing the state (2,3); here M = {1,2,3} × {1,2,3}.

| ω | q_A | q_B | Explanation |
|---|-----|-----|-------------|
| (2, 3) | 1 | 1 | Given these parameters, A(E, q_A, q_B) ⊇ M, so the posteriors are common knowledge. This satisfies the requirement of the agreement theorem, and indeed 1 = 1. |
| (2, 3) | 1/3 | 1/3 | Given these parameters, A(E, q_A, q_B) ⊇ M, so the posteriors are common knowledge. This satisfies the requirement of the agreement theorem, and indeed 1/3 = 1/3. |
| (2, 3) | 1/3 | 1/3 | Given these parameters, A(E, q_A, q_B) is not a superset of M, so the posteriors are not common knowledge. Nonetheless 1/3 = 1/3. (Is this a case of mutual knowledge that is not common knowledge?) |
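For the last row, assuming the event in question is E = {(2,3)} (my assumption; it is consistent with both posteriors being 1/3 at that state), the numbers can be checked directly:

```python
from fractions import Fraction

E = {(2, 3)}                          # assumed event, for illustration
alice = {(2, b) for b in (1, 2, 3)}   # Alice's cell at (2,3)
bob = {(a, 3) for a in (1, 2, 3)}     # Bob's cell at (2,3)
qa = Fraction(len(E & alice), len(alice))
qb = Fraction(len(E & bob), len(bob))
# The event "both posteriors are 1/3" is the intersection of the two
# cells, namely {(2,3)}, which does not contain the meet cell
# M = {1,2,3} x {1,2,3}: the posteriors are not common knowledge.
M = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3)}
print(qa, qb, {(2, 3)} >= M)  # → 1/3 1/3 False
```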

Alice knows she rolled a 2 and that Bob rolled something between 1 and 3. Now suppose Alice is additionally told that Bob did not roll a 1. Her posterior probability of the event E = {(2,3)} becomes 1/2. How does this affect the agreement theorem? It seems like Alice's information partition changes; in particular, the set of Bob's possible rolls {1,2,3} gets divided into {1} and {2,3}, so Alice's cell {2} × {1,2,3} splits into {2} × {1} and {2} × {2,3}. Bob's information partition remains the same, so now P_A((2,3)) = {2} × {2,3} and P_B((2,3)) = {1,2,3} × {3}. Now for the event E = {(2,3)}, we have q_A = 1/2 and q_B = 1/3, and the set A(E, 1/2, 1/3) is not a superset of M, so the agreement theorem doesn't apply. And indeed, that's good, because 1/2 ≠ 1/3.
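The refined scenario's posteriors can be computed the same way (again assuming the event is E = {(2,3)}, my assumption):

```python
from fractions import Fraction

E = {(2, 3)}                        # assumed event, for illustration
alice_refined = {(2, 2), (2, 3)}    # Alice's cell after learning "Bob did not roll 1"
bob = {(a, 3) for a in (1, 2, 3)}   # Bob's cell is unchanged
qa = Fraction(len(E & alice_refined), len(alice_refined))
qb = Fraction(len(E & bob), len(bob))
print(qa, qb)  # → 1/2 1/3
```

The posteriors now differ, which is consistent with the theorem not applying once common knowledge fails.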

Aumann's coin flip example


References