User:IssaRice/Aumann's agreement theorem: Difference between revisions

From Machinelearning
Revision as of 23:39, 24 August 2018


==Information partitions==

==Join and meet==

==Common knowledge==

==Statement of theorem==

==Hal Finney's example==

<math>\begin{array}{c|rrrrrr}
 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
1 & 2 & 3 & 4 & 5 & 6 & 7 \\
2 & 3 & 4 & 5 & 6 & 7 & 8 \\
3 & 4 & 5 & 6 & 7 & 8 & 9 \\
4 & 5 & 6 & 7 & 8 & 9 & 10 \\
5 & 6 & 7 & 8 & 9 & 10 & 11 \\
6 & 7 & 8 & 9 & 10 & 11 & 12
\end{array}</math>

<math>E = \{\omega \in \Omega : \Pr(A \mid I(\omega)) = q_1 \text{ and } \Pr(A \mid J(\omega)) = q_2\}</math>
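As a concrete sketch, <math>E</math> can be computed by brute-force enumeration. This assumes (since the text does not spell them out) the partitions suggested by the dice example below: each agent learns their own roll exactly and only whether the other agent's roll is in {1,2,3} or {4,5,6}, with a uniform prior over ordered rolls.

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered pairs of fair six-sided die rolls (uniform prior).
OMEGA = list(product(range(1, 7), repeat=2))

# Assumed partitions: each agent knows their own roll and only whether
# the other agent's roll is low ({1,2,3}) or high ({4,5,6}).
LOW, HIGH = {1, 2, 3}, {4, 5, 6}
band = lambda r: LOW if r in LOW else HIGH

def I(w):  # agent 1's information cell containing w
    x, y = w
    return {(x, b) for b in band(y)}

def J(w):  # agent 2's information cell containing w
    x, y = w
    return {(a, y) for a in band(x)}

def posterior(A, cell):
    # Uniform prior, so Pr(A | cell) = |A ∩ cell| / |cell|.
    return Fraction(len(A & cell), len(cell))

# The event A = {X = 4}, where X((x, y)) = x + y.
A = {w for w in OMEGA if sum(w) == 4}

q1 = q2 = Fraction(1, 3)
E = {w for w in OMEGA
     if posterior(A, I(w)) == q1 and posterior(A, J(w)) == q2}
print(sorted(E))  # the nine states in {1,2,3} × {1,2,3}
```

With these assumed partitions, <math>E</math> comes out to the nine states where both rolls are low.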

One of the assumptions in the agreement theorem is that <math>E</math> is common knowledge. This seems like a strange requirement, since apparently the posterior probability of <math>A</math> can never change, no matter what else the agents condition on in addition to <math>E</math>. For example, what if we bring in an agent 3 and make the posteriors common knowledge again?

What if we take <math>E = \{\omega \in \Omega : \Pr(A \mid I(\omega)) = q_1\}</math> and say that agent 1 knows <math>E</math>?

In the form of <math>E</math> above, we can change <math>A</math> to be any subset of <math>\Omega</math> and <math>q_1, q_2</math> to be any numbers in <math>[0,1]</math>. We can also set the state of the world to be any <math>\omega \in \Omega</math>. The agreement theorem says that as we vary these parameters, if we ever find that <math>(I \wedge J)(\omega) \subseteq E</math>, then we must have <math>q_1 = q_2</math>.
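This common-knowledge condition can be checked mechanically: the meet cell <math>(I \wedge J)(\omega)</math> is the set of states reachable from <math>\omega</math> by chains of overlapping <math>I</math>-cells and <math>J</math>-cells. A sketch in Python, again assuming the partitions suggested by the dice example (each agent knows their own roll and only whether the other's roll is in {1,2,3} or {4,5,6}, uniform prior):

```python
from fractions import Fraction
from itertools import product

OMEGA = list(product(range(1, 7), repeat=2))

# Assumed partitions (not spelled out in the text).
LOW, HIGH = {1, 2, 3}, {4, 5, 6}
band = lambda r: LOW if r in LOW else HIGH
I = lambda w: {(w[0], b) for b in band(w[1])}
J = lambda w: {(a, w[1]) for a in band(w[0])}

def meet_cell(w):
    """Cell of w in the meet I ∧ J: all states reachable from w through
    chains of overlapping I-cells and J-cells."""
    reached, frontier = {w}, [w]
    while frontier:
        v = frontier.pop()
        for u in I(v) | J(v):
            if u not in reached:
                reached.add(u)
                frontier.append(u)
    return reached

def posterior(A, cell):
    return Fraction(len(A & cell), len(cell))

A = {w for w in OMEGA if sum(w) == 4}  # the event X = 4
q1 = q2 = Fraction(1, 3)
E = {w for w in OMEGA
     if posterior(A, I(w)) == q1 and posterior(A, J(w)) == q2}

w = (2, 3)
print(meet_cell(w) <= E)  # True: E is common knowledge at ω, and q1 == q2
```

Here `meet_cell((2, 3))` is the nine low-low states, which is contained in <math>E</math>, so the theorem's hypothesis is satisfied.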

Define <math>X((x,y)) = x + y</math>.

{| class="wikitable"
! <math>\omega</math> !! <math>A</math> !! <math>q_1</math> !! <math>q_2</math> !! Explanation
|-
| (2, 3) || <math>2 \leq X \leq 6</math> || 1 || 1 || Given these parameters, <math>E = (I \wedge J)(\omega)</math>, so <math>E</math> is common knowledge. This satisfies the requirement of the agreement theorem, and indeed <math>1 = 1</math>.
|-
| (2, 3) || <math>X = 4</math> || 1/3 || 1/3 || Given these parameters, <math>E = (I \wedge J)(\omega)</math>, so <math>E</math> is common knowledge. This satisfies the requirement of the agreement theorem, and indeed <math>1/3 = 1/3</math>.
|-
| (2, 3) || <math>X = 4</math> || 1/3 || 1/3 || Given these parameters, <math>E = \{2,3\} \times \{2,3\}</math>, which is not a superset of <math>(I \wedge J)(\omega)</math>, so <math>E</math> is not common knowledge. Nonetheless, <math>1/3 = 1/3</math>. (Is this a case of mutual knowledge that is not common knowledge?)
|}
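The posteriors in the first two rows can be double-checked by enumeration. This sketch assumes the partitions suggested by the text (each agent knows their own roll and only whether the other's roll is in {1,2,3} or {4,5,6}) and a uniform prior over ordered rolls:

```python
from fractions import Fraction
from itertools import product

OMEGA = list(product(range(1, 7), repeat=2))

# Assumed partitions (not spelled out in the text).
LOW, HIGH = {1, 2, 3}, {4, 5, 6}
band = lambda r: LOW if r in LOW else HIGH
I = lambda w: {(w[0], b) for b in band(w[1])}
J = lambda w: {(a, w[1]) for a in band(w[0])}

def posterior(A, cell):
    # Uniform prior: Pr(A | cell) = |A ∩ cell| / |cell|.
    return Fraction(len(A & cell), len(cell))

w = (2, 3)
A_row1 = {v for v in OMEGA if 2 <= sum(v) <= 6}  # the event 2 <= X <= 6
A_row2 = {v for v in OMEGA if sum(v) == 4}       # the event X = 4

print(posterior(A_row1, I(w)), posterior(A_row1, J(w)))  # 1 1
print(posterior(A_row2, I(w)), posterior(A_row2, J(w)))  # 1/3 1/3
```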

Agent 1 knows he rolled a 2 and agent 2 rolled something between 1 and 3. Now, consider that agent 1 is additionally told that agent 2 did ''not'' roll a 1. Now agent 1's posterior probability of the event <math>X = 4</math> is 1/2. How does this affect the agreement theorem? It seems like agent 1's information partition changes...
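The 1/2 here can be verified directly: the extra information shrinks agent 1's cell from <math>\{2\} \times \{1,2,3\}</math> to <math>\{2\} \times \{2,3\}</math>. A minimal sketch, assuming a uniform prior over ordered rolls:

```python
from fractions import Fraction

def posterior(A, cell):
    # Uniform prior: Pr(A | cell) = |A ∩ cell| / |cell|.
    return Fraction(len(A & cell), len(cell))

# The event X = 4 over all ordered pairs of rolls.
A = {(x, y) for x in range(1, 7) for y in range(1, 7) if x + y == 4}

before = {(2, 1), (2, 2), (2, 3)}          # "I rolled 2; other rolled 1-3"
after = {w for w in before if w[1] != 1}   # told: agent 2 did not roll a 1

print(posterior(A, before))  # 1/3
print(posterior(A, after))   # 1/2
```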

==Aumann's coin flip example==

==References==

* Tyrrell McAllister. [https://web.archive.org/web/20110725162431/http://dl.dropbox.com/u/34639481/Aumann_agreement_theorem.pdf "Aumann's agreement theorem"]. July 7, 2011.