User:IssaRice/Belief propagation and cognitive biases

  • "Several cognitive biases can be seen as confusion between probabilities and likelihoods, most centrally base-rate neglect." [1]
    • confusing p-values with Pr(null hypothesis | data) seems like another instance of this: a p-value conditions on the null hypothesis, not the other way around.
    • confidence interval vs credible interval also involves this same flipping across the conditional bar.
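    • A minimal worked example of the probability/likelihood gap behind base-rate neglect (a sketch with invented numbers, not taken from the quoted source):

```python
# Base-rate neglect: the likelihood Pr(positive test | disease) = 0.9 is not
# the posterior Pr(disease | positive test) once the base rate is low.
# All numbers below are invented for illustration.

p_disease = 0.01           # prior (base rate)
p_pos_given_disease = 0.9  # likelihood, Pr(+ | disease)
p_pos_given_healthy = 0.1  # false-positive rate, Pr(+ | no disease)

# Bayes' rule: Pr(disease | +) = Pr(+ | disease) * Pr(disease) / Pr(+)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(p_disease_given_pos)  # ~0.083, nowhere near the likelihood 0.9
```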
  • I think a polytree graph can illuminate the halo effect/horn effect.
  • Maybe https://en.wikipedia.org/wiki/Berkson%27s_paradox is relevant; the page even says "The effect is related to the explaining away phenomenon in Bayesian networks."
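    • A brute-force sketch of explaining away in a small collider network A → S ← B (the structure and numbers here are my own invention, not from the Wikipedia page):

```python
# Berkson's paradox / explaining away in the collider A -> S <- B, computed by
# enumerating the joint distribution. A and B are independent a priori; observing
# S makes A more plausible, and then also observing B "explains A away".
from itertools import product

p_a = 0.3  # Pr(A=1)
p_b = 0.3  # Pr(B=1)

def p_s_given(a, b):
    # S is likely if either cause is present (invented noisy-OR-ish numbers)
    return 0.9 if (a or b) else 0.05

# joint probability of (A=a, B=b, S=1)
joint = {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_s_given(a, b)
         for a, b in product([0, 1], repeat=2)}
p_s1 = sum(joint.values())

p_a_given_s1 = sum(v for (a, b), v in joint.items() if a) / p_s1
p_a_given_s1_b1 = joint[(1, 1)] / (joint[(1, 1)] + joint[(0, 1)])

print(p_a_given_s1)     # ~0.56: Pr(A=1 | S=1), up from the prior 0.3
print(p_a_given_s1_b1)  # 0.3:   Pr(A=1 | S=1, B=1), B explains the evidence away
```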
  • Fundamental attribution error? The simplified DAG would look like: situational influence → observed action ← personality. And the evidence feeds into the "observed action" node, which propagates upwards to the "situational influence" and "personality" nodes. I think the bias is that the "personality" node gets updated too much. Can belief propagation give insight into this?
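    • A toy calculation for that DAG (a sketch with invented conditional probabilities): with a common, strong situational cause, most of the correct update after observing the action goes to the "situational influence" node, so updating the "personality" node a lot would be a deviation from this.

```python
# Toy version of: situational influence -> observed action <- personality.
# Brute-force posteriors over both parent nodes after observing the action.
# All probabilities are invented for illustration.
from itertools import product

p_situation = 0.5    # Pr(strong situational pressure)
p_personality = 0.1  # Pr(disposition to act this way)

def p_action(s, p):
    # invented table for Pr(action | situation, personality)
    return {(0, 0): 0.02, (1, 0): 0.6, (0, 1): 0.5, (1, 1): 0.9}[(s, p)]

joint = {(s, p): (p_situation if s else 1 - p_situation)
                 * (p_personality if p else 1 - p_personality)
                 * p_action(s, p)
         for s, p in product([0, 1], repeat=2)}
p_act = sum(joint.values())

print(sum(v for (s, p), v in joint.items() if s) / p_act)  # Pr(situation | action) ~ 0.90
print(sum(v for (s, p), v in joint.items() if p) / p_act)  # Pr(personality | action) ~ 0.20
```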
  • This one might be too simple, but I think the idea of screening off can be visualized in a Bayesian network. I'm not sure where the belief propagation would come in, though... Related here are [2]/stereotyping.
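    • A minimal demonstration of screening off in a chain A → B → C (with my own invented numbers): once B is observed, A carries no further information about C.

```python
# Screening off: in the chain A -> B -> C, Pr(C | B, A) = Pr(C | B).
# All numbers are invented; the joint is enumerated by brute force.
from itertools import product

p_a = 0.4
p_b_given_a = {0: 0.2, 1: 0.7}
p_c_given_b = {0: 0.1, 1: 0.8}

joint = {(a, b, c): (p_a if a else 1 - p_a)
                    * (p_b_given_a[a] if b else 1 - p_b_given_a[a])
                    * (p_c_given_b[b] if c else 1 - p_c_given_b[b])
         for a, b, c in product([0, 1], repeat=3)}

def pr(event, given):
    num = sum(v for k, v in joint.items() if event(*k) and given(*k))
    den = sum(v for k, v in joint.items() if given(*k))
    return num / den

print(pr(lambda a, b, c: c == 1, lambda a, b, c: b == 1))             # Pr(C | B) = 0.8
print(pr(lambda a, b, c: c == 1, lambda a, b, c: b == 1 and a == 1))  # Pr(C | B, A) = 0.8
```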
  • Hindsight bias seems like an evidence node misfiring and causing updates in the graph? See also https://www.lesswrong.com/posts/TiDGXt3WrQwtCdDj3/do-we-believe-everything-we-re-told
  • Buckets error and flinching away from truth: I think you can formulate a probabilistic version of my comment using Bayes nets and belief propagation. (In that case, there still may or may not be causality involved; I think all you need are the independence relationships.)
    • [3] the sour grapes/tolerification seems pretty similar, but the steps go like this: (1) initially, one has stored X → Y (example: X = grapes unreachable, Y = grapes sour). (2) the world shows you X in a way that's undeniable (this is contrasted with the buckets error situation, where someone merely asserts/brings X to attention). (3) one does the modus ponens, obtaining Y. Here, Y is undesirable (the world would be better with sweeter grapes!), but even more undesirable is ¬Y (i.e. X ∧ ¬Y, where the grapes are both sweet and unreachable), and by (2), we cannot deny X. So we pick the best of the undesirable choices and stick with Y.
And why is ¬Y so undesirable? Because there is another implication, X ∧ ¬Y → Z, stored in your brain! And Z says "the world is intolerable". So to deny Z you must deny X ∧ ¬Y. This is still different from a buckets error, because the implication X ∧ ¬Y → Z is true.
I think a network for this situation looks like X → Y together with X → Z ← Y. So it's still a DAG, but there is now a loop. Or maybe X → Z ← Y alone is sufficient, i.e. the update on sourness only happens via the tolerability node.
There is something funny going on at the Z node, I think. Like it is failing to update, and sending the opposite message to Y or something. I'll need to work out the calculation to be sure.
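A brute-force version of that calculation, as a sketch under the assumption that X → Z ← Y alone is the network (X = grapes unreachable, Y = grapes sour, Z = world tolerable, all numbers invented): clamping X and also clamping Z to "tolerable" is what pushes the belief in sourness up, so in this version the update on Y really does arrive only via the Z node.

```python
# Sour grapes as a collider X -> Z <- Y: X = unreachable, Y = sour, Z = tolerable.
# Enumerate the joint, then condition on X = 1 (undeniable) and Z = 1 (tolerable).
# All probabilities are invented for illustration.
from itertools import product

p_x = 0.5  # prior that the grapes are unreachable
p_y = 0.2  # prior that the grapes are sour

def p_z_given(x, y):
    # the world is (felt to be) intolerable mainly when the grapes are both
    # unreachable and sweet, i.e. x = 1 and y = 0
    return 0.05 if (x == 1 and y == 0) else 0.95

# joint probability of (X=x, Y=y, Z=1)
joint = {(x, y): (p_x if x else 1 - p_x) * (p_y if y else 1 - p_y) * p_z_given(x, y)
         for x, y in product([0, 1], repeat=2)}

p_y_given_x1 = p_y  # X and Y are independent until Z is observed
p_y_given_x1_z1 = joint[(1, 1)] / (joint[(1, 1)] + joint[(1, 0)])

print(p_y_given_x1)     # 0.2: unreachability alone says nothing about sourness
print(p_y_given_x1_z1)  # ~0.83: also insisting the world is tolerable makes "sour" likely
```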

possibly related