User:IssaRice/Belief propagation and cognitive biases

From Machinelearning
Revision as of 06:27, 5 September 2018

* "Several cognitive biases can be seen as confusion between probabilities and likelihoods, most centrally base-rate neglect." [1]
** confusing p-values with Pr(null hypothesis | data) seems like another instance of this.
** confidence intervals vs credible intervals also involve the same flip across the conditioning bar.
* I think a polytree graph like <math>X \leftarrow Z \rightarrow Y</math> can illuminate the halo effect/horn effect
* Maybe [https://en.wikipedia.org/wiki/Berkson%27s_paradox Berkson's paradox]. The page even says "The effect is related to the explaining away phenomenon in Bayesian networks."
* Fundamental attribution error? The simplified DAG would look like: situational influence → observed action ← personality. The evidence feeds into the "observed action" node, and the update propagates upward to the "situational influence" and "personality" nodes. I think the bias is that the "personality" node gets updated too much. Can belief propagation give insight into this?
* This one might be too simple, but I think the idea of screening off can be visualized in a Bayesian network. Not sure where the belief propagation would come in, though... Related here are [2]/stereotyping.
* Hindsight bias seems like an evidence node misfiring and causing updates in the graph? See also https://www.lesswrong.com/posts/TiDGXt3WrQwtCdDj3/do-we-believe-everything-we-re-told
* Buckets error and flinching away from truth: I think you can formulate a probabilistic version of [https://www.greaterwrong.com/posts/EEv9JeuY5xfuDDSgF/flinching-away-from-truth-is-often-about-protecting-the/comment/D6WcJW4zpCT5WhG4T my comment] using bayes nets and belief prop. (in that case, there still may or may not be causality involved; i think all you need are the independence relationships.)
** [http://mindingourway.com/see-the-dark-world/] the sour grapes/tolerification seems pretty similar, but the steps go like this: (1) initially, one has <math>X \implies Y</math> stored (example: X=grapes unreachable, Y=grapes sour). (2) the world shows you <math>X</math> in a way that's undeniable (this is contrasted with the buckets error situation, where someone merely asserts/brings to attention <math>X</math>). (3) one does the modus ponens, obtaining <math>Y</math>. Here, <math>Y</math> is undesirable (the world would be better with sweeter grapes!), but even more undesirable is <math>X \wedge \neg Y</math> (i.e. <math>\neg(X\implies Y)</math>, where the grapes are both sweet ''and'' unreachable), and by (2), we cannot deny <math>X</math>. So we pick the best of the undesirable choices and stick with <math>Y</math>.
:: And why is <math>X \wedge \neg Y</math> so undesirable? Because there is ''another'' implication, <math>(X \wedge \neg Y) \implies Z</math>, stored in your brain! And Z says "the world is intolerable". So to deny Z you must deny <math>X \wedge \neg Y</math>. This is still different from a buckets error, because the implication <math>(X \wedge \neg Y) \implies Z</math> is true.
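The probability-vs-likelihood confusion in the base-rate neglect bullet can be made concrete with a toy disease-test calculation; this is just a sketch, and all numbers are made up for illustration:

```python
# Toy illustration of base-rate neglect: a test with a high
# true-positive rate Pr(+ | disease) can still yield a low posterior
# Pr(disease | +) when the prior (base rate) is small.
# All numbers here are hypothetical.

p_disease = 0.01             # prior Pr(disease)
p_pos_given_disease = 0.95   # likelihood Pr(+ | disease)
p_pos_given_healthy = 0.05   # false-positive rate Pr(+ | healthy)

# Bayes' rule: Pr(disease | +) = Pr(+ | disease) Pr(disease) / Pr(+)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # 0.161, far from the 0.95 likelihood
```

Reading the 0.95 likelihood as if it were the posterior is exactly the flip across the conditioning bar described above.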
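The explaining-away (collider) and screening-off (chain) structures mentioned in the list above can be checked numerically by brute-force enumeration over a three-node net; a sketch, with all CPT numbers invented for illustration:

```python
# Two patterns from the list above, computed by brute-force
# enumeration of a three-node joint distribution:
#   - explaining away in a collider  X -> Z <- Y  (Berkson's paradox)
#   - screening off in a chain       X -> Z -> Y
# All CPT numbers are hypothetical.

from itertools import product

def cond(joint, query, given):
    """Pr(query | given) from a joint(x, y, z) function over {0,1}^3."""
    num = den = 0.0
    for x, y, z in product([0, 1], repeat=3):
        assign = {"x": x, "y": y, "z": z}
        p = joint(x, y, z)
        if all(assign[k] == v for k, v in given.items()):
            den += p
            if all(assign[k] == v for k, v in query.items()):
                num += p
    return num / den

# Collider: X and Y are independent causes of Z.
def collider(x, y, z):
    px = 0.3 if x else 0.7
    py = 0.3 if y else 0.7
    pz1 = {(0, 0): 0.05, (1, 0): 0.8, (0, 1): 0.8, (1, 1): 0.95}[(x, y)]
    return px * py * (pz1 if z else 1 - pz1)

# Observing Z=1 raises belief in Y=1; additionally learning X=1
# "explains away" the evidence and lowers the belief in Y=1 again.
assert (cond(collider, {"y": 1}, {"z": 1, "x": 1})
        < cond(collider, {"y": 1}, {"z": 1}))

# Chain: X influences Y only through Z.
def chain(x, y, z):
    px = 0.4 if x else 0.6
    pz1 = 0.9 if x else 0.2          # Pr(Z=1 | X)
    py1 = 0.7 if z else 0.3          # Pr(Y=1 | Z)
    return px * (pz1 if z else 1 - pz1) * (py1 if y else 1 - py1)

# Once Z is observed, X carries no further information about Y:
# Z screens X off from Y.
p1 = cond(chain, {"y": 1}, {"z": 1})
p2 = cond(chain, {"y": 1}, {"z": 1, "x": 1})
assert abs(p1 - p2) < 1e-9  # both equal Pr(Y=1 | Z=1) = 0.7
```

Enumeration stands in here for belief propagation proper; on polytrees like these, belief propagation would compute the same marginals exactly.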
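The sour-grapes bookkeeping in the last bullet can also be checked mechanically: enumerating truth assignments with <math>X</math> forced true, <math>(X \wedge \neg Y) \implies Z</math> kept stored, and <math>Z</math> rejected leaves only worlds where <math>Y</math> holds. A small sketch:

```python
# Brute-force check of the sour-grapes step above: with the
# implication (X and not Y) => Z stored, X undeniably true, and
# Z ("the world is intolerable") rejected, the only consistent
# assignment makes Y ("grapes sour") true.

from itertools import product

worlds = []
for X, Y, Z in product([False, True], repeat=3):
    stored = (not (X and not Y)) or Z   # (X and not Y) => Z
    if X and stored and not Z:          # X undeniable, Z rejected
        worlds.append((X, Y, Z))

print(worlds)  # only the world with Y True survives
```

So "sticking with <math>Y</math>" is the unique way to honor the stored implication while denying <math>Z</math>.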

==possibly related==