User:IssaRice/Belief propagation and cognitive biases
- "Several cognitive biases can be seen as confusion between probabilities and likelihoods, most centrally base-rate neglect." [1]
- confusing p-values with Pr(null hypothesis | data) seems like another instance of this.
- confidence interval vs credible interval also does this flipping about the conditional bar.
* I think a polytree graph can illuminate the halo effect/horn effect (see the explaining-away sketch after this list).
* Maybe https://en.wikipedia.org/wiki/Berkson%27s_paradox is related. The page even says "The effect is related to the explaining away phenomenon in Bayesian networks."
* Fundamental attribution error? The simplified DAG would look like: situational influence → observed action ← personality. The evidence feeds into the "observed action" node, which propagates upward to the "situational influence" and "personality" nodes. I think the bias is that the "personality" node gets updated too much. Can belief propagation give insight into this? (A toy version appears after this list.)
* This one might be too simple, but I think the idea of screening off can be visualized in a Bayesian network (see the conditional-independence check after this list). Not sure where the belief propagation would come in, though... Related here are [2]/stereotyping.
* Hindsight bias seems like an evidence node misfiring and causing updates in the graph? See also https://www.lesswrong.com/posts/TiDGXt3WrQwtCdDj3/do-we-believe-everything-we-re-told
* Buckets error and flinching away from truth: I think you can formulate a probabilistic version of [https://www.greaterwrong.com/posts/EEv9JeuY5xfuDDSgF/flinching-away-from-truth-is-often-about-protecting-the/comment/D6WcJW4zpCT5WhG4T my comment] using Bayes nets and belief propagation. (In that case, there still may or may not be causality involved; I think all you need are the independence relationships.) One possible formulation is sketched after this list.
** [http://mindingourway.com/see-the-dark-world/] The sour grapes/tolerification seems pretty similar, but the steps go like this: (1) initially, one has <math>X \implies Y</math> stored. (2) The world shows you <math>X</math> in a way that's undeniable (this is contrasted with the buckets error situation, where someone merely asserts/brings to attention <math>X</math>). (3) One does the modus ponens, obtaining <math>Y</math>. Here, <math>Y</math> is undesirable, but even more undesirable is <math>X \wedge \neg Y</math>, and by (2), we cannot deny <math>X</math>. So we pick the best of the bads and stick with <math>Y</math>. (The decision step is written out after this list.)
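==toy sketches==

The following are toy illustrations of some of the items above; all variable names and numbers are made up, and each sketch is just one possible formalization, not a definitive model.

A worked example of the probability/likelihood confusion behind base-rate neglect (the p-value item is the same flip): the likelihood Pr(positive | disease) can be high while the posterior Pr(disease | positive) stays low, because the base rate matters. Assuming a hypothetical diagnostic test:

<syntaxhighlight lang="python">
# Base-rate neglect: Pr(D | +) can be far from Pr(+ | D).
# All numbers are made up for illustration.
base_rate = 0.001        # Pr(disease)
sensitivity = 0.99       # Pr(+ | disease): the likelihood
false_positive = 0.05    # Pr(+ | no disease)

# Bayes' theorem: Pr(D | +) = Pr(+ | D) Pr(D) / Pr(+)
pr_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / pr_positive

print(f"Pr(+ | disease) = {sensitivity}")    # 0.99
print(f"Pr(disease | +) = {posterior:.4f}")  # ~0.0194
</syntaxhighlight>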
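For the halo effect/horn effect and Berkson's paradox items, a minimal collider network <math>A \to C \leftarrow B</math> (hypothetical traits <math>A</math>, <math>B</math> feeding an overall impression <math>C</math>) exhibits explaining away: the parents are independent a priori, but become dependent once the common effect is observed. A brute-force enumeration:

<syntaxhighlight lang="python">
from itertools import product

# Collider A -> C <- B: A, B are independent coin flips, C = A or B.
# Toy model; "explaining away" = A and B become dependent given C.
def joint(a, b, c):
    p = 0.5 * 0.5                       # Pr(A=a) * Pr(B=b)
    return p if c == (a or b) else 0.0  # C is deterministically A or B

def prob(query, evidence):
    states = list(product([0, 1], repeat=3))
    den = sum(joint(*s) for s in states if evidence(s))
    num = sum(joint(*s) for s in states if evidence(s) and query(s))
    return num / den

# Pr(A=1 | C=1) > Pr(A=1 | C=1, B=1): learning B "explains away" A.
print(prob(lambda s: s[0] == 1, lambda s: s[2] == 1))                # 0.667
print(prob(lambda s: s[0] == 1, lambda s: s[2] == 1 and s[1] == 1))  # 0.5
</syntaxhighlight>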
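For the fundamental attribution error item, a toy version of situational influence → observed action ← personality, with made-up conditional probabilities. The Bayesian update raises both parents; the bias would then be an update rule that shifts more posterior mass onto "personality" than belief propagation licenses:

<syntaxhighlight lang="python">
from itertools import product

# Toy DAG: situation S -> action A <- personality P (made-up numbers).
pr_s = 0.5   # Pr(S=1): strong situational pressure
pr_p = 0.1   # Pr(P=1): dispositionally rude person

# Pr(A=1 | S, P): a rude action is likely under pressure or rudeness.
pr_a = {(0, 0): 0.05, (0, 1): 0.6, (1, 0): 0.5, (1, 1): 0.9}

def joint(s, p, a):
    pa = pr_a[(s, p)] if a == 1 else 1 - pr_a[(s, p)]
    return (pr_s if s else 1 - pr_s) * (pr_p if p else 1 - pr_p) * pa

den = sum(joint(s, p, 1) for s, p in product([0, 1], repeat=2))
post_p = sum(joint(s, 1, 1) for s in [0, 1]) / den  # Pr(P=1 | A=1)
post_s = sum(joint(1, p, 1) for p in [0, 1]) / den  # Pr(S=1 | A=1)

print(f"Pr(P=1 | A=1) = {post_p:.3f}  (prior {pr_p})")  # ~0.233
print(f"Pr(S=1 | A=1) = {post_s:.3f}  (prior {pr_s})")  # ~0.837
</syntaxhighlight>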
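For the screening-off item: in a chain <math>A \to B \to C</math>, observing <math>B</math> makes <math>C</math> independent of <math>A</math>, i.e. <math>B</math> screens <math>A</math> off from <math>C</math>. A brute-force check with arbitrary made-up parameters:

<syntaxhighlight lang="python">
from itertools import product

# Chain A -> B -> C with made-up parameters; C depends only on B.
pr_a = 0.3
pr_b = {0: 0.2, 1: 0.8}  # Pr(B=1 | A)
pr_c = {0: 0.1, 1: 0.7}  # Pr(C=1 | B)

def joint(a, b, c):
    pa = pr_a if a else 1 - pr_a
    pb = pr_b[a] if b else 1 - pr_b[a]
    pc = pr_c[b] if c else 1 - pr_c[b]
    return pa * pb * pc

def pr(query, evidence):
    states = list(product([0, 1], repeat=3))
    den = sum(joint(*s) for s in states if evidence(s))
    return sum(joint(*s) for s in states if evidence(s) and query(s)) / den

# Unconditionally, A is informative about C...
print(pr(lambda s: s[2] == 1, lambda s: s[0] == 1))  # 0.58
print(pr(lambda s: s[2] == 1, lambda s: s[0] == 0))  # 0.22
# ...but given B, A adds nothing: B screens A off from C.
print(pr(lambda s: s[2] == 1, lambda s: s[1] == 1 and s[0] == 1))  # 0.7
print(pr(lambda s: s[2] == 1, lambda s: s[1] == 1 and s[0] == 0))  # 0.7
</syntaxhighlight>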
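For the buckets error item, one possible probabilistic formulation (my guess at a minimal sketch; the spelling-error/writer example is borrowed from the linked post, and all numbers are made up): the correct model keeps "I made a spelling error" and "I can become a writer" as separate nodes, while the bucketed model ties them into a single node, so that conceding the spelling error forces giving up on being a writer, hence the incentive to flinch:

<syntaxhighlight lang="python">
from itertools import product

# S = "I made a spelling error", W = "I can become a writer"
# (example from the linked post; all numbers are made up).
pr_s, pr_w = 0.9, 0.6

# Correct model: two separate nodes, independent for simplicity.
separate = {(s, w): (pr_s if s else 1 - pr_s) * (pr_w if w else 1 - pr_w)
            for s, w in product([0, 1], repeat=2)}

# Bucketed model: one node stands for both claims, so the states
# (S=1, W=1) and (S=0, W=0) get zero mass: conceding S denies W.
bucket = {sw: (0.0 if sw[0] == sw[1] else separate[sw]) for sw in separate}
z = sum(bucket.values())
bucket = {sw: v / z for sw, v in bucket.items()}

def pr_w_given_s1(model):
    return model[(1, 1)] / (model[(1, 0)] + model[(1, 1)])

print("separate nodes: Pr(W=1 | S=1) =", pr_w_given_s1(separate))  # 0.6
print("one bucket:     Pr(W=1 | S=1) =", pr_w_given_s1(bucket))    # 0.0
</syntaxhighlight>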
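Finally, the decision step in the sour grapes/tolerification item can be written as a preference ordering (my notation, not from the linked post): believing <math>Y</math> is bad, but denying <math>Y</math> while <math>X</math> is undeniable is worse, so

<math>U(X \wedge \neg Y) \;<\; U(X \wedge Y) \;<\; U(\neg X),</math>

and since step (2) removes <math>\neg X</math> from the menu, sticking with <math>Y</math> is the best of the bads.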