Covariance

The covariance between two random variables X and Y is defined as <math>\operatorname{cov}(X,Y) = \mathrm E\big((X - \mathrm E(X))(Y - \mathrm E(Y))\big)</math>.
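A minimal numerical sketch of the definition (the data and the NumPy calls here are illustrative additions, not part of the original page): estimate <math>\mathrm E\big((X - \mathrm E(X))(Y - \mathrm E(Y))\big)</math> by the corresponding sample average.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative data: y is built to co-vary positively with x.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)

# Covariance straight from the definition, estimated by a sample average
# of (X - E(X)) * (Y - E(Y)).
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# NumPy's built-in agrees when told to normalize by N (bias=True)
# instead of the default N - 1.
assert np.isclose(cov_xy, np.cov(x, y, bias=True)[0, 1])
print(cov_xy)  # roughly 2, the slope used to construct y
</syntaxhighlight>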

Questions/things to explain

  • A lot of explanations of covariance say things like "if the covariance is high, then the two variables vary together, so that when one is higher than average the other is as well". That sounds more like conditional expectation, though, specifically <math>\mathrm E(Y - \mathrm E(Y) \mid X - \mathrm E(X))</math>. Can we express covariance in terms of this conditional language? (See the identity sketched after this list.)
  • Explain the difference (in units, range of values) from correlation. Can we get a positive/negative/large/small covariance together with a negative/positive/small/large correlation? (See the numerical sketch after this list.)
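One way to make the first connection precise (a sketch, using only the tower property of conditional expectation, and conditioning on <math>X</math>, which carries the same information as <math>X - \mathrm E(X)</math>):

:<math>\operatorname{cov}(X,Y) = \mathrm E\big((X - \mathrm E(X))(Y - \mathrm E(Y))\big) = \mathrm E\Big((X - \mathrm E(X)) \cdot \mathrm E\big(Y - \mathrm E(Y) \mid X\big)\Big),</math>

since <math>X - \mathrm E(X)</math> can be pulled out of the inner conditional expectation. On this reading, the covariance is large and positive when <math>\mathrm E(Y - \mathrm E(Y) \mid X)</math> tends to be positive exactly when <math>X - \mathrm E(X)</math> is positive.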
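A small numerical sketch of the units/range point (again with made-up data, not from the original page): rescaling <math>X</math>, say from metres to centimetres, multiplies the covariance by the same factor but leaves the correlation unchanged.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # think of x as measured in metres
y = -3 * x + rng.normal(size=1000)   # negatively related to x

def cov(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))

def corr(a, b):
    return cov(a, b) / (a.std() * b.std())

print(cov(x, y), corr(x, y))              # large negative covariance, correlation near -1
print(cov(100 * x, y), corr(100 * x, y))  # covariance 100x larger in magnitude, correlation unchanged
</syntaxhighlight>

Since correlation is covariance divided by the two (positive) standard deviations, the two quantities always share a sign; what can differ is their magnitude, because covariance carries the units of <math>X</math> times the units of <math>Y</math>, while correlation is dimensionless and confined to <math>[-1, 1]</math>.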