Disappearance of sample space
In probability theory, the "orthodox approach" (Kolmogorov's measure-theoretic approach) defines events, probability measures, random variables, etc., in terms of a sample space (often denoted $\Omega$). However, after a certain point, the sample space "disappears" or fades into the background.
At a certain point in most probability courses, the sample space is rarely mentioned anymore and we work directly with random variables. But you should keep in mind that the sample space is really there, lurking in the background.[1]
Warning! We defined random variables to be mappings from a sample space $\Omega$ to $\mathbb{R}$ but we did not mention the sample space in any of the distributions above. As I mentioned earlier, the sample space often "disappears" but it is really there in the background. Let's construct a sample space explicitly for a Bernoulli random variable. Let $\Omega = [0,1]$ and define $\mathbb{P}$ to satisfy $\mathbb{P}([a,b]) = b - a$ for $0 \le a \le b \le 1$. Fix $p \in [0,1]$ and define

$$X(\omega) = \begin{cases} 1 & \omega \le p \\ 0 & \omega > p. \end{cases}$$

Then $\mathbb{P}(X = 1) = \mathbb{P}(\omega \le p) = p$ and $\mathbb{P}(X = 0) = \mathbb{P}(\omega > p) = 1 - p$. Thus, $X \sim \operatorname{Bernoulli}(p)$. We could do this for all the distributions defined above. In practice, we think of a random variable like a random number but formally it is a mapping defined on some sample space.[1]
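To see the construction in action, here is a minimal simulation sketch in Python (the function name is illustrative, not from the quoted text): draw $\omega$ uniformly from $\Omega = [0,1]$, so that $\mathbb{P}([a,b]) = b - a$, and apply the map $X(\omega) = \mathbf{1}\{\omega \le p\}$.

```python
import random

def sample_bernoulli_explicitly(p, n_draws=100_000, seed=0):
    """Estimate P(X = 1) for X built on the explicit sample space [0, 1].

    omega is drawn uniformly from [0, 1] (so P([a, b]) = b - a), and
    X(omega) = 1 if omega <= p else 0, exactly as in the construction above.
    """
    rng = random.Random(seed)
    draws = [1 if rng.random() <= p else 0 for _ in range(n_draws)]
    return sum(draws) / n_draws

# The empirical frequency of X = 1 should be close to p.
print(sample_bernoulli_explicitly(0.3))  # roughly 0.3
```

The pseudo-random generator plays the role of the sample point $\omega$; once $X$ is defined, everything downstream depends only on its distribution, which is exactly why the sample space fades from view.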
Now that we understand probability triples well, we discuss some additional essential ingredients of probability theory. Throughout Section 3 (and, indeed, throughout most of this text and most of probability theory in general), we shall assume that there is an underlying probability triple $(\Omega, \mathcal{F}, \mathbf{P})$ with respect to which all further probability objects are defined. This assumption shall be so universal that we will often not even mention it.[2]
Klenke's book states a related existence result (Theorem 1.104): for any distribution function $F$, there exists a real random variable $X$ with $F_X = F$.
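The construction behind this theorem is the quantile (generalized-inverse) transform: take $\Omega = (0,1)$ with Lebesgue measure and set $X(\omega) = \inf\{x : F(x) \ge \omega\}$; then $F_X = F$. Below is a minimal numerical sketch of this idea, assuming $F$ is a distribution function and approximating the infimum by bisection on a bounded interval (both the function names and the bracketing interval are assumptions of the sketch):

```python
import math
import random

def quantile(F, omega, lo=-50.0, hi=50.0, tol=1e-9):
    """Approximate X(omega) = inf{x : F(x) >= omega} by bisection.

    Assumes F is nondecreasing and right-continuous, and that the
    infimum lies inside [lo, hi] (a simplifying assumption of this sketch).
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) >= omega:
            hi = mid  # the infimum is at or below mid
        else:
            lo = mid  # F is still below omega at mid
    return hi

# Example: the distribution function of an Exponential(1) random variable.
F_exp = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0

rng = random.Random(0)
xs = [quantile(F_exp, rng.random()) for _ in range(10_000)]
# Sanity check: the empirical P(X <= 1) should be close to F(1) = 1 - 1/e ≈ 0.632.
print(sum(x <= 1.0 for x in xs) / len(xs))
```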
See also Terence Tao's two blog posts. [1] [2]
References
1. I think this is from Wasserman's All of Statistics.
2. This appears to be from Rosenthal's A First Look at Rigorous Probability Theory.
External links
- https://math.stackexchange.com/questions/23006/the-role-of-the-hidden-probability-space-on-which-random-variables-are-defined?rq=1
- https://math.stackexchange.com/questions/1612012/how-should-i-understand-the-probability-space-omega-mathcalf-p-what-d
- https://math.stackexchange.com/questions/2531810/why-does-probability-theory-insist-on-sample-spaces?rq=1
- https://math.stackexchange.com/questions/712734/domain-of-a-random-variable-sample-space-or-probability-space?rq=1
- https://math.stackexchange.com/questions/18198/what-are-the-sample-spaces-when-talking-about-continuous-random-variables/18199#18199