Disappearance of sample space

In probability theory, the "orthodox approach" defines events, probability measures, random variables, and so on in terms of a sample space (often denoted <math>\Omega</math>). However, after a certain point, the sample space "disappears" or fades into the background.
<blockquote>At a certain point in most probability courses, the sample space is rarely mentioned anymore and we work directly with random variables. But you should keep in mind that the sample space is really there, lurking in the background.<ref name="wasserman">I think this is from Wasserman's ''All of Statistics''.</ref></blockquote>
<blockquote>'''Warning!''' We defined random variables to be mappings from a sample space <math>\Omega</math> to <math>\mathbb R</math> but we did not mention the sample space in any of the distributions above. As I mentioned earlier, the sample space often "disappears" but it is really there in the background. Let's construct a sample space explicitly for a Bernoulli random variable. Let <math>\Omega = [0,1]</math> and define <math>\mathbb P</math> to satisfy <math>\mathbb P([a,b]) = b-a</math> for <math>0\leq a \leq b \leq 1</math>. Fix <math>p \in [0,1]</math> and define <math display="block">X(\omega) = \begin{cases}1 & \omega \leq p \\ 0 & \omega > p.\end{cases}</math> Then <math>\mathbb P(X=1) = \mathbb P(\omega \leq p) = \mathbb P([0,p]) = p</math> and <math>\mathbb P(X=0) = 1-p</math>. Thus, <math>X \sim \mathrm{Bernoulli}(p)</math>. We could do this for all the distributions defined above. In practice, we think of a random variable like a random number but formally it is a mapping defined on some sample space.<ref name="wasserman" /></blockquote>
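The construction in the quote is easy to make concrete in code. Below is a minimal Python sketch (not from the source text; the helper name <code>make_bernoulli</code> is illustrative): drawing <math>\omega</math> uniformly from <math>[0,1]</math> simulates the measure <math>\mathbb P([a,b]) = b-a</math>, and the empirical mean of <math>X(\omega)</math> over many draws should be close to <math>p</math>.

<syntaxhighlight lang="python">
import random

def make_bernoulli(p):
    """Return the random variable X : [0,1] -> {0,1} from the quote,
    defined on the sample space Omega = [0,1] with the uniform measure."""
    def X(omega):
        return 1 if omega <= p else 0
    return X

p = 0.3
X = make_bernoulli(p)

# random.random() draws omega uniformly from [0,1), which simulates
# P([a,b]) = b - a. Applying X to those draws yields Bernoulli(p) samples.
n = 100_000
mean = sum(X(random.random()) for _ in range(n)) / n
print(mean)  # should be close to p = 0.3
</syntaxhighlight>

Note that the randomness lives entirely in the draw of <math>\omega</math>; the map <math>X</math> itself is an ordinary deterministic function, which is exactly the sense in which a random variable is "a mapping defined on some sample space."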


==See also==

==References==
<references />