Bellman equation derivation
Bellman equation for <math>v_\pi</math>.

We want to show
:<math>v_\pi(s) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \bigl[ r + \gamma v_\pi(s') \bigr]</math>
for all states <math>s</math>.
The core idea of the proof is to use the law of total probability to go from marginal to conditional probabilities, and then invoke the Markov assumption.
The law of total probability states that if <math>B</math> is an event, and <math>C_1, \ldots, C_n</math> are events that partition the sample space, then <math>\Pr(B) = \sum_j \Pr(B \mid C_j) \Pr(C_j)</math>.
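For example, for one roll of a fair die, take <math>B</math> to be the event that the roll is even, <math>C_1 = \{1,2,3\}</math>, and <math>C_2 = \{4,5,6\}</math>; then
:<math>\Pr(B) = \Pr(B \mid C_1)\Pr(C_1) + \Pr(B \mid C_2)\Pr(C_2) = \tfrac{1}{3} \cdot \tfrac{1}{2} + \tfrac{2}{3} \cdot \tfrac{1}{2} = \tfrac{1}{2}.</math>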
For a fixed event <math>A</math> with non-zero probability, the mapping <math>B \mapsto \Pr(B \mid A)</math> is another valid probability measure. In other words, define <math>\Pr_A</math> by <math>\Pr_A(B) = \Pr(B \mid A)</math> for all events <math>B</math>. Now the law of total probability for <math>\Pr_A</math> states that <math>\Pr_A(B) = \sum_j \Pr_A(B \mid C_j) \Pr_A(C_j)</math>. We also have
:<math>\Pr_A(B \mid C_j) = \frac{\Pr_A(B \cap C_j)}{\Pr_A(C_j)} = \frac{\Pr(B \cap C_j \mid A)}{\Pr(C_j \mid A)} = \frac{\Pr(B \cap C_j \cap A)}{\Pr(C_j \cap A)} = \Pr(B \mid C_j \cap A)</math>
and <math>\Pr_A(C_j) = \Pr(C_j \mid A)</math>.
So the conditional law of total probability states that <math>\Pr(B \mid A) = \sum_j \Pr(B \mid C_j, A) \Pr(C_j \mid A)</math>.
Now we see how the law of total probability interacts with conditional expectation. Let <math>X</math> be a random variable. Then
:<math>\mathbb E[X \mid A] = \sum_x x \cdot \Pr(X = x \mid A) = \sum_x x \sum_j \Pr(X = x \mid C_j, A) \Pr(C_j \mid A)</math>
Here the event <math>X = x</math> is playing the role of <math>B</math> in the statement of the conditional law of total probability.
This is the basic trick of the proof; we keep conditioning on different things (actions, next states, rewards) and using the law of total probability.
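As a quick numerical sanity check of the conditional-expectation identity above, here is a minimal Python sketch; the joint distribution in it is made up purely for illustration, and exact fractions avoid floating-point noise.

<syntaxhighlight lang="python">
from fractions import Fraction

# Each outcome is (x, j, in_A): the value of X, the index of the partition
# cell C_j containing the outcome, and whether the event A occurs.
# The probabilities are made-up illustration values summing to 1.
outcomes = {
    (1, 0, True):  Fraction(1, 8),
    (2, 0, False): Fraction(1, 8),
    (2, 1, True):  Fraction(3, 8),
    (5, 1, True):  Fraction(1, 8),
    (5, 1, False): Fraction(2, 8),
}

def prob(pred):
    """Probability of the event {outcome : pred(outcome)}."""
    return sum(p for o, p in outcomes.items() if pred(o))

p_A = prob(lambda o: o[2])
xs = sorted({o[0] for o in outcomes})

# Left-hand side: E[X | A] computed directly from Pr(X = x | A).
lhs = sum(x * prob(lambda o, x=x: o[0] == x and o[2]) / p_A for x in xs)

# Right-hand side: E[X | A] via the conditional law of total probability,
# conditioning additionally on the partition {C_0, C_1}.
rhs = Fraction(0)
for j in (0, 1):
    p_CjA = prob(lambda o, j=j: o[1] == j and o[2])  # Pr(C_j and A)
    if p_CjA == 0:
        continue  # Pr(C_j | A) = 0, so the term contributes nothing
    for x in xs:
        p_xCjA = prob(lambda o, x=x, j=j: o[0] == x and o[1] == j and o[2])
        rhs += x * (p_xCjA / p_CjA) * (p_CjA / p_A)

assert lhs == rhs == Fraction(12, 5)  # both sides give E[X | A] = 12/5
</syntaxhighlight>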
By definition, <math>v_\pi(s) = \mathbb E_\pi[G_t \mid S_t = s]</math>. Now rewrite <math>G_t = R_{t+1} + \gamma G_{t+1}</math> and use the linearity of expectation to get <math>\mathbb E_\pi[R_{t+1} \mid S_t=s] + \gamma \mathbb E_\pi[G_{t+1} \mid S_t = s]</math>. From here, we can work separately with <math>\mathbb E_\pi[R_{t+1} \mid S_t=s]</math> and <math>\mathbb E_\pi[G_{t+1} \mid S_t = s]</math> for a while.
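Spelled out, this first step reads
:<math>v_\pi(s) = \mathbb E_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s] = \mathbb E_\pi[R_{t+1} \mid S_t = s] + \gamma \, \mathbb E_\pi[G_{t+1} \mid S_t = s].</math>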
Using the law of total probability while conditioning over actions, we have
:<math>\mathbb E_\pi[R_{t+1} \mid S_t=s] = \sum_r r \cdot \Pr(R_{t+1}=r \mid S_t=s) = \sum_r r \sum_a \Pr(R_{t+1}=r \mid A_t=a, S_t=s) \Pr(A_t=a \mid S_t=s)</math>
Using the convention that <math>\Pr(A_t=a \mid S_t=s) = \pi(a \mid s)</math>, this becomes
:<math>\mathbb E_\pi[R_{t+1} \mid S_t=s] = \sum_r r \sum_a \Pr(R_{t+1}=r \mid A_t=a, S_t=s) \, \pi(a \mid s)</math>
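Swapping the order of summation shows that this term is just the policy-weighted average of the per-action expected rewards:
:<math>\mathbb E_\pi[R_{t+1} \mid S_t=s] = \sum_a \pi(a \mid s) \sum_r r \cdot \Pr(R_{t+1}=r \mid A_t=a, S_t=s) = \sum_a \pi(a \mid s) \, \mathbb E[R_{t+1} \mid A_t=a, S_t=s].</math>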