
Lecture 10: Markov Decision Processes
Shen Shen
April 18, 2025
11am, Room 10-250
Intro to Machine Learning

Toddler demo, Russ Tedrake thesis, 2004
(Uses vanilla policy gradient (actor-critic))









Reinforcement Learning with Human Feedback
Outline
- Markov Decision Processes
  - Definition, terminologies, and policy
- Policy Evaluation
  - State value functions \(\mathrm{V}^{\pi}\)
  - Bellman recursions and Bellman equations
- Policy Optimization
  - Optimal policies \(\pi^*\)
  - Optimal action value functions: \(\mathrm{Q}^*\)
  - Value iteration
Markov Decision Processes
- Research area initiated in the 50s by Bellman, known under various names:
  - Stochastic optimal control (control theory)
  - Stochastic shortest path (operations research)
  - Sequential decision making under uncertainty (economics)
  - Reinforcement learning (artificial intelligence, machine learning)
- A rich variety of accessible and elegant theory, math, algorithms, and applications. But also, considerable variation in notations.
- We will use the most RL-flavored notations.
Running example: Mario in a grid-world
- 9 possible states \(s\)
- 4 possible actions \(a\): {Up ↑, Down ↓, Left ←, Right →}
- (state, action) results in a transition \(\mathrm{T}\) into a next state:
  - Normally, we get to the "intended" state;
    - e.g., in state (7), action "↑" gets to state (4).
  - If an action would take Mario out of the grid world, stay put;
    - e.g., in state (9), "→" gets back to state (9).
  - In state (6), action "↑" leads to two possibilities:
    - 20% chance to (2)
    - 80% chance to (3).

- (state, action) pairs give rewards:
  - in state 3, any action gives reward 1
  - in state 6, any action gives reward -10
  - any other (state, action) pair gives reward 0

- discount factor \(\gamma\): a scalar that reduces the "worth" of rewards, depending on when Mario receives them.
  - e.g., say this factor is 0.9. Then, for the \((3, \leftarrow)\) pair: a reward of 1 received at the start of the game is worth 1; received at the 2nd time step, it is discounted to 0.9; at the 3rd time step, it is further discounted to \((0.9)^2\); and so on.


Markov Decision Processes - Definition and terminologies
- \(\mathcal{S}\) : state space, contains all possible states \(s\).
- \(\mathcal{A}\) : action space, contains all possible actions \(a\).
- \(\mathrm{T}\left(s, a, s^{\prime}\right)\) : the probability of transitioning from state \(s\) to \(s^{\prime}\) when action \(a\) is taken.
  - e.g., in the Mario grid-world: \(\mathrm{T}\left(7, \uparrow, 4\right) = 1\), \(\mathrm{T}\left(9, \rightarrow, 9\right) = 1\), \(\mathrm{T}\left(6, \uparrow, 3\right) = 0.8\), \(\mathrm{T}\left(6, \uparrow, 2\right) = 0.2\).
- \(\mathrm{R}(s, a)\) : reward, takes in a (state, action) pair and returns a reward.
  - e.g., \(\mathrm{R}\left(3, \uparrow \right) = 1\), \(\mathrm{R}\left(6, \rightarrow \right) = -10\).
- \(\gamma \in [0,1]\): discount factor, a scalar.
- \(\pi{(s)}\) : policy, takes in a state and returns an action.

The goal of an MDP is to find a "good" policy.

In 6.390:
- \(\mathcal{S}\) and \(\mathcal{A}\) are small discrete sets, unless otherwise specified.
- \(s^{\prime}\) and \(a^{\prime}\) are shorthand for the next-timestep state and action.
- \(\mathrm{R}(s, a)\) is deterministic and bounded.
- \(\pi(s)\) is deterministic.
- \(a_t = \pi(s_t)\) and \(r_t = \mathrm{R}(s_t,a_t)\).
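To make the five-tuple concrete, here is a minimal Python sketch of the running example as plain data. The 3×3 grid numbering (1-2-3 on the top row, 7-8-9 on the bottom) and the helper names (`_intended_next`, etc.) are assumptions made for illustration, chosen to be consistent with the transition examples above; this is not course-provided code.

```python
# A sketch of the Mario grid-world MDP as plain Python data structures.
# Assumed grid layout (consistent with the slides' examples):
#   1 2 3
#   4 5 6
#   7 8 9
STATES = list(range(1, 10))
ACTIONS = ["up", "down", "left", "right"]
GAMMA = 0.9

def _intended_next(s, a):
    """Deterministic 'intended' next state; stay put if the move would leave the grid."""
    row, col = (s - 1) // 3, (s - 1) % 3
    if a == "up":
        row = max(row - 1, 0)
    elif a == "down":
        row = min(row + 1, 2)
    elif a == "left":
        col = max(col - 1, 0)
    elif a == "right":
        col = min(col + 1, 2)
    return 3 * row + col + 1

def T(s, a, s_next):
    """Transition probability T(s, a, s')."""
    if s == 6 and a == "up":                      # the one stochastic transition
        return {2: 0.2, 3: 0.8}.get(s_next, 0.0)
    return 1.0 if s_next == _intended_next(s, a) else 0.0

def R(s, a):
    """Reward R(s, a): +1 for any action in state 3, -10 in state 6, 0 otherwise."""
    if s == 3:
        return 1.0
    if s == 6:
        return -10.0
    return 0.0

# Sanity checks against the examples on the slides:
assert T(7, "up", 4) == 1.0 and T(9, "right", 9) == 1.0
assert T(6, "up", 3) == 0.8 and T(6, "up", 2) == 0.2
assert R(3, "up") == 1.0 and R(6, "right") == -10.0
```

Later sketches in these notes reuse the `STATES`, `ACTIONS`, `T`, `R`, and `GAMMA` names defined here.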
A trajectory (aka an experience, or a rollout) of horizon \(h\) is generated by the policy \(\pi(s)\), the transition \(\mathrm{T}\left(s, a, s^{\prime}\right)\), and the reward \(\mathrm{R}(s, a)\) unrolling over time:
\(\quad \tau=\left(s_0, a_0, r_0, s_1, a_1, r_1, \ldots s_{h-1}, a_{h-1}, r_{h-1}\right)\)
- the initial state \(s_0\) is given; all the subsequent actions, rewards, and states depend on \(\pi\);
- \(\operatorname{Pr}\left(s_t=s^{\prime} \mid s_{t-1}=s, a_{t-1}=a\right)=\mathrm{T}\left(s, a, s^{\prime}\right)\)
Starting in a given \(s_0\), how "good" is it to follow a policy \(\pi\) for \(h\) time steps?

One idea: add up the discounted rewards collected along the trajectory (\(h\) terms):
\(\mathrm{R}\left(s_0, \pi\left(s_0\right)\right)+\gamma \mathrm{R}\left(s_1, \pi\left(s_1\right)\right)+\gamma^2 \mathrm{R}\left(s_2, \pi\left(s_2\right)\right)+\cdots+\gamma^{h-1} \mathrm{R}\left(s_{h-1}, \pi\left(s_{h-1}\right)\right)\)

But in the Mario game this sum is random: e.g., if we start at \(s_0=6\) and use the policy \(\pi(s) =\uparrow, \forall s\), i.e., always up, the special transition out of state 6 makes the trajectory stochastic. So we take the expected sum of discounted rewards; in 390, this expectation is only w.r.t. the transition probabilities \(\mathrm{T}\left(s, a, s^{\prime}\right)\).
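One way to ground this expectation is to estimate it by sampling: roll out the policy many times from \(s_0\), add up each trajectory's discounted rewards, and average. The sketch below is an illustration only; it reuses the `STATES`, `T`, `R`, and `GAMMA` names assumed in the earlier grid-world sketch, and the function names are made up.

```python
import random

def sample_next_state(s, a):
    """Sample s' according to the transition probabilities T(s, a, .)."""
    probs = [T(s, a, s_next) for s_next in STATES]
    return random.choices(STATES, weights=probs, k=1)[0]

def rollout_return(s0, policy, h):
    """Sum of discounted rewards along one sampled h-step trajectory."""
    s, total = s0, 0.0
    for t in range(h):
        a = policy(s)
        total += (GAMMA ** t) * R(s, a)
        s = sample_next_state(s, a)
    return total

def monte_carlo_value(s0, policy, h, num_rollouts=10_000):
    """Average discounted return over many rollouts: an estimate of the expectation."""
    return sum(rollout_return(s0, policy, h) for _ in range(num_rollouts)) / num_rollouts

always_up = lambda s: "up"
print(monte_carlo_value(6, always_up, h=2))   # roughly -9.28 for the always-up policy
```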
Outline
- Markov Decision Processes
  - Definition, terminologies, and policy
- Policy Evaluation
  - State value functions \(\mathrm{V}^{\pi}\)
  - Bellman recursions and Bellman equations
- Policy Optimization
  - Optimal policies \(\pi^*\)
  - Optimal action value functions: \(\mathrm{Q}^*\)
  - Value iteration
Definition: For a given policy \(\pi(s),\) the state value functions
\(\mathrm{V}_h^\pi(s):=\mathbb{E}\left[\sum_{t=0}^{h-1} \gamma^t \mathrm{R}\left(s_t, \pi\left(s_t\right)\right) \mid s_0=s, \pi\right], \forall s, h\)
- value functions \(\mathrm{V}_h^\pi(s)\): the expected sum of discounted rewards, starting in state \(s\) and following policy \(\pi\) for \(h\) steps.
- horizon-0 values defined as 0.
- value is long-term, reward is short-term (one-time).
expanded form (\(h\) terms):
\(\mathrm{V}_h^\pi(s)=\mathbb{E}\left[\mathrm{R}\left(s_0, \pi\left(s_0\right)\right)+\gamma \mathrm{R}\left(s_1, \pi\left(s_1\right)\right)+\cdots+\gamma^{h-1} \mathrm{R}\left(s_{h-1}, \pi\left(s_{h-1}\right)\right) \mid s_0=s, \pi\right]\)
Example: evaluate the "\(\pi(s) = \uparrow\), for all \(s\)", i.e., the always-\(\uparrow\) policy, on the Mario grid-world (states, rewards, and the one special transition as above), with \(\gamma = 0.9\).
- horizon \(h = 0\): no step left, so \(\mathrm{V}_0^\pi(s) = 0\) for every state \(s\).
- horizon \(h = 1\): just receive the one-step rewards, \(\mathrm{V}_1^\pi(s) = \mathrm{R}(s, \uparrow)\).
- horizon \(h = 2\): two terms inside the expectation, the immediate reward plus one discounted future reward; e.g., \(\mathrm{V}_2^\pi(6) = \mathrm{R}(6, \uparrow) + 0.9\left[0.2\, \mathrm{R}(2, \uparrow) + 0.8\, \mathrm{R}(3, \uparrow)\right] = -10 + 0.72 = -9.28\).
- horizon \(h = 3\): three terms inside, and so on.
Bellman Recursion
\(\mathrm{V}_h^\pi(s)=\mathrm{R}(s, \pi(s))+\gamma \sum_{s^{\prime}} \mathrm{T}\left(s, \pi(s), s^{\prime}\right) \mathrm{V}_{h-1}^\pi\left(s^{\prime}\right)\)
- left-hand side: the horizon-\(h\) value in state \(s\), i.e., the expected sum of discounted rewards, starting in state \(s\) and following policy \(\pi\) for \(h\) steps.
- \(\mathrm{R}(s, \pi(s))\): the immediate reward for taking the policy-prescribed action \(\pi(s)\) in state \(s\).
- \(\mathrm{V}_{h-1}^\pi(s^{\prime})\): the \((h-1)\)-horizon future value at a next state \(s^{\prime}\); these future values are summed up, weighted by the probability of getting to that next state \(s^{\prime}\), and discounted by \(\gamma\).
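A minimal sketch of how this recursion turns into code, reusing the assumed `STATES`, `T`, `R`, and `GAMMA` names from the earlier grid-world sketch (the function name `evaluate_policy_finite` is illustrative):

```python
def evaluate_policy_finite(policy, h):
    """Compute V_h^pi(s) for all s by applying the Bellman recursion h times."""
    V = {s: 0.0 for s in STATES}                  # horizon-0 values are defined as 0
    for _ in range(h):
        # Each pass builds V_k from V_{k-1}: immediate reward + discounted expected future value.
        V = {
            s: R(s, policy(s))
               + GAMMA * sum(T(s, policy(s), s_next) * V[s_next] for s_next in STATES)
            for s in STATES
        }
    return V

V2 = evaluate_policy_finite(lambda s: "up", h=2)
print(V2[6])   # -9.28, matching the hand calculation above
```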








Bellman Equations
If the horizon \(h\) approaches infinity, the finite-horizon Bellman recursions become the infinite-horizon Bellman equations:
\(\mathrm{V}^\pi_{\infty}(s)=\mathrm{R}(s, \pi(s))+\gamma \sum_{s^{\prime}} \mathrm{T}\left(s, \pi(s), s^{\prime}\right) \mathrm{V}^\pi_{\infty}\left(s^{\prime}\right), \quad \forall s\)
- these are \(|\mathcal{S}|\) many linear equations, one equation for each state.
- typically \(\gamma <1\) in the MDP definition, motivated to make \(\mathrm{V}^{\pi}_{\infty}(s):=\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t \mathrm{R}\left(s_t, \pi\left(s_t\right)\right) \mid s_0=s, \pi\right]\) finite.
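Since these are \(|\mathcal{S}|\) linear equations in \(|\mathcal{S}|\) unknowns, they can be solved directly, e.g. with numpy, by rearranging \(\mathrm{V}^\pi_\infty = \mathrm{R}^\pi + \gamma \mathrm{T}^\pi \mathrm{V}^\pi_\infty\) into \((I - \gamma \mathrm{T}^\pi)\mathrm{V}^\pi_\infty = \mathrm{R}^\pi\). A sketch under the same assumed grid-world names as before:

```python
import numpy as np

def evaluate_policy_infinite(policy):
    """Solve the |S| linear Bellman equations for V_infinity^pi (assumes gamma < 1)."""
    n = len(STATES)
    # Transition matrix and reward vector induced by the policy, in STATES order.
    T_pi = np.array([[T(s, policy(s), s_next) for s_next in STATES] for s in STATES])
    R_pi = np.array([R(s, policy(s)) for s in STATES])
    V = np.linalg.solve(np.eye(n) - GAMMA * T_pi, R_pi)
    return dict(zip(STATES, V))

V_inf = evaluate_policy_infinite(lambda s: "up")
print(V_inf[3], V_inf[6])   # about 10.0 and -2.8 for the always-up policy
```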
Quick summary: policy evaluation
Recall: For a given policy \(\pi(s),\) the (state) value functions
\(\mathrm{V}_h^\pi(s):=\mathbb{E}\left[\sum_{t=0}^{h-1} \gamma^t \mathrm{R}\left(s_t, \pi\left(s_t\right)\right) \mid s_0=s, \pi\right], \forall s, h\)
Given the MDP and the policy, we can compute these values in two ways:
1. By summing \(h\) terms: expand the expectation directly, one discounted reward per time step.
2. By leveraging structure: apply the Bellman recursions (finite horizon) or solve the Bellman equations (infinite horizon).
Outline
- Markov Decision Processes
  - Definition, terminologies, and policy
- Policy Evaluation
  - State value functions \(\mathrm{V}^{\pi}\)
  - Bellman recursions and Bellman equations
- Policy Optimization
  - Optimal policies \(\pi^*\)
  - Optimal action value functions: \(\mathrm{Q}^*\)
  - Value iteration
Optimal policy \(\pi^*\)
Definition: for a given MDP and a fixed horizon \(h\) (possibly infinite), \(\pi^*\) is an optimal policy if \(\mathrm{V}_h^{\pi^*}({s}) = \mathrm{V}_h^{*}({s})\geqslant \mathrm{V}_h^\pi({s})\) for all \(s \in \mathcal{S}\) and for all possible policies \(\pi\).
- An MDP has a unique optimal value \(\mathrm{V}_h^{*}({s})\).
- The optimal policy \(\pi^*\) might not be unique (think, e.g., of a symmetric world).
- For finite \(h\), the optimal policy \(\pi^*_h\) depends on how many time steps are left.
- When \(h \rightarrow \infty\), time no longer matters, i.e., there exists a stationary \(\pi^*\).
- Under an optimal policy, the Bellman recursion holds too.

How to search for an optimal policy \(\pi^*\)?
- One idea: enumerate over all \(\pi\), do policy evaluation, compare the \(\mathrm{V}^\pi\) values, and get \(\mathrm{V}^{*}(s)\).
- But this is tedious, and even with \(\mathrm{V}^{*}(s)\) in hand, it is not super clear how to act.



Outline
- Markov Decision Processes
  - Definition, terminologies, and policy
- Policy Evaluation
  - State value functions \(\mathrm{V}^{\pi}\)
  - Bellman recursions and Bellman equations
- Policy Optimization
  - Optimal policies \(\pi^*\)
  - Optimal action value functions: \(\mathrm{Q}^*\)
  - Value iteration
Optimal state-action value functions \(\mathrm{Q}^*_h(s, a)\)
\(\mathrm{Q}^*_h(s, a)\): the expected sum of discounted rewards for
- starting in state \(s\),
- taking action \(a\) for one step,
- acting optimally thereafter for the remaining \((h-1)\) steps.

Recursively finding \(\mathrm{Q}^*_h(s, a)\)

Recall: \(\gamma = 0.9\), the states and the one special transition, the rewards \(\mathrm{R}(s,a)\), and that \(\mathrm{Q}^*_h(s, a)\) is the value for starting in state \(s\), taking action \(a\) for one step, and acting optimally thereafter for the remaining \((h-1)\) steps.

Let's consider \(\mathrm{Q}^*_2(3, \rightarrow)\):
- receive \(\mathrm{R}(3,\rightarrow)\);
- next state \(s' = 3\); act optimally for the remaining one time step, i.e., receive \(\max _{a^{\prime}} \mathrm{Q}^*_{1}\left(3, a^{\prime}\right)\);
- so \(\mathrm{Q}^*_2(3, \rightarrow) = \mathrm{R}(3,\rightarrow) + \gamma \max _{a^{\prime}} \mathrm{Q}^*_{1}\left(3, a^{\prime}\right) = 1 + .9 \times 1 = 1.9\).

Similarly, \(\mathrm{Q}^*_2(3, \uparrow)\):
- receive \(\mathrm{R}(3,\uparrow)\); next state \(s' = 3\);
- \(\mathrm{Q}^*_2(3, \uparrow) = \mathrm{R}(3,\uparrow) + \gamma \max _{a^{\prime}} \mathrm{Q}^*_{1}\left(3, a^{\prime}\right) = 1 + .9 \times 1 = 1.9\).

\(\mathrm{Q}^*_2(3, \leftarrow)\):
- receive \(\mathrm{R}(3,\leftarrow)\); next state \(s' = 2\);
- \(\mathrm{Q}^*_2(3, \leftarrow) = \mathrm{R}(3,\leftarrow) + \gamma \max _{a^{\prime}} \mathrm{Q}^*_{1}\left(2, a^{\prime}\right) = 1 + .9 \times 0 = 1\).

\(\mathrm{Q}^*_2(3, \downarrow)\):
- receive \(\mathrm{R}(3,\downarrow)\); next state \(s' = 6\);
- \(\mathrm{Q}^*_2(3, \downarrow) = \mathrm{R}(3,\downarrow) + \gamma \max _{a^{\prime}} \mathrm{Q}^*_{1}\left(6, a^{\prime}\right) = 1 + .9 \times (-10) = -8\).

\(\mathrm{Q}^*_2(6, \uparrow)\), where the transition is stochastic:
- receive \(\mathrm{R}(6,\uparrow)\);
- act optimally for one more time step, at the next state \(s^{\prime}\):
  - 20% chance, \(s'\) = 2: act optimally, receive \(\max _{a^{\prime}} \mathrm{Q}^*_{1}\left(2, a^{\prime}\right)\);
  - 80% chance, \(s'\) = 3: act optimally, receive \(\max _{a^{\prime}} \mathrm{Q}^*_{1}\left(3, a^{\prime}\right)\);
- so \(\mathrm{Q}^*_2(6, \uparrow) = \mathrm{R}(6,\uparrow) + \gamma\left[.2 \max _{a^{\prime}} \mathrm{Q}^*_{1}\left(2, a^{\prime}\right)+ .8\max _{a^{\prime}} \mathrm{Q}^*_{1}\left(3, a^{\prime}\right)\right] = -10 + .9\, [.2 \times 0+ .8 \times 1] = -9.28\).

In general:
\(\mathrm{Q}^*_h(s, a) = \mathrm{R}(s, a)+\gamma \sum_{s^{\prime}} \mathrm{T}\left(s, a, s^{\prime}\right) \max _{a^{\prime}} \mathrm{Q}^*_{h-1}\left(s^{\prime}, a^{\prime}\right)\)

So what's the optimal action in state 3, with horizon 2, given by \(\pi_2^*(3)\)? Either up or right (both attain the maximum value 1.9).
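The same recursion is easy to run in code. Below is a sketch under the same assumptions as the earlier grid-world sketch (reusing `STATES`, `ACTIONS`, `T`, `R`, `GAMMA`); it reproduces the horizon-2 values computed by hand above.

```python
def optimal_q_finite(h):
    """Compute Q*_h(s, a) for all (s, a) by unrolling the recursion h times."""
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}     # horizon-0 values are 0
    for _ in range(h):
        # Q*_k(s, a) = R(s, a) + gamma * sum_s' T(s, a, s') * max_a' Q*_{k-1}(s', a')
        Q = {
            (s, a): R(s, a)
                    + GAMMA * sum(T(s, a, s_next) * max(Q[(s_next, a_next)] for a_next in ACTIONS)
                                  for s_next in STATES)
            for s in STATES for a in ACTIONS
        }
    return Q

Q2 = optimal_q_finite(2)
print(Q2[(3, "right")], Q2[(3, "down")], Q2[(6, "up")])   # ~1.9, -8.0, -9.28 (up to rounding)
```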
Outline
- Markov Decision Processes
  - Definition, terminologies, and policy
- Policy Evaluation
  - State value functions \(\mathrm{V}^{\pi}\)
  - Bellman recursions and Bellman equations
- Policy Optimization
  - Optimal policies \(\pi^*\)
  - Optimal action value functions: \(\mathrm{Q}^*\)
  - Value iteration
Value Iteration
Given the recursion \(\mathrm{Q}^*_h(s, a) = \mathrm{R}(s, a)+\gamma \sum_{s^{\prime}} \mathrm{T}\left(s, a, s^{\prime}\right) \max _{a^{\prime}} \mathrm{Q}^*_{h-1}\left(s^{\prime}, a^{\prime}\right)\), we can have an infinite-horizon equation for \(\mathrm{Q}^*_{\infty}(s, a)\), and solve it iteratively:

- for \(s \in \mathcal{S}, a \in \mathcal{A}\) :
  - \(\mathrm{Q}_{\text {old }}(s, a)=0\)
- while True:
  - for \(s \in \mathcal{S}, a \in \mathcal{A}\) :
    - \(\mathrm{Q}_{\text {new }}(s, a) \leftarrow \mathrm{R}(s, a)+\gamma \sum_{s^{\prime}} \mathrm{T}\left(s, a, s^{\prime}\right) \max _{a^{\prime}} \mathrm{Q}_{\text {old }}\left(s^{\prime}, a^{\prime}\right)\)
  - if \(\max _{s, a}\left|\mathrm{Q}_{\text {old }}(s, a)-\mathrm{Q}_{\text {new }}(s, a)\right|<\epsilon:\)
    - return \(\mathrm{Q}_{\text {new }}\)
  - \(\mathrm{Q}_{\text {old }} \leftarrow \mathrm{Q}_{\text {new }}\)

(If we run the while-loop body exactly \(h\) times and break, the returned values are exactly \(\mathrm{Q}^*_h\).)
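A direct Python translation of the pseudocode above, as a sketch under the same assumed grid-world names as the earlier snippets (`value_iteration` and `epsilon` are illustrative names):

```python
def value_iteration(epsilon=1e-6):
    """Iterate the Q update until the largest change is below epsilon; approximates Q*_infinity."""
    Q_old = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    while True:
        Q_new = {
            (s, a): R(s, a)
                    + GAMMA * sum(T(s, a, s_next) * max(Q_old[(s_next, a_next)] for a_next in ACTIONS)
                                  for s_next in STATES)
            for s in STATES for a in ACTIONS
        }
        if max(abs(Q_old[sa] - Q_new[sa]) for sa in Q_new) < epsilon:
            return Q_new
        Q_old = Q_new

Q_star = value_iteration()
```

As the slide notes, capping the loop at exactly \(h\) iterations instead of checking \(\epsilon\) would return \(\mathrm{Q}^*_h\).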
\(\mathrm{V}\) values vs. \(\mathrm{Q}\) values
- \(\mathrm{V}\) is defined over the state space; \(\mathrm{Q}\) is defined over the (state, action) space.
- \(\mathrm{V}_h^*({s})\) can be derived from \(\mathrm{Q}^*_h(s,a)\), and vice versa:
  \(\mathrm{V}_{h}^*(s)=\max_{a}\left[\mathrm{Q}^*_{h}(s, a)\right]\)
- \(\mathrm{Q}^*\) is easier to read "optimal actions" from:
  \(\pi_{h}^*(s)=\arg\max_{a}\left[\mathrm{Q}^*_{h}(s, a)\right]\)
- We care more about \(\mathrm{V}^{\pi}\) and \(\mathrm{Q}^*\).
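Reading \(\mathrm{V}^*\) and a greedy optimal policy off a \(\mathrm{Q}^*\) table is then a one-liner each; a small sketch, assuming the `Q_star` dictionary and `ACTIONS` list from the previous sketches:

```python
def v_star(Q, s):
    """V*(s) = max over actions of Q*(s, a)."""
    return max(Q[(s, a)] for a in ACTIONS)

def pi_star(Q, s):
    """pi*(s) = an action attaining the max of Q*(s, a)."""
    return max(ACTIONS, key=lambda a: Q[(s, a)])

print(v_star(Q_star, 3), pi_star(Q_star, 3))
```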



Summary
- A Markov decision process (MDP) is a nice mathematical framework for making sequential decisions. It is the foundation of reinforcement learning.
- An MDP is defined by a five-tuple \((\mathcal{S}, \mathcal{A}, \mathrm{T}, \mathrm{R}, \gamma)\), and the goal is to find an optimal policy that leads to high expected cumulative discounted rewards.
- To evaluate how good a given policy \(\pi\) is, we can calculate \(\mathrm{V}^{\pi}(s)\) via
  - the summation-over-rewards definition, or
  - the Bellman recursions for a finite horizon, and the Bellman equations for an infinite horizon.
- To find an optimal policy, we can recursively find \(\mathrm{Q}^*(s,a)\) via the value iteration algorithm, and then act greedily w.r.t. the \(\mathrm{Q}^*\) values.
Thanks!
We'd love to hear your thoughts.