Introduction to
Reinforcement Learning

Part 1 (probably the only one)

Tabular Value-based RL

 

 

made by Pavel Temirchev

 

Deep RL

reading group

 

Background

  • Basic Machine Learning (Linear Algebra, Probability, Stochastic Optimization, Neural Networks)
     
  • Basic Scientific Python (numpy, matplotlib)
     
  • Desirable, but not mandatory: PyTorch

Course Plan

Lecture                                      Homework
Introduction, Tabular RL                     Value Iteration
Value-based RL: Q-learning                   Playing games with DQN
Policy-based RL: REINFORCE                   - // -
Exploitation vs. Exploration dilemma         Thoroughly read an article
Oral Exam                                    - // -
(optional) RL as Probabilistic Inference     - // -

Assessment criteria

In order to pass the course, students should:

 

  • Pass 2 out of 2 assignments
  • Pass the oral exam

Passing the exam:

  • (1 day before the exam) choose an article from a given list
  • Thoroughly describe the chosen article during the exam

Passing the assignments:

  • code up the mandatory formulas
  • analyze the results for different initializations / parameters / models

What problems does RL solve?

Assume we have:

  • Environment (with its state)
  • Agent
  • Agent's actions within the environment

We want an agent to act optimally in the environment.

Have we forgotten something?

The measure of optimality is the REWARD.

[Diagram: the agent-environment interaction loop. At each step the agent observes the state s_t and chooses an action a_t; the environment returns the reward r_t and the next state s_{t+1}.]
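A minimal sketch of this interaction loop in Python; the Environment class with its reset/step methods is a hypothetical stand-in for illustration, not any particular library's API:

import random

class Environment:
    """A toy environment: the state is an integer position, the goal is to reach 3."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action is -1 or +1; reward +1 when the goal state is reached
        self.state = max(0, self.state + action)
        reward = 1.0 if self.state == 3 else 0.0
        done = (self.state == 3)
        return self.state, reward, done

env = Environment()
s = env.reset()
for t in range(10):
    a = random.choice([-1, +1])      # the agent: purely random for now
    s_next, r, done = env.step(a)    # the environment returns r_t and s_{t+1}
    print(t, s, a, r, s_next)
    s = s_next
    if done:
        break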

Example 1: Robotics (baking pancakes)

Environment state: coordinates x, y, z of all joints

Agent's actions: voltage applied to all joints

Rewards:
  • Pancake flipped: +1
  • Pancake baked: +10
  • Pancake failed: -10

Example 2: Self-driving cars

Environment state:
  • images from the camera
  • readings from the sensors

Agent's actions: gas and brake applied

Rewards:
  • Destination point reached: +1
  • Traffic rule broken: -1
  • Accident: -10

Example 3: Chess

Environment state: coordinates x, y of all pieces

Agent's actions: a move of one piece

Rewards:
  • Win: +1
  • Lose: -1

Some formal definitions

s_t \in \mathcal{S}  -  the set of the environment states

a_t \in \mathcal{A}  -  the set of agent's actions

r_t \in \mathbb{R}  -  the reward (a scalar)

s_{t+1} \sim p(s_{t+1}|s_t, a_t)  -  transition probabilities

a_t \sim \pi(a_t|s_t)  -  agent's policy (behavior, strategy)

r_t = r(s_{t+1}, s_t, a_t)  -  reward function

Reminder: Tabular Definition of Functions

If the domain of function \(f\) is finite, then it can be written as a table:

For a function of one argument on \(\mathcal{X} = \{x_0, x_1, x_2, x_3, x_4\}\):

x       x_0      x_1      x_2      x_3      x_4
f(x)    f(x_0)   f(x_1)   f(x_2)   f(x_3)   f(x_4)

For a function of two arguments, \(f(x, y)\) with \(x \in \mathcal{X}\), \(y \in \mathcal{Y}\), the table is two-dimensional:

f       x_0           x_1
y_0     f(x_0, y_0)   f(x_1, y_0)
y_1     f(x_0, y_1)   f(x_1, y_1)
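The same idea in code: a function on a finite domain is just an array indexed by its arguments (a minimal numpy sketch; the particular values are made up for illustration):

import numpy as np

# f: X x Y -> R stored as a 2-D table, rows indexed by y, columns by x
f = np.array([[0.5, -1.0],    # f(x_0, y_0), f(x_1, y_0)
              [2.0,  0.0]])   # f(x_0, y_1), f(x_1, y_1)

print(f[0, 1])      # the value f(x_1, y_0): row y_0, column x_1

# A Q-function on finite S and A is stored the same way: Q[s, a]
Q = np.zeros((5, 3))   # 5 states, 3 actions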

Markov Decision Processes (finite-time):

Trajectory:

\tau = \{s_0, a_0, s_1, \dots, s_t, a_t, s_{t+1}, \dots, a_{T-1}, s_T \}

Trajectory distribution:

p(\tau) = \prod_{t=0}^T p(s_{t+1}|s_t, a_t) \pi(a_t|s_t)

The aim is to maximize the expected cumulative reward:

\mathbb{E}_{\tau \sim p(\tau)} \sum_{t=0}^T r_t \rightarrow \max_\pi

Markov property: new states depend only on the previous state and the action made (not on the history!).

Therefore it is enough to make decisions using only the current state (not the history!).
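A minimal sketch of what the objective means operationally: sample trajectories from \(p(\tau)\) by alternating \(\pi\) and \(p\), and average the cumulative rewards. The two-state MDP, its transition table and the uniform policy below are made-up illustrations, not anything from the slides:

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, T = 2, 2, 5
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s'] = p(s' | s, a)
              [[0.5, 0.5], [0.1, 0.9]]])
pi = np.full((n_states, n_actions), 0.5)     # uniform policy pi(a | s)
r = lambda s, a, s_next: float(s_next == 1)  # reward for landing in state 1

def sample_return():
    s, total = 0, 0.0
    for t in range(T):
        a = rng.choice(n_actions, p=pi[s])       # a_t ~ pi(a_t | s_t)
        s_next = rng.choice(n_states, p=P[s, a]) # s_{t+1} ~ p(s_{t+1} | s_t, a_t)
        total += r(s, a, s_next)
        s = s_next
    return total

estimate = np.mean([sample_return() for _ in range(10_000)])
print("E[sum r_t] ~", estimate)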

Markov Decision Processes (infinite-time):

What if the process is infinite, \( T \rightarrow +\infty \)?

Then

\mathbb{E}_{\tau \sim p(\tau)} \sum_{t=0}^\infty r_t

can be unbounded.

Let's make it bounded again!

Introduce the discount factor

0 \le \gamma < 1

Then, given bounded rewards:

\mathbb{E}_{\tau \sim p(\tau)} \sum_{t=0}^\infty \gamma^t r_t < \infty

Discounted sum of rewards:

r_0 + \gamma r_1 + \gamma^2 r_2 + \gamma^3 r_3 + \dots

Cake today is better than cake tomorrow:
discounting encourages the agent to get rewards faster!
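Why the discounted sum is indeed bounded: writing \( r_{\max} \) for a bound on \( |r_t| \) (the symbol is introduced here only for this check), the geometric series gives

\Bigg| \mathbb{E}_{\tau \sim p(\tau)} \sum_{t=0}^\infty \gamma^t r_t \Bigg| \le \sum_{t=0}^\infty \gamma^t r_{\max} = \frac{r_{\max}}{1-\gamma} < \infty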

Cake eating problem:

\mathcal{A} = \{ \text{eat}, \text{do nothing} \}
r(\text{eat}) = 1
r(\text{do nothing}) = 0

At which time step should you eat the cake?

Episode terminates after eating
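A quick check in Python (the discount value 0.9 is an arbitrary choice for illustration): eating at step t yields the discounted return \(\gamma^t \cdot 1\), which is largest at t = 0, so the agent should eat the cake right away.

gamma = 0.9                                   # arbitrary discount factor for illustration
returns = {t: gamma ** t for t in range(5)}   # discounted return if we eat at step t
print(returns)                                # t = 0 gives the largest value
print(max(returns, key=returns.get))          # 0 -> eat immediately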

Why not Supervised Machine Learning?

Advertisement problem:

Need a dataset:

(s_0, a_0) \rightarrow p(click)
(s_0, a_1) \rightarrow p(click)
\dots
(s_N, a_N) \rightarrow p(click)

No way to take time dependencies into account!

And there is no such dataset anyway!

State-value function (v-function)

V^\pi(s_t) = \mathbb{E}_{a_t} \mathbb{E}_{s_{t+1}} \Bigg[ r_t + \mathbb{E}_{a_{t+1}} \mathbb{E}_{s_{t+2}} \Big[ \gamma r_{t+1} + \mathbb{E}_{a_{t+2}} \mathbb{E}_{s_{t+3}}[\dots]\Big]\Bigg]

Writing the expectations as sums over the tables:

V^\pi(s_t) = \sum_{a_t} \pi(a_t|s_t) \sum_{s_{t+1}} p(s_{t+1}|s_t,a_t)\Bigg[ r_t + \dots \Bigg]

At the last step of the episode (transition \( s_{T-1}, a_{T-1} \rightarrow s_T \)):

V^\pi(s_{T-1}) = \sum_{a_{T-1}} \pi(a_{T-1}|s_{T-1}) \sum_{s_T} p(s_T|s_{T-1},a_{T-1}) r_T

Recursive form:

V^\pi(s_t) = \mathbb{E}_{a_t} \mathbb{E}_{s_{t+1}} \Bigg[ r_t + \gamma V^\pi(s_{t+1}) \Bigg]
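A minimal sketch of computing \(V^\pi\) for a finite-horizon tabular MDP with the backward recursion above; the two-state MDP, the uniform policy, and the reward used here are illustrative assumptions:

import numpy as np

n_states, n_actions, T, gamma = 2, 2, 10, 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])        # P[s, a, s'] = p(s' | s, a)
R = P[:, :, 1].copy()                           # E_{s'}[r | s, a]: reward 1 for landing in state 1
pi = np.full((n_states, n_actions), 0.5)        # uniform policy pi(a | s)

V = np.zeros(n_states)                          # the value after the final step is 0
for t in reversed(range(T)):
    # V(s) = sum_a pi(a|s) [ E_{s'}[r] + gamma * sum_{s'} p(s'|s,a) V(s') ]
    Q = R + gamma * P @ V                       # shape (n_states, n_actions)
    V = (pi * Q).sum(axis=1)

print("V^pi:", V)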

Action-value function (q-function)

Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}} \Bigg[ r_t + \mathbb{E}_{a_{t+1}} \mathbb{E}_{s_{t+2}} \Big[ \gamma r_{t+1} + \mathbb{E}_{a_{t+2}} \mathbb{E}_{s_{t+3}}[\dots]\Big]\Bigg]

At the last step of the episode (transition \( s_{T-1}, a_{T-1} \rightarrow s_T \)):

Q^\pi(s_{T-1}, a_{T-1}) = \sum_{s_T} p(s_T|s_{T-1},a_{T-1}) r_T

Recursive form:

Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}} \Bigg[ r_t + \gamma \mathbb{E}_{a_{t+1}} Q^\pi(s_{t+1}, a_{t+1}) \Bigg]

Optimality Bellman Equation

Q^{\pi^*}(s_t, a_t) = \mathbb{E}_{s_{t+1}} \Big[ r_t + \gamma \max_a Q^{\pi^*}(s_{t+1}, a) \Big]

Theorem (kinda):

If the Q-function of a policy \(\pi^*\) satisfies the above equality for every state-action pair \( (s_t, a_t) \),

then the policy \(\pi^*\) is the optimal policy,

and \(Q^{\pi^*} = Q^*\) is the optimal action-value function.

If the optimal Q-function is known for every state-action pair \( (s_t, a_t) \) (it is just a table),

then recovering the optimal policy is easy:

a_t = \arg\max_a Q^*(s_t, a)
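In code, recovering the greedy policy from a tabular Q-function is one argmax per state (a minimal sketch; the Q-table values below are made up for illustration):

import numpy as np

Q_star = np.array([[0.0, 1.5, 0.3],    # Q*(s_0, a) for a = 0, 1, 2
                   [2.0, 0.1, 0.1]])   # Q*(s_1, a)

greedy_policy = Q_star.argmax(axis=1)  # a_t = argmax_a Q*(s_t, a)
print(greedy_policy)                   # [1 0]: action 1 in s_0, action 0 in s_1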


Dynamic Programming

for Optimality Bellman Equation

The loop:

Q(s, a) \leftarrow 0 \;\;\;\; \forall(s, a)
while not convergent:
   for s \in \mathcal{S}:
      for a \in \mathcal{A}:
         Q(s, a) \leftarrow \mathbb{E}_{s'} \Big[ r + \gamma \max_{a'} Q(s', a') \Big]

Will converge to the optimal Q-function

What's wrong with this algorithm? Can you use it in practice?

It requires us to know the transition probabilities \( p(s'|s, a) \)
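When \( p(s'|s, a) \) is known, the loop can be run directly. A minimal sketch for a made-up two-state MDP (the MDP itself is an illustrative assumption):

import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])    # P[s, a, s'] = p(s' | s, a)
R = P[:, :, 1].copy()                       # E_{s'}[r | s, a]: reward 1 for landing in state 1

Q = np.zeros((n_states, n_actions))         # Q(s, a) <- 0 for all (s, a)
while True:
    # Q(s, a) <- E_{s'}[ r + gamma * max_{a'} Q(s', a') ]
    Q_new = R + gamma * P @ Q.max(axis=1)
    if np.max(np.abs(Q_new - Q)) < 1e-8:    # convergence check
        break
    Q = Q_new

print("Q*:", Q)
print("greedy policy:", Q.argmax(axis=1))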

Q-learning

Dynamic Programming with Monte-Carlo sampling

REMINDER: Monte-Carlo estimate of an expectation:

\mathbb{E}_{x\sim p(x)} f(x) \approx \frac{1}{N} \sum_{i=1}^N f(x_i),\;\;\;\; x_i \sim p(x)

The loop (use exponential averaging as the MC estimate):

Q(s, a) \leftarrow 0 \;\;\;\; \forall(s, a)
while not convergent:
   for t = 0 to T:
      a_t = \arg\max_a Q(s_t, a)        (will be discussed on the next slide)
      (r_t, s_{t+1}) \sim \texttt{environment}(a_t)
      Q^{old}(s_t, a_t) \leftarrow Q(s_t, a_t)
      Q(s_t, a_t) \leftarrow \alpha Q^{old}(s_t, a_t) + (1-\alpha) \big[r_t + \gamma \max_{a} Q(s_{t+1}, a) \big]

Will (with some dirty hacks) converge to the optimal Q-function.

What's wrong with this algorithm? Can you use it in practice?
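The same loop in runnable form. The tiny chain environment env_step is a made-up stand-in for a real environment, and a bit of epsilon-greedy exploration is added (why it is needed is exactly the topic of the next slide):

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2          # a tiny chain: states 0..3, actions left/right
gamma, alpha, eps, T = 0.9, 0.9, 0.3, 20

def env_step(s, a):
    """Made-up environment: action 1 moves right, action 0 moves left; reward 1 for reaching the last state."""
    s_next = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return r, s_next

Q = np.zeros((n_states, n_actions))               # Q(s, a) <- 0 for all (s, a)
for episode in range(1000):                       # "while not convergent"
    s = 0
    for t in range(T):
        if rng.random() < eps:                    # epsilon-greedy exploration (see next slide)
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())                # a_t = argmax_a Q(s_t, a)
        r, s_next = env_step(s, a)                # (r_t, s_{t+1}) ~ environment(a_t)
        target = r + gamma * Q[s_next].max()
        Q[s, a] = alpha * Q[s, a] + (1 - alpha) * target   # exponential averaging
        s = s_next

print(Q)
print("greedy policy:", Q.argmax(axis=1))         # learns to always move right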

Q-learning

Exploration vs. Exploitation

Let's learn how to move forward along the x-axis, with reward

r_t = x_{t+1} - x_t

Q-function estimate (initialized to zero):

Q(s_0, a=\text{fall}) = 0
Q(s_0, a=\text{step}) = 0

The greedy policy \pi(a|s_0)=\arg\max_aQ(s_0, a) picks "fall".
Falling forward moves the agent by \Delta x > 0, so the estimate becomes

Q(s_0, a=\text{fall}) = \Delta x

and the greedy agent keeps falling forever and never even tries "step".

Solution (\(\epsilon\)-greedy strategy): add noise to \( \pi \):

make a random action with probability \( \epsilon \)
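The \(\epsilon\)-greedy choice in code, to be plugged into the Q-learning loop above (a minimal sketch):

import numpy as np

rng = np.random.default_rng()

def epsilon_greedy(Q_row, eps):
    """Pick a random action with probability eps, otherwise the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(len(Q_row)))
    return int(Q_row.argmax())

Q_s0 = np.array([0.0, 0.0])      # Q(s_0, fall), Q(s_0, step): both still zero
print(epsilon_greedy(Q_s0, eps=0.2))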

Q-learning

Graphical representation of the learning loop

[Diagram: the Q-learning loop]

1. Update the \(Q\)-function from the collected data
2. Set the greedy policy \(\pi\):   \pi(a|s)=\arg\max_aQ(s, a)
3. Set the behavioral policy \(\pi_b\):
   \pi_b = \begin{cases} \pi, & \text{w.p.} \;\;(1-\epsilon) \\ \text{rand}, & \text{w.p.} \;\;\epsilon \end{cases}
4. Behave in the environment with \(\pi_b\), collect new transitions, and repeat

Q-learning example

Windy Gridworld Navigation Problem

[Figure: a gridworld where wind pushes the agent off course on its way to the goal]

States: s_t = (x, y)

Actions: moves on the grid (see the figure)

Rewards: r(s_{goal}) = 1; \;\;\;\;0 \;\;\text{otherwise}

Discounting: \gamma = 0.99

Learning rate: \alpha = 0.9

\(\epsilon\)-greedy strategy: \epsilon_0 = 1, \;\;\; \epsilon_{i+1} = 0.99\epsilon_i   (decrease \(\epsilon\) after each episode)
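A minimal sketch of such an environment with the hyperparameters from this slide; the grid size, the wind strength per column, and the start and goal positions are illustrative assumptions rather than anything fixed by the slide:

import numpy as np

class WindyGridworld:
    """7x10 grid; wind pushes the agent upward in some columns (an illustrative layout)."""
    def __init__(self):
        self.h, self.w = 7, 10
        self.wind = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]       # upward push per column
        self.goal = (3, 7)
        self.moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def reset(self):
        self.pos = (3, 0)
        return self.pos

    def step(self, a):
        y, x = self.pos
        dy, dx = self.moves[a]
        y = min(self.h - 1, max(0, y + dy - self.wind[x]))   # wind shifts the agent
        x = min(self.w - 1, max(0, x + dx))
        self.pos = (y, x)
        r = 1.0 if self.pos == self.goal else 0.0
        return self.pos, r, self.pos == self.goal

gamma, alpha, eps = 0.99, 0.9, 1.0       # hyperparameters from the slide
Q = np.zeros((7, 10, 4))                 # Q[(y, x), a] stored as a table
# ... run the Q-learning loop from the previous slides, with eps <- 0.99 * eps after each episode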

Approximate Q-learning

(one step behind neural networks)

Why do we need approximations? Isn't the tabular method good enough?

For Atari games, the Q-table would need

|\mathcal{A}|\cdot |\mathcal{S}| = 18\cdot 256^{192\times 160\times3} \approx 10^{222000} \gg 10^{86}

entries, where \(10^{86}\) is about the number of atoms in the Universe.

The way out: DQN (Deep Q-Network)
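A quick sanity check of the table-size estimate above, taking the slide's 192x160 RGB observation with 256 intensity levels per channel at face value:

from math import log10

n_actions = 18
states_log10 = (192 * 160 * 3) * log10(256)     # log10(256^(192*160*3))
table_log10 = log10(n_actions) + states_log10
print(round(table_log10))   # about 222000 -- vastly more than the ~10^86 atoms in the Universe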

Thank you for your attention!

and visit our seminars at

Deep RL Reading Group

telegram: https://t.me/theoreticalrl

Links (clickable):

for reading:

 

online course:

 
