speaker: Pavel Temirchev
RL as Probabilistic Inference
What we will NOT discuss today

Use of Bayesian Neural Networks (BNNs) for Exploration:

Bayesian model ensembling for model-based RL:

Distributional RL

hey, we already discussed it


A lot of other interesting things...
What we WILL discuss
How to treat the RL problem as a probabilistic inference problem?
\mathbb{E}_\pi \sum_{t=0}^T r(s_t, a_t) \rightarrow \max_\pi
Standard RL: optimization
Probabilistic Inference
[Figure: graphical model with variables A and B]
p(A \mid B) = \;?
WE
\pi(a_t \mid s_t, \pi \;\text{is optimal})
maybe something like this will do...
Why will we discuss it?

Treating RL as inference lets us apply effective inference tools to RL problems.
We can develop new algorithms.

Bayesians always try to generalize others' ideas.

As we will see, inference has a close connection to
Maximum Entropy RL, so maybe it will help to improve exploration!
Background
Probabilistic Graphical Models
Generally, a joint probability distribution p(a, b, c, d, e)
can be factorized as follows:
p(a, b, c, d, e) = p(a \mid b, c, d, e)\, p(b \mid c, d, e)\, p(c \mid d, e)\, p(d \mid e)\, p(e)
Graphical representation of a probabilistic model can help
to embed structure into the model:
[Figure: PGM with nodes a, b, c, d, e]
p(a, b, c, d, e) = p(a)\, p(b)\, p(c \mid a, b)\, p(d \mid c)\, p(e \mid c)
Background
Inference on PGMs
Inference:
p(z_1) = \int p(z_1 \mid x_1)\, p(x_1)\, dx_1
p(x_2) = \int p(x_2 \mid x_1)\, p(x_1)\, dx_1
A graphical representation can make probabilistic inference easier.
There are many algorithms for exact and approximate inference on PGMs.
We will discuss a very simple example of the
Message Passing algorithm on trees.
Question:
p(z_l) = \;? \;\;\; \forall l
Model:
p(x_{0:L}, z_{0:L}) = p(x_0)\, p(z_0 \mid x_0)\prod_l p(x_l \mid x_{l-1})\, p(z_l \mid x_l)
[Figure: chain-structured PGM x_0 \rightarrow x_1 \rightarrow \dots \rightarrow x_L with observations z_0, z_1, \dots, z_L]
p(z_0) = \int p(z_0 \mid x_0)\, p(x_0)\, dx_0
p(x_1) = \int p(x_1 \mid x_0)\, p(x_0)\, dx_0
p(z_l) = \int p(z_l \mid x_l)\, p(x_l)\, dx_l
p(x_{l+1}) = \int p(x_{l+1} \mid x_l)\, p(x_l)\, dx_l
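As a small illustration (not from the slides), the forward pass below computes these marginals for a discrete chain; the initial distribution p0, transition matrix A and emission matrix E are assumed inputs:

```python
# Forward message passing on the chain: p(x_{l+1}) = sum_x p(x_{l+1}|x_l) p(x_l),
# p(z_l) = sum_x p(z_l|x_l) p(x_l).  Discrete case, so integrals become sums.
import numpy as np

def chain_marginals(p0, A, E, L):
    # p0: (X,) initial distribution over x_0
    # A: (X, X) with A[i, j] = p(x_{l+1}=j | x_l=i)
    # E: (X, Z) with E[i, k] = p(z_l=k | x_l=i)
    p_x = p0
    marginals_z = []
    for _ in range(L + 1):
        marginals_z.append(E.T @ p_x)   # p(z_l) = sum_{x_l} p(z_l | x_l) p(x_l)
        p_x = A.T @ p_x                 # p(x_{l+1}) = sum_{x_l} p(x_{l+1} | x_l) p(x_l)
    return marginals_z
```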
Background
Bayes' Rule and Kullback-Leibler divergence (KL)
Bayes' Rule allows us to calculate the posterior distribution of a r.v.
given new data and a prior distribution.
The Kullback-Leibler divergence measures how one distribution differs from a second, reference distribution (it is not symmetric):
p(z \mid x) =
\frac{p(x \mid z)\, p(z)}{p(x)}
=
\frac{p(x \mid z)\, p(z)}{\int p(x \mid z)\, p(z)\, dz}
\text{KL}\Big( q(x)\; \Big\|\; p(x) \Big) = \mathbb{E}_{x \sim q} \Big[ \log \frac{q(x)}{p(x)} \Big]
Background
Approximate probabilistic inference: Variational Inference (VI)
When applying Bayes' rule, a common situation
is that the evidence term is intractable:
p(x) = \int p(x \mid z)\, p(z)\, dz
Hence, the exact posterior is intractable as well!
One way to go is to use an approximate inference procedure called Variational Inference.
\text{KL}\Big( q(z)\; \Big\|\; p(z \mid x) \Big) \rightarrow \min_{q \in \mathcal{Q}}
We want to minimize the dissimilarity between the true posterior p(z \mid x)
and our approximation, the variational distribution q(z).
The search is over the chosen family of variational distributions q \in \mathcal{Q}.
Background
Approximate probabilistic inference: Variational Inference (VI)
It can be shown that the defined minimization problem is closely related
to the maximization of some lower bound on the evidence
p(x) = \int p(x \mid z)\, p(z)\, dz
We can rewrite the logarithm of the evidence as follows:
\log p(x) = \text{KL}\Big( q(z)\; \Big\|\; p(z \mid x) \Big) + \mathcal{L}(q) \quad\quad (1)
where
\mathcal{L}(q) = -\mathbb{E}_q \Big[\log q(z) - \log p(x, z) \Big]
is the so-called Evidence Lower Bound Objective (ELBO)
The LHS of (1) is independent of q, whereas each term on the RHS depends on q.
Hence, the minimization of the \text{KL} term is equivalent to the maximization of the ELBO \mathcal{L}(q).
It is your choice: either you want to minimize
\text{KL}\big( q \;\big\|\; p \big) \rightarrow \min_{q \in \mathcal{Q}}
or you want to maximize
\mathcal{L}(q) \rightarrow \max_{q \in \mathcal{Q}}
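As a tiny numeric sanity check of identity (1) (not from the slides); the toy model p(z), p(x \mid z) and the variational q(z) below are arbitrary illustrative numbers:

```python
# Verify log p(x) = KL(q || p(z|x)) + ELBO on a two-state latent variable model.
import numpy as np

p_z = np.array([0.3, 0.7])            # prior p(z)
p_x_given_z = np.array([0.9, 0.2])    # likelihood p(x=1 | z), for the observed x = 1

p_xz = p_x_given_z * p_z              # joint p(x=1, z)
p_x = p_xz.sum()                      # evidence p(x=1)
posterior = p_xz / p_x                # exact posterior p(z | x=1)

q = np.array([0.5, 0.5])              # an arbitrary variational distribution q(z)

kl = np.sum(q * np.log(q / posterior))          # KL(q || p(z|x))
elbo = -np.sum(q * (np.log(q) - np.log(p_xz)))  # L(q) = -E_q[log q(z) - log p(x, z)]

print(np.log(p_x), kl + elbo)         # the two numbers coincide, verifying (1)
```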
Background
RL Basics
Markov process:
p(\tau) = p(s_0) \prod_{t=0}^T p(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)
Maximization problem:
\pi^\star = \arg\max_\pi \sum_{t=0}^T \mathbb{E}_{s_t, a_t \sim \pi} [r(s_t, a_t)]
Q^\pi(s_t,a_t) := r(s_t,a_t) + \sum_{t'=t+1}^T \mathbb{E}_{s_{t'}, a_{t'} \sim \pi} [r(s_{t'}, a_{t'})]
Value functions (defined for a policy):
Q^\star(s_t,a_t) = r(s_t,a_t) + \mathbb{E}_{s_{t+1}} V^\star(s_{t+1})
Bellman Optimality operator:
V^\star(s_t) = \max_a Q^\star(s_{t}, a)
Probabilistic Graphical Model
for MDP
[Figure: PGM for an MDP with states s_0, s_1, s_2 and actions a_0, a_1, a_2]
V^\pi(s_t) = \mathbb{E}_a Q^\pi(s_{t}, a)
Generally, reward is a random variable:
r(s_t,a_t) = \mathbb{E} \big[ R(s_t, a_t) \big]
A heuristic for better exploration
Maximum entropy RL
Standard Policy Gradient:
a_t \sim \mathcal{N}(\cdot \mid \pi^\star(s_t), \sigma^2)
Policy "proportional" to Q:
a_t \sim \exp Q(s_t, \cdot)
How to find such a policy?
\min_\pi\text{KL}\Big(\pi(\cdot \mid s_0)\;\Big\|\;\exp Q(s_0, \cdot)\Big) =
\max_\pi \mathbb{E}_\pi \Big[ Q(s_0, a_0) - \log \pi(a_0 \mid s_0) \Big] =
\max_\pi \mathbb{E}_\pi \Big[ \sum_t^T r(s_t, a_t) {\color{pink}+ \mathcal{H} \big( \pi(\cdot \mid s_0) \big)}\Big]
[Figure: Q^\star(s_0, \cdot) over actions a_0 ("go left" vs "go right"), compared with \exp Q^\star(s_0, \cdot) and \mathcal{N}(\cdot \mid \arg\max_a Q^\star(s_0, a), \sigma^2)]
It is very similar to the heuristic Maximum Entropy RL objective
\max_\pi \mathbb{E}_\pi \Big[ \sum_t^T r(s_t, a_t) {\color{pink}+ \mathcal{H} \big( \pi(\cdot \mid s_t) \big)}\Big]
During the lecture, we will derive a probabilistic model, inference in which results in the Maximum Entropy RL objective.
RL as Probabilistic Inference
Graphical Model with Optimality variables
[Figure: PGM for an MDP with states s_0, s_1, s_2, actions a_0, a_1, a_2, and optimality variables \mathcal{O}_0, \mathcal{O}_1, \mathcal{O}_2]
Let us look at the PGM for an MDP.
What if we had binary optimality variables?
If \mathcal{O}_t = 1, then timestep t was optimal.
Probability that the pair (s_t, a_t) is optimal:
p(\mathcal{O}_t = 1 \mid s_t, a_t) := p(\mathcal{O}_t \mid s_t, a_t)
But how should we define this probability?
Use exponentiation. Exponents are good.
p(\mathcal{O}_t = 1 \mid s_t, a_t) := p(\mathcal{O}_t \mid s_t, a_t) = \exp\big(r(s_t, a_t)\big)
Let us analyze the distribution of trajectories conditioned on optimality:
p({\color{#00ff00}\tau} \mid {\color{#ff0000}\mathcal{O}_{0:T}}) \propto p({\color{#00ff00}\tau},{\color{#ff0000}\mathcal{O}_{0:T}}) = {\color{#00ff00} p(s_0)\prod_{t=0}^T p(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)} {\color{#ff0000}\,\exp\big(r(s_t, a_t)\big)}
RL as Probabilistic Inference
Exact inference for Optimal actions
p(a_t \mid s_t, \mathcal{O}_{0:T}) = p(a_t \mid s_t, \mathcal{O}_{t:T}) \quad (*)
(*) \; a_t is conditionally independent of \mathcal{O}_{0:t-1} given s_t, due to the structure of the PGM
= \frac{ \color{#00ff00} p(s_t, a_t \mid \mathcal{O}_{t:T})}{ \color{#ff0000} p(s_t \mid \mathcal{O}_{t:T})}
let's apply Bayes' rule!
= \frac{p(\mathcal{O}_{t:T} \mid s_t, a_t)\, p(a_t \mid s_t)\, p(s_t)}{p(\mathcal{O}_{t:T})} \cdot \frac{p(\mathcal{O}_{t:T})}{p(\mathcal{O}_{t:T} \mid s_t)\, p(s_t)}
= \frac{p(\mathcal{O}_{t:T} \mid s_t, a_t) }{p(\mathcal{O}_{t:T} \mid s_t)}
here p(a_t \mid s_t) is some prior (non-informative) policy;
if we set p(a_t \mid s_t) = \frac{1}{|\mathcal{A}|}, then the optimal policy is the following:
p(a_t \mid s_t, \mathcal{O}_{t:T}) \propto \frac{p(\mathcal{O}_{t:T} \mid s_t, a_t) }{p(\mathcal{O}_{t:T} \mid s_t)}
We can now infer actions conditioned on optimality, i.e. the optimal policy.
Exact inference for optimal actions
Message Passing Algorithm
We want to compute p(\mathcal{O}_{t:T} \mid s_t, a_t) and p(\mathcal{O}_{t:T} \mid s_t) for all 0 \le t \le T, since
p(a_t \mid s_t, \mathcal{O}_{t:T}) \propto \frac{p(\mathcal{O}_{t:T} \mid s_t, a_t) }{p(\mathcal{O}_{t:T} \mid s_t)}
Let's introduce new notation:
\alpha_t(s_t, a_t) := p(\mathcal{O}_{t:T} \mid s_t, a_t)
\beta_t(s_t) := p(\mathcal{O}_{t:T} \mid s_t) = \int \alpha_t(s_t, a_t)\, p(a_t \mid s_t)\, da_t
We can find all the \alpha_t and \beta_t via the Message Passing algorithm.
For the timestep T:
\alpha_T(s_T, a_T) = \exp(r(s_T, a_T))
\beta_T(s_T) = \int \alpha_T(s_T, a_T)\, p(a_T \mid s_T)\, da_T
Recursively, for t < T:
\alpha_t(s_t, a_t) = \int \beta_{t+1}(s_{t+1}) \exp(r(s_t, a_t))\, p(s_{t+1} \mid s_t, a_t)\, ds_{t+1}
\beta_t(s_t) = \int \alpha_t(s_t, a_t)\, p(a_t \mid s_t)\, da_t
Introducing \( Q^{soft} \) and \( V^{soft} \) functions
Log-scale messages
Q^{soft}(s_t, a_t) := \log\alpha_t(s_t, a_t)
V^{soft}(s_t) := \log\beta_t(s_t)
Substituting into the recursive relations, we obtain the following:
V^{soft}(s_t) = \log \mathbb{E}_{p(a_t \mid s_t)} [\exp Q^{soft}(s_t, a_t)]
(soft maximum)
Q^{soft}(s_t, a_t) = r(s_t, a_t) + \log \mathbb{E}_{p(s_{t+1} \mid s_t, a_t)} [\exp V^{soft}(s_{t+1})]
(kinda Bellman equation)
We can find analogues in the log scale:
the soft maximum approximates the hard maximum as Q^{soft}(s_t, a_t) \rightarrow \infty
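As a concrete illustration (not from the slides), here is a minimal tabular sketch of this backward recursion in log scale; the reward matrix R[s, a], transition tensor P[s, a, s'], horizon T and the uniform prior policy are assumptions:

```python
# Minimal sketch of backward message passing in log scale (soft value iteration).
# Assumed inputs: R[s, a] rewards, P[s, a, s'] transition probabilities, horizon T.
import numpy as np
from scipy.special import logsumexp

def soft_backward_pass(R, P, T):
    S, A = R.shape
    log_prior = -np.log(A)                      # uniform prior policy p(a|s) = 1/|A|
    Q = np.zeros((T + 1, S, A))                 # Q^soft_t(s, a) = log alpha_t
    V = np.zeros((T + 1, S))                    # V^soft_t(s)    = log beta_t
    Q[T] = R                                    # alpha_T = exp(r(s_T, a_T))
    V[T] = logsumexp(Q[T] + log_prior, axis=1)  # beta_T = E_{p(a|s)} exp Q
    for t in range(T - 1, -1, -1):
        # Q^soft_t = r + log E_{p(s'|s,a)} exp V^soft_{t+1}  (the "optimistic" backup)
        Q[t] = R + logsumexp(np.log(P + 1e-300) + V[t + 1], axis=2)
        V[t] = logsumexp(Q[t] + log_prior, axis=1)
    return Q, V

# the resulting policy is pi_t(a|s) proportional to exp(Q[t, s, a] - V[t, s])
```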
Compare \( (Q^\star,\;V^\star) \) with \( (Q^{soft},\;V^{soft}) \)
Hard approach vs. Soft approach
"Hard" and functions:
V^\star(s_t) =\max_{a_t} Q^\star(s_t, a_t)
Q^\star(s_t, a_t) = r(s_t, a_t) + \mathbb{E}_{p(s_{t+1}s_t, a_t)} V^\star(s_{t+1})
V^{soft}(s_t) =\log \mathbb{E}_{p(a_ts_t)} [\exp Q^{soft}(s_t, a_t)]
Q^{soft}(s_t, a_t) = r(s_t, a_t) + \log \mathbb{E}_{p(s_{t+1}s_t, a_t)} [\exp V^{soft}(s_{t+1})]
"Soft" analogues:
Q^\star(s_t, a_t) = r(s_t, a_t) + \mathbb{E}_{p(s_{t+1}s_t, a_t)} \max_{a_{t+1}} Q^\star(s_{t+1}, a_{t+1})
Q^{soft}(s_t, a_t) \approx r(s_t, a_t) + \max_{s_{t+1}} \max_{a_{t+1}} Q^{soft}(s_{t+1}, a_{t+1})
\max_{s_{t+1}}
V^\star
Q^\star
Why are we so optimistic?
What we have done is the inference of the policy term,
taken from the formula for the optimal trajectory distribution:
p(\tau \mid \mathcal{O}_{0:T}) = p(s_0 \mid \mathcal{O}_{0:T})\prod_{t=0}^T{\color{#00ff00}p(a_t \mid s_t,\mathcal{O}_{t:T})}\,p(s_{t+1} \mid s_t,a_t,\mathcal{O}_{t+1:T})
But who are the neighbours of the policy?
p(\tau \mid \mathcal{O}_{0:T}) = p(s_0 \mid {\color{#ff0000}\mathcal{O}_{0:T}})\prod_{t=0}^T p(a_t \mid s_t,\mathcal{O}_{t:T})\,p(s_{t+1} \mid s_t,a_t,{\color{#ff0000}\mathcal{O}_{t+1:T}})
This policy is optimal only in the presence of optimal dynamics!
Can we fix it?
Variational Inference
Approximate inference for achievable trajectories via VI
The trajectories \tau \sim p(\tau \mid \mathcal{O}_{0:T}) are not really achievable,
since they are based on the optimistic dynamics p(s_{t+1} \mid s_t, a_t, \mathcal{O}_{t+1:T}).
Our policy \pi, however, will be executed under the prior dynamics:
q(\tau) = p(s_0)\prod_{t=0}^T \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)
And we want the policy \pi to produce trajectories \tau \sim q(\tau)
which are as close as possible to the optimal trajectories \tau \sim p(\tau \mid \mathcal{O}_{0:T}).
This is a Variational Inference problem:
\text{KL}\big(q(\tau)\;\big\|\;p(\tau \mid \mathcal{O}_{0:T})\big)
\rightarrow \min_\pi
Variational Inference
Approximate inference for achievable trajectories via VI
Let us expand the VI objective using the definition of the KL divergence:
\min_\pi \text{KL}\big(q(\tau)\,\big\|\,p(\tau \mid \mathcal{O}_{0:T})\big) = \min_\pi \Big[ -\mathbb{E}_q \log \frac{p(\tau,\;\mathcal{O}_{0:T})}{q(\tau)\;p(\mathcal{O}_{0:T})} \Big] = \max_\pi \mathbb{E}_q \log \frac{p(\tau,\;\mathcal{O}_{0:T})}{q(\tau)} + \text{const}
\max_\pi \mathbb{E}_q \log \frac{p(\tau,\;\mathcal{O}_{0:T})}{q(\tau)} = \max_\pi \mathbb{E}_q \Big[ \log p(s_0)+\sum_{t} \big( \log p(s_{t+1} \mid s_t,a_t) + r(s_t, a_t) \big) -
- \log p(s_0)-\sum_{t} \big( \log p(s_{t+1} \mid s_t,a_t) + \log \pi(a_t \mid s_t) \big) \Big]=
= \max_\pi \mathbb{E}_\pi \sum_{t}\Big[ r(s_t, a_t) + \mathcal{H}\big( \pi(\cdot \mid s_t)\big) \Big]
this is the Maximum Entropy RL objective
Returning to \( Q^{soft} \) and \( V^{soft} \) functions
Risk-neutral Soft approach
The objective from the previous slide can be rewritten as follows (check it yourself!):
\sum_{t=0}^T\mathbb{E}_{s_t} \Big[ -\text{KL}\Big(\pi(a_t \mid s_t)\,\Big\|\,\frac{\exp(Q^{soft}(s_t, a_t))}{\exp(V^{soft}(s_t))}\Big) + V^{soft}(s_t) \Big] \rightarrow \max_\pi
Hence, the optimal policy is:
\pi(a_t \mid s_t) = \frac{\exp(Q^{soft}(s_t, a_t))}{\exp(V^{soft}(s_t))}
but with slightly changed Q^{soft} and V^{soft} functions:
V^{soft}(s_t) = \log \int \exp Q^{soft}(s_t, a_t)\, da_t
(soft maximum)
Q^{soft}(s_t, a_t) = r(s_t, a_t) + \mathbb{E}_{p(s_{t+1} \mid s_t, a_t)} V^{soft}(s_{t+1})
(normal Bellman equation)
RL as Inference with function approximators

Maximum Entropy Policy Gradients

Soft Q-learning
https://arxiv.org/abs/1702.08165 
Soft Actor-Critic
https://arxiv.org/abs/1801.01290
Maximum Entropy Policy Gradients
RL as Inference with function approximators
We can directly maximize the entropy-augmented objective over the policy parameters \theta:
\mathbb{E}_{\tau \sim \pi_\theta} \sum_{t=0}^T\Big[ r(s_t, a_t) + \mathcal{H}\big(\pi_\theta(\cdot \mid s_t)\big) \Big] \rightarrow \max_\theta
The policy \pi is parametrized with a neural network with parameters \theta:
\pi_\theta(a \mid s) = \mathcal{N}\Big(a\;\big|\;\mu_\theta(s), \;\sigma^2 \Big)
For gradients, use the log-derivative trick:
\sum_{t=0}^T\mathbb{E}_{(s_t,a_t) \sim q_\theta} \Big[ \nabla_\theta \log\pi_\theta(a_t \mid s_t) \sum_{t'=t}^T\Big( r(s_{t'}, a_{t'}) - \log\pi_\theta(a_{t'} \mid s_{t'}) - b(s_{t'}) \Big)\Big]

- on-policy
- unimodal policies for continuous actions
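A minimal sketch of the estimator above (not from the slides), assuming one on-policy trajectory of rewards and action log-probabilities from a PyTorch policy, and any state baseline b(s_t):

```python
# Minimal sketch of the entropy-augmented ("soft") policy gradient surrogate loss.
# Assumes `log_probs` (T,) are log pi_theta(a_t|s_t) with gradients attached,
# `rewards` (T,) are observed rewards, `baselines` (T,) is any state baseline b(s_t).
import torch

def maxent_pg_loss(log_probs, rewards, baselines):
    # soft return-to-go: sum_{t'>=t} ( r_{t'} - log pi(a_{t'}|s_{t'}) - b(s_{t'}) );
    # the entropy bonus is treated as part of the reward, hence the detach.
    terms = rewards - log_probs.detach() - baselines
    returns_to_go = torch.flip(torch.cumsum(torch.flip(terms, [0]), 0), [0])
    # minimizing the negative surrogate ascends the policy-gradient direction
    return -(log_probs * returns_to_go).sum()
```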
Soft Q-learning
RL as Inference with function approximators
Train a Q-network with parameters \phi (use a replay buffer \mathcal{D}):
\mathbb{E}_{(s_t,a_t, s_{t+1}) \sim \mathcal{D}} \Big[ Q^{soft}_\phi(s_t, a_t) - \Big( r(s_t, a_t) + V^{soft}_\phi(s_{t+1})\Big) \Big]^2\rightarrow \min_\phi
where
V^{soft}_\phi(s_t) = \log \int \exp Q^{soft}_\phi(s_t, a_t)\, da_t
(for continuous actions, use Importance Sampling)
The policy is implicit:
\pi(a_t \mid s_t) = \exp\big(Q^{soft}_\phi(s_t, a_t) - V^{soft}_\phi(s_t)\big)
(for samples, use Stein Variational Gradient Descent or MCMC :D)
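A rough sketch of the importance-sampled estimate of V^{soft} for continuous actions (an illustration, not the authors' code); the critic interface q_net(s, a) and the proposal distribution are assumptions:

```python
# Sketch: V^soft(s) = log ∫ exp Q(s, a) da ≈ log (1/K) Σ_i exp(Q(s, a_i)) / q(a_i),
# with a_i sampled from a proposal distribution q over actions.
import math
import torch

def v_soft_estimate(q_net, states, proposal, n_samples=32):
    # states: (B, s_dim); proposal: a torch.distributions object over full action
    # vectors (event_shape = (a_dim,)), e.g. Independent(Normal(...), 1)
    K, B = n_samples, states.shape[0]
    actions = proposal.sample((K, B))                   # (K, B, a_dim)
    log_q = proposal.log_prob(actions)                  # (K, B)
    states_rep = states.unsqueeze(0).expand(K, B, -1)   # (K, B, s_dim)
    q_values = q_net(states_rep, actions)               # (K, B), assumed critic signature
    # numerically stable log-mean-exp of exp(Q)/q
    return torch.logsumexp(q_values - log_q, dim=0) - math.log(K)
```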
Soft Q-learning
Soft Actor-Critic
RL as Inference with function approximators
Train the Q^{soft}_\phi and V^{soft}_\psi networks jointly with the policy \pi_\theta.
Q-network loss:
\mathbb{E}_{(s_t,a_t, s_{t+1}) \sim \mathcal{D}} \Big[ Q^{soft}_\phi(s_t, a_t) - \Big( r(s_t, a_t) + V^{soft}_\psi(s_{t+1})\Big) \Big]^2\rightarrow \min_\phi
V-network loss:
\hat{V}^{soft}(s_t) = \mathbb{E}_{a_t \sim \pi_\theta} \Big[ Q^{soft}_\phi(s_t, a_t) - \log\pi_\theta(a_t \mid s_t) \Big]
\mathbb{E}_{s_t \sim \mathcal{D}} \Big[ \hat{V}^{soft}(s_t) - V^{soft}_\psi(s_{t}) \Big]^2 \rightarrow \min_\psi
Objective for the policy:
\mathbb{E}_{s_t \sim \mathcal{D}, \;a_t \sim \pi_\theta} \Big[ Q^{soft}_\phi(s_t, a_t) - \log\pi_\theta(a_{t} \mid s_{t})\Big] \rightarrow \max_\theta
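Below is a compact sketch of these three losses on one replay batch (an illustration, not the authors' implementation); the network and batch interfaces (q_net, v_net, policy, and the tensors s, a, r, s_next) are assumptions:

```python
# Sketch of the three Soft Actor-Critic losses on one replay batch.
# Assumed: q_net(s, a) -> (B,), v_net(s) -> (B,), policy(s) -> a torch.distributions
# object with rsample() and log_prob(); s, a, r, s_next are replay-batch tensors.
import torch
import torch.nn.functional as F

def sac_losses(q_net, v_net, policy, s, a, r, s_next):
    # Q-network loss: fit the soft Bellman target r + V_psi(s')
    with torch.no_grad():
        q_target = r + v_net(s_next)
    q_loss = F.mse_loss(q_net(s, a), q_target)

    # V-network loss: fit  E_{a~pi}[ Q(s, a) - log pi(a|s) ]
    dist = policy(s)
    a_pi = dist.rsample()                        # reparametrized action sample
    log_pi = dist.log_prob(a_pi)
    with torch.no_grad():
        v_target = q_net(s, a_pi) - log_pi
    v_loss = F.mse_loss(v_net(s), v_target)

    # Policy objective: maximize E[ Q(s, a) - log pi(a|s) ], i.e. minimize its negative
    policy_loss = (log_pi - q_net(s, a_pi)).mean()
    return q_loss, v_loss, policy_loss
```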
Soft Actor-Critic
All is good. Stop?
Let us discuss a simple Multi-Armed Bandit problem
Not these bandits
All is good. Stop?
Let us discuss a simple Multi-Armed Bandit problem
\mathcal{S} = \emptyset
\mathcal{A} = \{1, \;2, \;\dots, \;N\}
[Figure: Bandit №1, Bandit №2, Bandit №3, ..., Bandit №N; COVID-19 ("these bandits")]
We assume epistemic uncertainty associated with the MDP.
We can model it by sampling the MDP from some prior distribution over MDPs:
\mathcal{M} = \{ M^+, M^-\}
Sample M \sim \mathcal{M} and learn in L episodes of interaction.
REWARD | M^+        | M^-
a = 1  | 1          | 1
a = 2  | 2          | -2
a = 3  | 1-\epsilon | 1-\epsilon
...    | 1-\epsilon | 1-\epsilon
a = N  | 1-\epsilon | 1-\epsilon
Multi-Armed Bandit example
How will Soft Q-learning deal with it?
\mathcal{S} = \emptyset
\mathcal{A} = \{1, \;2, \;\dots, \;N\}
Let us compute Q^{soft} and V^{soft}:
V^{soft}(s_t) = \log \sum_{a_t} \exp Q^{soft}(s_t, a_t)
Q^{soft}(s_t, a_t) = \mathbb{E}[R(s_t, a_t)] + \mathbb{E}_{p(s_{t+1} \mid s_t, a_t)} V^{soft}(s_{t+1})
For the bandit (no states, single step):
Q^{soft}(a) = \mathbb{E}[R(a)]
V^{soft} = \log \sum_a \exp Q^{soft}(a)
Multi-Armed Bandit example
How will Soft Q-learning deal with it?
Let us compute Q^{soft}(a) = \mathbb{E}[R(a)] for all actions,
and V^{soft} = \log \sum_a \exp Q^{soft}(a):
Q^{soft}(a = 1) = 1
Q^{soft}(a = 2) = 0
Q^{soft}(a = 3) = 1-\epsilon
\dots
Q^{soft}(a = N) = 1-\epsilon
For N = 3:   \; V^{soft} = 1.86, \;\; \pi(2) = 0.16
For N = 10:  \; V^{soft} = 3.23, \;\; \pi(2) = 0.04
For N = 100: \; V^{soft} = 5.59, \;\; \pi(2) = 0.004
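A quick numeric check of these values (not from the slides), taking \epsilon to be negligibly small:

```python
# Soft Q-learning on the bandit: Q^soft(1)=1, Q^soft(2)=0, Q^soft(a)=1-eps otherwise.
# The probability of choosing the informative arm a=2 vanishes as N grows.
import numpy as np
from scipy.special import logsumexp

eps = 1e-6
for N in (3, 10, 100):
    q_soft = np.array([1.0, 0.0] + [1.0 - eps] * (N - 2))
    v_soft = logsumexp(q_soft)      # V^soft = log sum_a exp Q^soft(a)
    pi = np.exp(q_soft - v_soft)    # pi(a) = exp(Q^soft(a) - V^soft)
    # prints values matching the slide: V^soft ~ 1.86, 3.2, 5.6 and pi(2) ~ 0.16, 0.04, 0.004
    print(N, v_soft, pi[1])
```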
Reminder
Regret
Regret is a measure of the suboptimality of an agent.
It is defined for an algorithm \text{alg} and for an MDP M,
and it depends on the number of episodes L seen during learning:
\text{Regret}(M, \text{alg}, L) = \mathbb{E}_{\tau \sim M, \;\text{alg}} \Bigg[\sum_{l=0}^L \Bigg( V^\star(s_0^l) - \sum_{t=0}^T r(s_t^l, a_t^l) \Bigg) \Bigg]
In the case of epistemic uncertainty about the MDP, more general quantities are used.
The uncertainty is represented via a set of possible MDPs \mathcal{M}
(with associated probabilities \phi of being in a concrete MDP M):
\text{BayesRegret}(\phi, \text{alg}, L) = \mathbb{E}_{M \sim \mathcal{M}} \;\text{Regret}(M, \text{alg}, L)
\text{WorstCaseRegret}(\mathcal{M}, \text{alg}, L) = \max_{M \in \mathcal{M}}\;\text{Regret}(M, \text{alg}, L)
Drawbacks of "RL as inference" framework

Soft algorithms do not consider any epistemic uncertainty about the environment,
nor can they resolve this uncertainty.

Soft algorithms have no guarantees on regret.
Moreover, they show either linear or exponential regret
on toy problems.

It is hard to tune the temperature parameter.
K-learning
Variational Bayesian RL with Regret Bounds
One way to improve exploration under epistemic uncertainty is to
force the agent to maximize not the expected return,
but the expected value of a convex utility function u(X) of the return.
We will discuss the exponential family of utility functions:
u(X) = \tau \big( \exp(X / \tau) - 1 \big)
The Certainty Equivalent Value is the amount of guaranteed payoff
that the agent values the same as the random one:
C^X(\tau) = u^{-1}(\mathbb{E}u(X)) = \tau \log \mathbb{E} \exp(X/\tau)
For exponential utility functions, certainty equivalent values are closely related to
the cumulant generating function of a r.v.:
C^X(\tau) = \tau G^X(1/\tau)
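A worked example (not on the slides): for a Gaussian return X \sim \mathcal{N}(\mu, \sigma^2),
G^X(\beta) = \log \mathbb{E}\, e^{\beta X} = \mu\beta + \tfrac{1}{2}\sigma^2\beta^2
C^X(\tau) = \tau\, G^X(1/\tau) = \mu + \frac{\sigma^2}{2\tau}
so the certainty equivalent adds an optimism bonus that grows with the variance and shrinks with the temperature \tau.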
K-learning
Variational Bayesian RL with Regret Bounds
LEMMA
The cumulant generating function of the posterior over the optimal Q-values satisfies the following Bellman inequality:
C^{\star, t}_{s_l, a_l} \le \tilde G^{\mu, t}_{s_l, a_l}(1/\tau_t)\; +\; \sum_{s_{l+1}} \mathbb{E}^t \big( P_{s_{l+1} \mid s_l, a_l}\big)\, \tau_t \log \sum_{a_{l+1}} \exp \big( C^{\star, t}_{s_{l+1}, a_{l+1}} / \tau_t \big)
or, similarly,
C^{\star, t}_{l} \le \mathcal{B}(\tau_t, C^{\star, t}_{l+1})
where
\tilde G^{\mu, t}_{s_l, a_l}(\beta) = G^{\mu, t}_{s_l, a_l}(\beta) + \frac{(L-l)^2 \beta^2}{2(n^t_{s_l, a_l}+1)}
K-learning
Variational Bayesian RL with Regret Bounds
Let us find a function that satisfies the inequality with equality.
We will call it the K-value:
K^t_{l} = \mathcal{B}(\tau_t, K^t_{l+1})
And we define the policy as follows:
\pi(a_l \mid s_l) \propto \exp \big( K^t_{s_l, a_l} / \tau_t \big)
Then the regret is bounded:
K-learning
Variational Bayesian RL with Regret Bounds
The temperature at a fixed episode number can be found
as the solution of the following convex optimization problem:
K-learning
Variational Bayesian RL with Regret Bounds
[Figures: results for bandits and for the Deep Sea environment]
K-learning
Variational Bayesian RL with Regret Bounds
[Figure: bsuite benchmark results]
More links to the God of links
References
Soft Q-learning:
https://arxiv.org/pdf/1702.08165.pdf
Soft Actor-Critic:
https://arxiv.org/pdf/1801.01290.pdf
Big review on probabilistic inference for RL:
https://arxiv.org/pdf/1805.00909.pdf
Implementation in TensorFlow:
https://github.com/rail-berkeley/softlearning
Implementation in Catalyst.RL:
https://github.com/catalyst-team/catalyst/tree/master/examples/rl_gym
Hierarchical policies (further reading):
Thank you for your attention!
Adv RL: RL as probabilistic inference
By cydoroga