Intrinsic motivations for Bayesian agents

Martin Biehl

  1. Background to intrinsic motivations
  2. Formal framework for a Bayesian agent
    1. Perception-action loop
    2. Generative model
    3. Inference / prediction
    4. Action selection
  3. Some intrinsic motivations:
    1. Conditional entropy minimization
    2. Predictive information maximization
    3. Knowledge seeking
    4. Empowerment maximization
    5. Curiosity

Overview

Motivation is something that generates behavior for an agent (robot, living organism)

  • similar to the reward function in reinforcement learning (RL)

Background on intrinsic motivations

Background on intrinsic motivations

Developmental robotics:

  • study developmental processes of infants
    • motor skill acquisition
    • language acquisition
  • implement similar processes in robots

Working definition compatible with Oudeyer and Kaplan (2008):

Motivation is intrinsic if it is:

  • embodiment independent,
  • semantics-free / information-theoretic.

This includes the approach by Schmidhuber (2010):

Motivation is intrinsic if it

  • rewards improvement of some model quality measure.

Background on intrinsic motivations

Embodiment independent means it should work (without changes) for any form of agent and still produce "worthwhile" behavior:

Background on intrinsic motivations

  • any number or kind of
    • sensors
    • actuators
  • rewired
    • sensors
    • actuators
       

this implies

  • cannot assume there is a special sensor whose value is to be maximized
  • so reward functions of MDPs, POMDPs, and standard RL are not available

 

Background on intrinsic motivations

Semantics-free / information-theoretic:

  • relations between sensors, actuators, and internal variables count
  • specific values don't


 

  • information theory quantifies relations
  • if \(f\) and \(g\) are bijective functions then $$\text{I}(X:Y)=\text{I}(f(X):g(Y))$$
  • so the specific values of \(X\) and \(Y\) play no role in the mutual information (see the sketch below).
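As a hypothetical illustration of this invariance (not from the slides), the following numpy sketch computes the mutual information of two discrete variables from a joint probability table and checks that relabelling the values with bijections leaves it unchanged; the joint table and the permutations are made-up toy choices.

```python
import numpy as np

def mutual_information(pxy):
    """I(X:Y) in nats for a joint probability table pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0                        # zero-probability cells contribute nothing
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Arbitrary joint distribution over X in {0,1,2}, Y in {0,1}
pxy = np.array([[0.10, 0.20],
                [0.25, 0.05],
                [0.30, 0.10]])

# Bijections f, g are just permutations of the value labels
f = np.array([2, 0, 1])   # f: x -> f[x]
g = np.array([1, 0])      # g: y -> g[y]
pxy_relabelled = np.zeros_like(pxy)
for x in range(3):
    for y in range(2):
        pxy_relabelled[f[x], g[y]] = pxy[x, y]

print(mutual_information(pxy))             # some value in nats
print(mutual_information(pxy_relabelled))  # identical: relabelling does not change I(X:Y)
```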

 

Background on intrinsic motivations

Another important, though not defining, feature (familiar from evolution) is:

open-endedness

The motivation should not vanish until the capacities of the agent are exhausted.

Background on intrinsic motivations

Other applications of intrinsic motivations:

  • sparse reward reinforcement learning problems
  • human-level AI and artificial general intelligence (AGI)

Background on intrinsic motivations

Sparse reward reinforcement learning:

  • add an additional reward term for model improvement / curiosity / control
  • while no extrinsic reward is being obtained, this (hopefully) lets the agent find useful behaviour

Background on intrinsic motivations

AGI:

  • implement predictive model that continually improves through experience
  • implement action selection / optimization that chooses according to the prediction
  • drive learning with intrinsic motivation
  • no limit?
    • possibly hardware limit
    • ignore this here.

Background on intrinsic motivations

Advantages of intrinsic motivations

  • scalability:
    • no need to redesign reward function for different environments / agents
    • environment kind and size does not change reward function
    • agent complexity does not change reward function

Disadvantage:

  • often (very) hard to compute
  • too general; when available, the following are usually faster:
    • a specifically designed (dense) reward
    • imitation learning

Examples:

  • hunger is not an intrinsic motivation
    • not embodiment (digestive system) independent
    • eating more doesn't improve our model of the world

Background on intrinsic motivations

Examples:

  • maximizing stored energy is closer to an intrinsic motivation
    • real world agents need energy but not virtual ones
    • doesn't directly improve the world model
    • but maybe indirectly
    • open ended?

Background on intrinsic motivations

Examples:

  • maximizing money is also close to an intrinsic motivation
    • but it only exists in some societies
    • may also indirectly improve our model
    • open ended?

Background on intrinsic motivations

Examples:

  • minimizing prediction error of the model is an intrinsic motivation
    • any agent that remembers its predictions can calculate the prediction error
    • reducing it improves the model (at least locally)
    • not open ended: leads to the "dark room problem" (the agent prefers maximally predictable, unstimulating states)

Solution for dark room problem

  • maximizing the decrease of the prediction error ("prediction progress") is an intrinsic motivation
    • improves the predictions of the model in one area until more progress can be made in another
    • may be open ended (a minimal sketch of the contrast follows below)
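The contrast can be illustrated with a toy sketch of my own (not from the slides): a perfectly predictable "dark room" stream versus a pure noise stream, a crude running-mean predictor, and the two candidate reward signals. Error minimization favours the dark room, while prediction progress decays towards zero in both streams once nothing more can be learned, which is what pushes a progress-driven agent on to new areas. The streams, learning rate, and horizon are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy observation streams: a perfectly predictable one ("dark room")
# and an unpredictable one (random noise).
streams = {"dark room": lambda: 0.0, "random noise": lambda: float(rng.normal())}

for name, source in streams.items():
    prediction, prev_error = 1.0, None            # deliberately wrong initial prediction
    errors, progress = [], []
    for step in range(2000):
        s = source()
        error = (s - prediction) ** 2             # squared prediction error
        if prev_error is not None:
            progress.append(prev_error - error)   # prediction progress = decrease of error
        errors.append(error)
        prediction += 0.05 * (s - prediction)     # crude running-mean "model" update
        prev_error = error
    print(f"{name:13s} late mean error {np.mean(errors[-500:]):.3f}  "
          f"late mean progress {np.mean(progress[-500:]):.3f}")
```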

Background on intrinsic motivations

  1. Background to intrinsic motivations
  2. Formal framework for a Bayesian agent
    1. Perception-action loop
    2. Generative model
    3. Inference / prediction
    4. Action selection
  3. Some intrinsic motivations:
    1. Conditional entropy minimization
    2. Predictive information maximization
    3. Knowledge seeking
    4. Empowerment maximization
    5. Curiosity

Overview

2. Formal framework for a Bayesian agent

  1. Perception-action loop

Similar to reinforcement learning for POMDPs:

  • partially observable environment
  • unknown environment transition dynamics

But we assume no extrinsic reward

\(E\) : Environment state

\(S\) : Sensor state

\(A\) : Action

\(M\) : Agent memory state

\(\text{p}(e_0)\) : initial distribution

\(\text{p}(s|e)\) : sensor dynamics

\(\text{p}(m'|s,a,m)\) : memory dynamics

\(\text{p}(a|m)\) : action generation

\(\text{p}(e'|a',e)\) : environment dynamics

2. Formal framework for a Bayesian agent

  1. Perception-action loop
Joint distribution until final time \(t=T\):

$$\text{p}(e_{0:T},s_{0:T},a_{1:T},m_{1:T}) = \left( \prod_{t=1}^T \text{p}(a_t|m_t)\, \text{p}(m_t|s_{t-1},a_{t-1},m_{t-1})\, \text{p}(s_t|e_t)\, \text{p}(e_t|a_t,e_{t-1}) \right) \text{p}(s_0|e_0)\, \text{p}(e_0)$$
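To make the factorization concrete, here is a minimal sketch (my own, not part of the slides) that samples a trajectory from the perception-action loop for small finite spaces; the particular transition tables and the uniformly random placeholder policy are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
nE, nS, nA = 3, 2, 2   # sizes of environment, sensor, and action spaces (toy choice)

# Arbitrary conditional distributions of the loop (each row sums to one)
p_e0 = np.array([1.0, 0.0, 0.0])                              # p(e_0)
p_s_given_e = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])  # p(s|e)
p_e_given_ae = rng.dirichlet(np.ones(nE), size=(nA, nE))      # p(e'|a', e)

def policy(memory):
    """Placeholder for p(a|m): here just a uniformly random action."""
    return int(rng.integers(nA))

e = rng.choice(nE, p=p_e0)
s = rng.choice(nS, p=p_s_given_e[e])
memory = [(None, s)]                              # m stores the sensor-action history
for t in range(1, 6):
    a = policy(memory)                            # p(a_t|m_t)
    e = rng.choice(nE, p=p_e_given_ae[a, e])      # p(e_t|a_t, e_{t-1})
    s = rng.choice(nS, p=p_s_given_e[e])          # p(s_t|e_t)
    memory.append((a, s))                         # perfect memory: append to history
    print(f"t={t}  a={a}  e={e}  s={s}")
```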

2. Formal framework for a Bayesian agent

  1. Perception-action loop

Assumptions :

  • time-homogeneous (constant) environment and sensor dynamics: $$\text{p}(e_{t_1}|a_{t_1},e_{t_1-1})=\text{p}(e_{t_2}|a_{t_2},e_{t_2-1})$$ $$\text{p}(s_{t_1}|e_{t_1})=\text{p}(s_{t_2}|e_{t_2})$$
  • perfect agent memory: \(m_t := (s_{\prec t},a_{\prec t}) =: sa_{\prec t}\), so the memory dynamics \(\text{p}(m'|s,a,m)\) reduce to deterministically appending the latest sensor value and action to the stored history

2. Formal framework for a Bayesian agent

  1. Perception-action loop

Left to be specified:

  • action generation mechanism \(\text{p}(a|m)\)
    • takes current sensor-action history \(m_t=sa_{\prec t}\) to generate new action \(a_t\)

 

2. Formal framework for a Bayesian agent

  1. Perception-action loop
  • in short, the action generation mechanism \(\text{p}(a|m)\) at each \(t\):
    • takes the new sensor-action history \(m_t=sa_{\prec t}\),
    • uses a parameterized model,
    • infers its parameters / updates its beliefs,
    • predicts consequences of actions,
    • selects an action according to preferences / motivation

 

2. Formal framework for a Bayesian agent

  1. Perception-action loop
  • Internal to the agent
  • For parameters write \(\Theta=(\Theta^1,\Theta^2,\Theta^3)\)
  • \(\xi=(\xi^1,\xi^2,\xi^3)\) are fixed hyperparameters that encode priors over the parameters

2. Formal framework for a Bayesian agent

2. Generative model

Model split up into three parts (a minimal parameterization sketch follows after the list):

  1. sensor dynamics model \(\text{q}(\hat{s}|\hat{e},\theta)\)
  2. environment dynamics model \(\text{q}(\hat{e}'|\hat{a},\hat{e},\theta)\)
  3. initial environment distribution \(\text{q}(\hat{e}|\theta)\)
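For finite spaces one possible concrete choice, sketched below under my own assumptions (not prescribed by the slides), is to make all three parts categorical distributions with Dirichlet priors over their parameters, so that \(\xi=(\xi^1,\xi^2,\xi^3)\) are Dirichlet concentration parameters; the space sizes and hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
nE, nS, nA = 3, 2, 2                 # assumed finite spaces

# Hyperparameters xi = (xi1, xi2, xi3): Dirichlet concentration parameters
xi_sensor = np.ones((nE, nS))        # prior over theta^1 in q(s_hat | e_hat, theta)
xi_env = np.ones((nA, nE, nE))       # prior over theta^2 in q(e_hat' | a_hat, e_hat, theta)
xi_init = np.ones(nE)                # prior over theta^3 in q(e_hat | theta)

def sample_theta():
    """Draw one parameter tuple theta = (theta^1, theta^2, theta^3) from the prior."""
    theta_sensor = np.array([rng.dirichlet(row) for row in xi_sensor])
    theta_env = np.array([[rng.dirichlet(xi_env[a, e]) for e in range(nE)]
                          for a in range(nA)])
    theta_init = rng.dirichlet(xi_init)
    return theta_sensor, theta_env, theta_init

theta_sensor, theta_env, theta_init = sample_theta()
print("q(s_hat | e_hat=0, theta):", theta_sensor[0])
print("q(e_hat' | a_hat=1, e_hat=0, theta):", theta_env[1, 0])
print("q(e_hat | theta):", theta_init)
```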


2. Formal framework for a Bayesian agent

3. Inference / prediction

As time passes in the perception-action loop

  • new sensor values \(S_t=s_t\) and actions \(A_t=a_t\) are generated
  • they get stored in \(M_t=m_t=sa_{\prec t}\)

 

2. Formal framework for a Bayesian agent

3. Inference / prediction

So at \(t\) the agent can plug \(m_t=sa_{\prec t}\) into the model

  • and update the probability distribution to a posterior:

$$\text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}},\hat{a}_{t:\hat{T}},\theta|sa_{\prec t},\xi)$$

2. Formal framework for a Bayesian agent

3. Inference / prediction

predicts consequences of assumed actions \(\hat{a}_{t:\hat{T}}\) for the relations between:

$$\text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}},\theta|\hat{a}_{t:\hat{T}},sa_{\prec t},\xi)$$

  • parameters \(\Theta\)
  • latent variables \(\hat{E}_{0:\hat{T}}\)
  • future sensor values \(\hat{S}_{t:\hat{T}}\)

2. Formal framework for a Bayesian agent

3. Inference / prediction

Call \(\text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}},\theta|\hat{a}_{t:\hat{T}},sa_{\prec t},\xi)\) the complete posterior.

  • It is the result of inference and prediction at the same time
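As a simplified special case (my own sketch, not the slides' general treatment): if the parameters \(\theta\) are assumed known, the part of the complete posterior over the current hidden environment state reduces to Bayesian filtering over the sensor-action history. The toy conditional tables below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
nE, nS, nA = 3, 2, 2
q_s_given_e = rng.dirichlet(np.ones(nS), size=nE)         # model sensor dynamics
q_e_given_ae = rng.dirichlet(np.ones(nE), size=(nA, nE))  # model environment dynamics
q_e0 = np.ones(nE) / nE                                   # model initial distribution

def filter_posterior(history):
    """q(e_hat | sa_{<t}) for an observed history [(s_0, a_1), (s_1, a_2), ...]."""
    belief = q_e0.copy()
    for s, a in history:
        belief *= q_s_given_e[:, s]          # condition on the observed sensor value
        belief /= belief.sum()
        belief = q_e_given_ae[a].T @ belief  # propagate through q(e'|a', e)
    return belief

print(filter_posterior([(0, 1), (1, 0), (1, 1)]))  # posterior over the current e_hat
```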

2. Formal framework for a Bayesian agent

3. Inference / prediction

2. Formal framework for a Bayesian agent

4. Action selection

  • define preferences / motivations as real-valued functionals \(\mathfrak{M}\) of a complete posterior and a given sequence \(\hat{a}_{t:\hat{T}}\) of future actions:

$$\mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,sa_{\prec t},\xi),\hat{a}_{t:\hat{T}})$$

  • for a given complete posterior a motivation then implies an optimal action sequence:

$$\hat{a}^*_{t:\hat{T}}(sa_{\prec t},\xi):=\underset{\hat{a}_{t:\hat{T}}}{\text{argmax}}\;\mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,sa_{\prec t},\xi),\hat{a}_{t:\hat{T}})$$

  • and an according softmax policy:

$$\text{q}(\hat{a}_{t:\hat{T}}|sa_{\prec t},\xi):=\frac{1}{Z}\, e^{\mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,sa_{\prec t},\xi),\hat{a}_{t:\hat{T}})}$$

Then either (both options are sketched below):

  • select the first action from the optimal sequence:

$$\text{p}(a_t|sa_{\prec t}):=\delta_{\hat{a}^*_t(sa_{\prec t},\xi)}(a_t)$$

  • or sample the first action from the softmax, marginalized over the later actions:

$$\text{p}(a_t|sa_{\prec t}):=\sum_{\hat{a}_{t+1:\hat{T}}} \frac{1}{Z}\, e^{\mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,sa_{\prec t},\xi),\hat{a}_{t:\hat{T}})}$$
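A minimal sketch of these two selection rules, with a made-up stand-in for the motivation functional \(\mathfrak{M}\) (any real-valued score of an action sequence would do): enumerate candidate action sequences over a short horizon, score each, then either take the argmax or sample the first action from the softmax marginal.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
nA, horizon = 2, 3

def motivation(action_sequence):
    """Placeholder for M(q(.|sa_{<t}, xi), a_hat): an arbitrary real-valued score."""
    return float(np.sin(sum(action_sequence)) + 0.1 * action_sequence[0])

sequences = list(itertools.product(range(nA), repeat=horizon))
scores = np.array([motivation(seq) for seq in sequences])

# (a) deterministic: first action of the best sequence
best_first_action = sequences[int(np.argmax(scores))][0]

# (b) stochastic: softmax over sequences, marginalized onto the first action
weights = np.exp(scores - scores.max())     # subtract max for numerical stability
weights /= weights.sum()
p_first = np.zeros(nA)
for seq, w in zip(sequences, weights):
    p_first[seq[0]] += w                    # sum over the later actions
sampled_first_action = rng.choice(nA, p=p_first)

print("argmax choice:", best_first_action, " softmax p(a_t):", p_first.round(3))
```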


  1. Background to intrinsic motivations
  2. Formal framework for a Bayesian agent
    1. Perception-action loop
    2. Generative model
    3. Inference / prediction
    4. Action selection
  3. Some intrinsic motivations:
    1. Conditional entropy minimization
    2. Predictive information maximization
    3. Knowledge seeking
    4. Empowerment maximization
    5. Curiosity

Overview

In reinforcement learning (RL) the motivation is the expected sum of future values of one particular sensor, the "reward sensor" \(s^r\):

$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,sa_{\prec t},\xi),\hat{a}_{t:\hat{T}}) :&= Q(\hat{a}_{t:\hat{T}},sa_{\prec t}) \\ :&= \sum_{\hat{s}^r_{t:\hat{T}}} \text{q}(\hat{s}^r_{t:\hat{T}}|\hat{a}_{t:\hat{T}},sa_{\prec t},\xi) \sum_{\tau=t}^{\hat{T}} \hat{s}^r_\tau \end{aligned}$$

3. Some intrinsic motivations

0. Reinforcement learning

3. Some intrinsic motivations

1. Conditional entropy minimization

Actions should lead to environment states that are expected to have precise (low-entropy) sensor values (e.g. Friston, Parr et al., 2017):

$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot),\hat{a}_{t:\hat{T}}) :&= -\text{H}_{\text{q}}(\hat{S}_{t:\hat{T}}|\hat{E}_{t:\hat{T}},\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{e}_{t:\hat{T}}} \text{q}(\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}}) \sum_{\hat{s}_{t:\hat{T}}} \text{q}(\hat{s}_{t:\hat{T}}|\hat{e}_{t:\hat{T}}) \log \text{q}(\hat{s}_{t:\hat{T}}|\hat{e}_{t:\hat{T}}) \end{aligned}$$

Get \(\text{q}(\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\) from the complete posterior by marginalization:

$$\text{q}(\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}}) = \int \sum_{\hat{s}_{t:\hat{T}},\hat{e}_{\prec t}} \text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}},\theta|\hat{a}_{t:\hat{T}})\, \text{d}\theta$$

(A small numerical sketch follows below.)
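A small numerical sketch of my own, with toy distributions and treating the whole future sequence as a single variable for simplicity: evaluate \(-\text{H}_{\text{q}}(\hat{S}|\hat{E},\hat{a})\) for one candidate action sequence from the two marginals above, represented as arrays.

```python
import numpy as np

rng = np.random.default_rng(5)
nE, nS = 4, 3

# Toy stand-ins for the marginals obtained from the complete posterior
q_e_given_a = rng.dirichlet(np.ones(nE))            # q(e_hat | a_hat)
q_s_given_e = rng.dirichlet(np.ones(nS), size=nE)   # q(s_hat | e_hat)

def neg_conditional_entropy(q_e, q_s_e):
    """-H_q(S_hat | E_hat, a_hat) = sum_e q(e|a) sum_s q(s|e) log q(s|e)."""
    return float(np.sum(q_e[:, None] * q_s_e * np.log(q_s_e)))

print(neg_conditional_entropy(q_e_given_a, q_s_given_e))  # the motivation value (<= 0)
```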

  • random noise sources are avoided
  • will get stuck in known "dark room traps"
    • we know $$\text{H}_{\text{q}}(\hat{S}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})=0\Rightarrow\text{H}_{\text{q}}(\hat{S}_{t:\hat{T}}|\hat{E}_{t:\hat{T}},\hat{a}_{t:\hat{T}})=0$$
    • such an optimal action sequence \(\hat{a}_{t:\hat{T}}\) exists e.g. if there is a "dark room" in the environment
    • even if the room cannot be escaped once entered, and the agent knows it!

3. Some intrinsic motivations

1. Conditional entropy minimization

$$\begin{aligned} \mathfrak{M}^{PI}(\text{q}(\cdot,\cdot,\cdot|\cdot),\hat{a}_{t:\hat{T}}) :&= \text{I}_{\text{q}}(\hat{S}_{t:t+k-1}:\hat{S}_{t+k:t+2k-1}|\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{s}_{t:t+2k-1}} \text{q}(\hat{s}_{t:t+2k-1}|\hat{a}_{t:\hat{T}}) \log \frac{\text{q}(\hat{s}_{t+k:t+2k-1}|\hat{s}_{t:t+k-1},\hat{a}_{t:\hat{T}})}{\text{q}(\hat{s}_{t+k:t+2k-1}|\hat{a}_{t:\hat{T}})} \end{aligned}$$

3. Some intrinsic motivations

2. Predictive information maximization

Actions should lead to the most complex sensor stream:

  • The next \(k\) sensor values should have maximal mutual information with the subsequent \(k\).
  • Proposed by Ay et al. (2008); a small estimation sketch follows below.
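A rough sketch (my own toy example, not from Ay et al.) of the quantity above when the two length-\(k\) sensor blocks are represented as single discrete variables with an explicit joint distribution; the joint table is a random placeholder for \(\text{q}(\hat{s}_{t:t+2k-1}|\hat{a}_{t:\hat{T}})\).

```python
import numpy as np

rng = np.random.default_rng(6)
k, nS = 2, 2
n_block = nS ** k          # treat each length-k sensor block as one variable

# Toy stand-in for the joint over (past block, future block) given the actions
q_joint = rng.dirichlet(np.ones(n_block * n_block)).reshape(n_block, n_block)

def predictive_information(joint):
    """I(past k block : future k block) in nats."""
    p_past = joint.sum(axis=1, keepdims=True)
    p_future = joint.sum(axis=0, keepdims=True)
    return float(np.sum(joint * np.log(joint / (p_past * p_future))))

print(predictive_information(q_joint))
```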

3. Some intrinsic motivations

2. Predictive information maximization

  • random noise sources are avoided as they produce no mutual information
  • will not get stuck in known "dark room traps"
    • from $$\text{H}_{\text{q}}(\hat{S}_{t+k:t+2k-1}|\hat{a}_{t:\hat{T}})=0\Rightarrow\text{I}_{\text{q}}(\hat{S}_{t:t+k-1}:\hat{S}_{t+k:t+2k-1}|\hat{a}_{t:\hat{T}})=0$$
  • possible long term behavior:
    • an ergodic sensor process
    • finds a subset of environment states that allows this ergodicity

3. Some intrinsic motivations

2. Predictive information maximization

Georg Martius, Ralf Der

$$\begin{aligned} \mathfrak{M}^{KSA}(\text{q}(\cdot,\cdot,\cdot|\cdot),\hat{a}_{t:\hat{T}}) :&= \text{I}_{\text{q}}(\hat{S}_{t:\hat{T}}:\hat{E}_{0:\hat{T}},\Theta|\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}}} \int \text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{0:\hat{T}},\theta|\hat{a}_{t:\hat{T}}) \log \frac{\text{q}(\hat{e}_{0:\hat{T}},\theta|\hat{s}_{t:\hat{T}},\hat{a}_{t:\hat{T}})}{\text{q}(\hat{e}_{0:\hat{T}},\theta|\hat{a}_{t:\hat{T}})}\, \text{d}\theta \end{aligned}$$

3. Some intrinsic motivations

3. Knowledge seeking

Actions should lead to sensor values that tell the most about hidden (environment) variables \(\hat{E}_{0:\hat{T}}\) and model parameters \(\Theta\):

  • Also known as information gain maximization
  • Special cases occur in active inference literature (Friston, 2016) and in Orseau (2013)
  • Goes back at least to Lindley (1956) (thanks, Thomas Parr!). A minimal information-gain sketch follows below.
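A minimal information-gain sketch of my own, stripped down to a single Bernoulli sensor parameter with a Beta prior (so the hidden environment states are dropped entirely): the gain for one future observation is \(\text{H}(\Theta)-\mathbb{E}_{\hat{s}}[\text{H}(\Theta|\hat{s})]\). The hyperparameter values are arbitrary.

```python
from scipy.stats import beta

# Beta prior over a single Bernoulli sensor parameter theta, hyperparameters as xi
a0, b0 = 2.0, 2.0
p_s1 = a0 / (a0 + b0)                      # predictive probability of observing s_hat = 1

prior_entropy = beta(a0, b0).entropy()     # H(Theta) (differential entropy)
posterior_entropy = (p_s1 * beta(a0 + 1, b0).entropy()          # after observing s_hat = 1
                     + (1 - p_s1) * beta(a0, b0 + 1).entropy())  # after observing s_hat = 0

information_gain = float(prior_entropy - posterior_entropy)      # I(S_hat : Theta) in nats
print(information_gain)
```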

3. Some intrinsic motivations

3. Knowledge seeking

  • avoids random noise sources once they are known
  • similar to prediction progress
  • can rewrite as $$\text{H}_{\text{q}}(\hat{E}_{0:\hat{T}},\Theta|\hat{a}_{t:\hat{T}})-\text{H}_{\text{q}}(\hat{E}_{0:\hat{T}},\Theta|\hat{S}_{t:\hat{T}},\hat{a}_{t:\hat{T}})$$
  • will not get stuck in known "dark room traps"
    • from $$\text{H}_{\text{q}}(\hat{S}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})=0\Rightarrow\text{I}_{\text{q}}(\hat{S}_{t:\hat{T}}:\hat{E}_{0:\hat{T}},\Theta|\hat{a}_{t:\hat{T}})=0$$
  • technical results exist by Orseau et al. (2013) (off-policy prediction!)
  • possible long term behavior:
    • when model is known does nothing / random walk

3. Some intrinsic motivations

3. Knowledge seeking

Bellemare et al. (2016)

Bellemare, M. G., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. (2016). Unifying Count-Based Exploration and Intrinsic Motivation. arXiv:1606.01868 [cs]. arXiv: 1606.01868.

 

$$\begin{aligned} \mathfrak{M}^{EM}(\text{q}(\cdot,\cdot,\cdot|\cdot),\hat{a}_{t:\hat{T}_a}) :&= \max_{\text{q}(\hat{a}_{\hat{T}_a+1:\hat{T}})} \; \text{I}_{\text{q}}(\hat{A}_{\hat{T}_a+1:\hat{T}}:\hat{S}_{\hat{T}}|\hat{a}_{t:\hat{T}_a}) \\ &= \max_{\text{q}(\hat{a}_{\hat{T}_a+1:\hat{T}})} \; \sum_{\hat{a}_{\hat{T}_a+1:\hat{T}},\hat{s}_{\hat{T}}} \text{q}(\hat{a}_{\hat{T}_a+1:\hat{T}})\, \text{q}(\hat{s}_{\hat{T}}|\hat{a}_{t:\hat{T}}) \log \frac{\text{q}(\hat{s}_{\hat{T}}|\hat{a}_{t:\hat{T}})}{\text{q}(\hat{s}_{\hat{T}}|\hat{a}_{t:\hat{T}_a})} \end{aligned}$$

3. Some intrinsic motivations

4. Empowerment maximization

Actions should lead to control over as many future experiences as possible:

  • Actions \(\hat{a}_{t:\hat{T}_a}\) are taken such that the subsequent actions \(\hat{a}_{\hat{T}_a+1:\hat{T}}\) have maximal control over the final sensor value \(\hat{S}_{\hat{T}}\)
  • proposed by Klyubin, Polani, and Nehaniv (2005); a rough channel-capacity sketch follows below
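A rough channel-capacity sketch of my own (not from the slides): treating \(\text{q}(\hat{s}_{\hat{T}}|\hat{a}_{\hat{T}_a+1:\hat{T}})\) as a fixed channel matrix from the block of later actions to the final sensor value (here a made-up random one), empowerment is its capacity, computable with the Blahut-Arimoto algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n_action_seqs, nS = 4, 3

# Toy stand-in for the channel q(s_hat_T | a_hat_{T_a+1:T}) derived from the model
channel = rng.dirichlet(np.ones(nS), size=n_action_seqs)   # rows: p(s | action sequence)

def empowerment(channel, iterations=200):
    """Channel capacity max_{q(a)} I(A:S), in nats, via Blahut-Arimoto."""
    q_a = np.ones(len(channel)) / len(channel)             # start from a uniform q(a)
    for _ in range(iterations):
        p_s = q_a @ channel                                # induced marginal over s
        # update: q(a) proportional to q(a) * exp( D_KL(p(s|a) || p(s)) )
        q_a = q_a * np.exp(np.sum(channel * np.log(channel / p_s), axis=1))
        q_a /= q_a.sum()
    p_s = q_a @ channel
    return float(np.sum(q_a[:, None] * channel * np.log(channel / p_s)))

print(empowerment(channel))
```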

3. Some intrinsic motivations

4. Empowerment maximization

  • avoids random noise sources because they cannot be controlled
  • will not get stuck in known "dark room traps"
    • from $$\text{H}_{\text{q}}(\hat{S}_{\hat{T}}|\hat{a}_{t:\hat{T}_a})=0\Rightarrow\text{I}_{\text{q}}(\hat{A}_{\hat{T}_a+1:\hat{T}}:\hat{S}_{\hat{T}}|\hat{a}_{t:\hat{T}_a})=0$$
  • similar to energy and money maximization but more general
  • possible long term behavior:
    • remains in (or maintains) the situation where it expects the most control over future experience
    • exploration behavior not fully understood
    • Belief empowerment may solve it...

3. Some intrinsic motivations

4. Empowerment

Guckelsberger et al. (2016)

Guckelsberger, C., Salge, C., & Colton, S. (2016). Intrinsically Motivated General Companion NPCs via Coupled Empowerment Maximisation. 2016 IEEE Conf. Computational Intelligence in Games (CIG’16), 150–157

3. Some intrinsic motivations

5a. Curiosity

Actions should lead to surprising sensor values.

$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,\xi),\hat{a}_{t:\hat{T}}) :&= +\text{H}_{\text{q}}(\hat{S}_{t:\hat{T}}|\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{s}_{t:\hat{T}}} \text{q}(\hat{s}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\left(- \log \text{q}(\hat{s}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\right) \end{aligned}$$

  • Also called Shannon knowledge seeking (Orseau)
  • maximize expected surprise (= entropy)
  • Get \(\text{q}(\hat{s}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\) from the complete posterior

3. Some intrinsic motivations

5a. Curiosity

  • will not get stuck in known "dark room traps"
    • it directly pursues the opposite situation
  • will get stuck at random noise sources
  • in deterministic environments this is not a problem
    • proven to asymptotically drive the agent to the behavior of an agent that knows the environment (under some technical conditions, see Orseau, 2011)

3. Some intrinsic motivations

5b. Curiosity

Actions should lead to surprising environment states.

  • I have not seen this explicitly
  • But it is similar to the following:

$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,\xi),\hat{a}_{t:\hat{T}}) :&= +\text{H}_{\text{q}}(\hat{E}_{t:\hat{T}}|\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{e}_{t:\hat{T}}} \text{q}(\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\left(- \log \text{q}(\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\right) \end{aligned}$$

3. Some intrinsic motivations

5b. Curiosity

Actions should lead to a surprising embedding of sensor values:

  • Something between the last two
  • Gets stuck on random noise
$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,\xi),\hat{a}_{t:\hat{T}}) :&= +\text{H}_{\text{q}}(f(\hat{S}_{t:\hat{T}})|\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{s}_{t:\hat{T}}} \text{q}(f(\hat{s}_{t:\hat{T}})|\hat{a}_{t:\hat{T}})\left(- \log \text{q}(f(\hat{s}_{t:\hat{T}})|\hat{a}_{t:\hat{T}})\right) \end{aligned}$$

3. Some intrinsic motivations

5. Curiosity

Burda et al. (2018) with permission.

Burda, Y., Edwards, H., Pathak, D., Storkey, A., Darrell, T., and Efros, A. A. (2018). Large-Scale Study of Curiosity-Driven Learning.  arXiv:1808.04355 [cs, stat]. arXiv: 1808.04355.

3. Some intrinsic motivations

6. Novelty

Actions should lead to sensor values that have not been observed often before.

  • I have not seen this explicitly
  • But it is similar to Random Network Distillation

$$\begin{aligned} \mathfrak{M}(\text{q}(\cdot,\cdot,\cdot|\cdot,\xi),\hat{a}_{t:\hat{T}}) :&= +\text{H}_{\text{q}}(\hat{E}_{t:\hat{T}}|\hat{S}_{t:\hat{T}},\hat{a}_{t:\hat{T}}) \\ &= \sum_{\hat{s}_{t:\hat{T}},\hat{e}_{t:\hat{T}}} \text{q}(\hat{s}_{t:\hat{T}},\hat{e}_{t:\hat{T}}|\hat{a}_{t:\hat{T}})\left(- \log \text{q}(\hat{e}_{t:\hat{T}}|\hat{s}_{t:\hat{T}},\hat{a}_{t:\hat{T}})\right) \end{aligned}$$

3. Some intrinsic motivations

6. Novelty

  • Noise in the sensors does not affect \(\text{H}(\hat{E}_{t:\hat{T}}|\hat{S}_{t:\hat{T}},\hat{a}_{t:\hat{T}})\)
    • so it does not get stuck on random noise
  • Otherwise it tries to go where the model does not yet have enough experience to be certain about the mapping to environment states
    • so it does not get stuck in the dark room

3. Some intrinsic motivations

6. Novelty

Burda, Y., Edwards, H., Storkey, A. & Klimov, O. Exploration by Random Network Distillation. arXiv:1810.12894 [cs, stat] (2018).

References:

Aslanides, J., Leike, J., and Hutter, M. (2017). Universal Reinforcement Learning Algorithms: Survey and Experiments. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1403–1410.


Ay, N., Bertschinger, N., Der, R., Güttler, F., and Olbrich, E. (2008). Predictive Information and Explorative Behavior of Autonomous Robots. The European Physical Journal B-Condensed Matter and Complex Systems, 63(3):329–339.

 

Burda, Y., Edwards, H., Pathak, D., Storkey, A., Darrell, T., and Efros, A. A. (2018). Large-Scale Study of Curiosity-Driven Learning.  arXiv:1808.04355 [cs, stat]. arXiv: 1808.04355.


Friston, K. J., Parr, T., and de Vries, B. (2017). The Graphical Brain: Belief Propagation and Active Inference. Network Neuroscience, 1(4):381–414.


Klyubin, A., Polani, D., and Nehaniv, C. (2005). Empowerment: A Universal Agent-Centric Measure of Control. In The 2005 IEEE Congress on Evolutionary Computation, 2005, volume 1, pages 128–135.


Orseau, L., Lattimore, T., and Hutter, M. (2013). Universal Knowledge-Seeking Agents for Stochastic Environments. In Jain, S., Munos, R., Stephan, F., and Zeugmann, T., editors, Algorithmic Learning Theory, number 8139 in Lecture Notes in Computer Science, pages 158–172. Springer Berlin Heidelberg.

 

Oudeyer, P.-Y. and Kaplan, F. (2008). How can we define intrinsic motivation? In Proceedings of the 8th International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Lund University Cognitive Studies, Lund: LUCS, Brighton. Lund University Cognitive Studies, Lund: LUCS, Brighton.


Schmidhuber, J. (2010). Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247.


Storck, J., Hochreiter, S., and Schmidhuber, J. (1995). Reinforcement Driven Information Acquisition in Non-Deterministic Environments. In Proceedings of the International Conference on Artificial Neural Networks, volume 2, pages 159–164.

 

Guckelsberger, C., Salge, C., & Colton, S. (2016). Intrinsically Motivated General Companion NPCs via Coupled Empowerment Maximisation. 2016 IEEE Conf. Computational Intelligence in Games (CIG’16), 150–157

 

 

3. Some intrinsic motivations

Remarks:

  • here Occam's razor / the preference for simplicity was hidden in the prior
    • algorithmic information theory formalizes the idea of complexity for strings and can be used for less probabilistic intrinsic motivations (see Schmidhuber, 2010)
  • Aslanides et al. (2017) have used knowledge seeking, Shannon knowledge seeking, and minimum description length improvement to augment AIXI
    • great testbed and running code: http://www.hutter1.net/aixijs/

Background on intrinsic motivations

AGI:

  • implement predictive model that continually improves through experience
  • implement action selection / optimization that chooses according to the prediction
  • drive it by open ended intrinsic motivation

Slides for presentation at Kuniyoshi lab