Visuomotor Policies
(via Behavior Cloning)

a prelude to RL and Control

MIT 6.4210/2:

Robotic Manipulation

Fall 2022, Lecture 17


The policy needs to know: the state of the robot × the state of the environment
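As a sketch of this distinction (illustrative signatures only, not course code): a state-feedback policy assumes both states are given, while a visuomotor policy must infer the environment from raw observations:

```python
import numpy as np

# Illustrative signatures only; names and shapes are assumptions.
def state_feedback_policy(x_robot: np.ndarray, x_env: np.ndarray) -> np.ndarray:
    """u = pi(robot state, environment state): assumes both are known."""
    raise NotImplementedError

def visuomotor_policy(image: np.ndarray, q_robot: np.ndarray) -> np.ndarray:
    """u = pi(camera image, proprioception): the environment state must
    be inferred from raw pixels."""
    raise NotImplementedError
```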

The MIT Leg Lab Hopping Robots

What is a (dynamic) model?


Inputs:  ..., u_{-1}, u_0, u_1, ...
Outputs: ..., y_{-1}, y_0, y_1, ...

State-space:
    x_{n+1} = f(n, x_n, u_n, w_n, \theta), \quad y_n = g(n, x_n, u_n, w_n, \theta)
with randomness described by p(\theta, x_0, w_0, w_1, ...).

Auto-regressive (e.g. ARMAX):
    y_{n+1} = f(n, u_n, u_{n-1}, ..., y_n, y_{n-1}, ..., w_n, w_{n-1}, ..., \theta)
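To make the distinction concrete, here is a minimal NumPy sketch of both forms for a hypothetical scalar system (the coefficients play the role of \theta and are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# State-space form: the state x_n summarizes all past history.
a, b = 0.9, 0.5                          # theta (arbitrary parameters)
x = 0.0                                  # x_0
for n in range(5):
    u = 1.0                              # input u_n
    w = 0.01 * rng.standard_normal()     # process noise w_n
    y = x                                # y_n = g(n, x_n, u_n, w_n, theta)
    x = a * x + b * u + w                # x_{n+1} = f(n, x_n, u_n, w_n, theta)

# Auto-regressive form: predict the next output directly from a window
# of past outputs and inputs, with no explicit latent state.
us, ys = [0.0, 0.0], [0.0, 0.0]
for n in range(5):
    w = 0.01 * rng.standard_normal()
    y_next = 0.9 * ys[-1] - 0.2 * ys[-2] + 0.5 * us[-1] + w
    ys.append(y_next)
    us.append(1.0)
```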





Levine*, Finn*, Darrell, Abbeel, JMLR 2016

Visuomotor policies

Idea: Use a small set of dense descriptors

Imitation learning setup: from hand-coded policies in sim, and from teleop on the real robot

Standard "behavior-cloning" objective + data augmentation

Simulation experiments

"push box"

"flip box"

Policy is a small LSTM network (~100 LSTM units)
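A sketch of what such a policy might look like in PyTorch (a hidden size of 100 matches the ~100-unit scale; observation and action dimensions are assumptions):

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    # Small recurrent policy on the order of ~100 LSTM units.
    # Observation/action dimensions here are illustrative.
    def __init__(self, obs_dim=32, act_dim=7, hidden_size=100):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, act_dim)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); state carries memory between
        # calls, letting the policy integrate information over time.
        h, state = self.lstm(obs_seq, state)
        return self.head(h), state

policy = LSTMPolicy()
actions, state = policy(torch.randn(1, 10, 32))  # 10-step observation sequence
```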

"And then … BC methods started to get good. Really good. So good that our best manipulation system today mostly uses BC, with a sprinkle of Q learning on top to perform high-level action selection. Today, less than 20% of our research investments is on RL, and the research runway for BC-based methods feels more robust."