Intuitive Physics

Part 2: Learning state representations

MIT 6.4210/2:

Robotic Manipulation

Fall 2022, Lecture 20


Learning Dynamics with a Graph Neural Network

Scene (evaluated on a grid)

Dense adjacency matrix:

Sparse adjacency matrix:
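The dense-vs-sparse distinction can be made concrete with a small sketch (assuming NumPy/SciPy; the radius cutoff and particle count below are illustrative, not values from the slides). A dense adjacency matrix costs O(N²) memory, while a k-d tree yields only the local edges:

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_adjacency(positions, radius):
    """Dense (N x N) adjacency: O(N^2) memory, fine for small scenes."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = (dist <= radius).astype(np.float32)
    np.fill_diagonal(A, 0.0)  # no self-edges
    return A

def sparse_adjacency(positions, radius):
    """Sparse edge list via a k-d tree: near-linear when interactions are local."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(radius, output_type="ndarray")  # (E, 2), i < j
    # make edges bidirectional to match the symmetric dense matrix
    return np.concatenate([pairs, pairs[:, ::-1]], axis=0)

positions = np.random.rand(100, 3)  # 100 particles in the unit cube
A = dense_adjacency(positions, 0.2)
edges = sparse_adjacency(positions, 0.2)
```

Both encode the same graph; the sparse form is what makes large particle scenes tractable.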

work by Yunzhu Li

Macklin, Müller, Chentanez, Kim. "Unified Particle Physics for Real-Time Applications." ACM TOG 33(4)

Some simulators use particles (not rigid bodies)


The model is a graph neural network (GNN), with adjacency based on particle location but also on object type
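One round of message passing over such a particle graph can be sketched as follows (a generic NumPy sketch, not the exact architecture from Li et al.'s work; all shapes and weight names are illustrative, and the edge features could encode relative position and object type as the slide suggests):

```python
import numpy as np

def gnn_step(node_feats, edges, edge_feats, W_msg, W_upd):
    """One message-passing round.
    node_feats: (N, D), edges: (E, 2) as (src, dst), edge_feats: (E, F)."""
    src, dst = edges[:, 0], edges[:, 1]
    # message from each source node, conditioned on the edge features
    msg_in = np.concatenate([node_feats[src], edge_feats], axis=-1)
    messages = np.tanh(msg_in @ W_msg)  # (E, H)
    # sum-aggregate incoming messages at each destination node
    agg = np.zeros((node_feats.shape[0], messages.shape[1]))
    np.add.at(agg, dst, messages)
    # update each node from its current features plus aggregated messages
    upd_in = np.concatenate([node_feats, agg], axis=-1)
    return np.tanh(upd_in @ W_upd)

rng = np.random.default_rng(0)
N, D, F, H = 5, 4, 3, 8
x = rng.standard_normal((N, D))
edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1]])
e = rng.standard_normal((len(edges), F))
W_msg = rng.standard_normal((D + F, H)) * 0.1
W_upd = rng.standard_normal((D + H, D)) * 0.1
x_next = gnn_step(x, edges, e, W_msg, W_upd)  # (5, 4) updated node features
```

Stacking a few such rounds and predicting per-particle accelerations is the typical recipe for learned particle dynamics.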

Ground truth vs. model roll-out (side-by-side video comparisons)

Planning with learned models

A few closing thoughts:

  • Even though neural nets are differentiable, people often plan with black-box solvers (e.g. CEM or MPPI)
    • sampling-based solvers parallelize well on the GPU
    • they combine local search with global optimization
  • Typically restricted to relatively short planning horizons


  • Neural models tend to be extremely good near the training data (interpolation), but unreliable away from it (extrapolation)
  • Planning often requires heuristics / extra costs to keep trajectories close to the training distribution
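The black-box planning idea above can be sketched with a minimal cross-entropy method (CEM) over open-loop action sequences, using a learned one-step model. This is a generic sketch under stated assumptions: `dynamics` stands in for any learned model, and the hyperparameters are illustrative, not from the lecture:

```python
import numpy as np

def cem_plan(dynamics, cost, x0, horizon=10, n_samples=256, n_elite=32,
             n_iters=5, act_dim=2):
    """CEM over action sequences. dynamics(x, u) -> x_next; cost(x) -> float."""
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(n_iters):
        # sample candidate action sequences around the current distribution
        U = mean + std * np.random.randn(n_samples, horizon, act_dim)
        costs = np.zeros(n_samples)
        for i in range(n_samples):
            x = x0
            for t in range(horizon):
                x = dynamics(x, U[i, t])
                costs[i] += cost(x)
        # refit the sampling distribution to the lowest-cost (elite) samples
        elite = U[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # planned action sequence
```

In an MPC loop one would execute only the first action of the returned sequence, then replan; note the inner roll-outs are exactly where a GPU-batched model pays off.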

Lecture 20: Intuitive Physics (part 2)

By russtedrake
