MIT 6.800/6.843:
Robotic Manipulation
Fall 2021, Lecture 5
Follow live at https://slides.com/d/WiwXxtM/live
(or later at https://slides.com/russtedrake/fall21-lec05)
So far, we have assumed a "perception oracle" that could tell us \({}^WX^O\).
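In code, the textbook and Drake write this pose using "monogram notation": \({}^WX^O\) becomes X_WO. A minimal sketch of composing frames with pydrake's RigidTransform (the camera pose X_WC and the camera-frame estimate X_CO below are made-up placeholder values):

# Monogram notation sketch: {}^W X^O is written X_WO in code.
import numpy as np
from pydrake.math import RigidTransform, RollPitchYaw

# Pose of camera frame C in world frame W (placeholder values).
X_WC = RigidTransform(RollPitchYaw(0.0, np.pi / 4, 0.0), [0.5, 0.0, 0.4])
# Pose of object frame O as estimated in camera frame C (placeholder values).
X_CO = RigidTransform(RollPitchYaw(0.0, 0.0, 0.1), [0.0, 0.0, 0.8])

# Frames chain left-to-right: {}^W X^O = {}^W X^C {}^C X^O.
X_WO = X_WC @ X_CO
print(X_WO.translation())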
Stop using "cheat ports"
Use the cameras instead!
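Concretely, that means evaluating the station's RGB-D image output ports rather than the ground-truth pose ports. A minimal sketch, assuming Drake's ManipulationStation example and its camera port naming (the setup call and the port names "camera_0_rgb_image" / "camera_0_depth_image" are assumptions; check GetOutputPort names in your own station):

# Sketch: read camera images instead of the ground-truth "cheat" ports.
# Port names below are assumptions; verify against your station.
from pydrake.examples.manipulation_station import ManipulationStation

station = ManipulationStation()
station.SetupManipulationClassStation()
station.Finalize()
context = station.CreateDefaultContext()

color = station.GetOutputPort("camera_0_rgb_image").Eval(context)    # ImageRgba8U
depth = station.GetOutputPort("camera_0_depth_image").Eval(context)  # ImageDepth16U
print(color.width(), color.height(), depth.width(), depth.height())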
Velodyne spinning lidar
Hokuyo scanning lidar
Luminar
(500m range)
Carnegie Robotics MultiSense stereo
Point Grey Bumblebee
Microsoft Kinect
Asus Xtion
https://arxiv.org/abs/1505.05459
Microsoft Kinect v2
https://en.wikipedia.org/wiki/Time-of-flight_camera
Samsung Galaxy DepthVision
Intel RealSense D415
Our pick for the "Manipulation Station"
A major advantage over e.g. time-of-flight (ToF) cameras: multiple cameras can view the same scene without interfering with each other.
(also iPhone TrueDepth)
from the docs: "Each pixel in the output image from depth_image is a 16bit unsigned short in millimeters."
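Since each return is an unsigned 16-bit integer in millimeters, a common first step is to convert to floating-point meters and mask out pixels with no valid return. A minimal numpy sketch (treating 0 as "too close / no return" and 65535 as "too far" is an assumption; check your sensor's conventions):

import numpy as np

def depth_mm_to_meters(depth_mm):
    # depth_mm: (H, W) uint16 in millimeters -> (H, W) float32 in meters,
    # with invalid pixels set to NaN. The sentinel values are assumptions.
    depth_m = depth_mm.astype(np.float32) / 1000.0
    invalid = (depth_mm == 0) | (depth_mm == np.iinfo(np.uint16).max)
    depth_m[invalid] = np.nan
    return depth_m

# Example: 800 mm -> 0.8 m; 0 and 65535 -> NaN.
fake = np.array([[800, 0, 1200], [65535, 500, 950]], dtype=np.uint16)
print(depth_mm_to_meters(fake))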
two on wrist
Figure from Chris Sweeney et al., ICRA 2019.