MIT 6.421:
Robotic Manipulation
Fall 2023, Lecture 6
Follow live at https://slides.com/d/2LIEAE8/live
(or later at https://slides.com/russtedrake/fall23-lec06)
So far, we have assumed a "perception oracle" that could tell us \({}^WX^O\).
Stop using "cheat ports"
Use the cameras instead!
Velodyne spinning lidar
Hokuyo scanning lidar
Luminar
(500m range)
Carnegie Robotics MultiSense stereo
Point Grey Bumblebee
Microsoft Kinect
Asus Xtion
https://arxiv.org/abs/1505.05459
Microsoft Kinect v2
Samsung Galaxy DepthVision
Intel Realsense D415
Our pick for the "Manipulation Station"
Major advantage over e.g. time-of-flight (ToF) sensors: multiple cameras don't interfere with each other.
(also iPhone TrueDepth)
from the docs: "Each pixel in the output image from depth_image is a 16bit unsigned short in millimeters."
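As a minimal sketch of what that implies for downstream code (the array name and the treatment of zero-valued pixels are assumptions for illustration, not from the quoted docs):

import numpy as np

# depth_mm: H x W uint16 array read from the depth image output,
# each pixel giving depth in millimeters (per the quoted docs).
depth_mm = np.array([[1000, 950], [0, 1200]], dtype=np.uint16)  # toy example

depth_m = depth_mm.astype(np.float32) / 1000.0   # millimeters -> meters
depth_m[depth_mm == 0] = np.nan                  # assumption: zero return = invalid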
directives:
- add_model:
    name: mustard
    file: package://drake/manipulation/models/ycb/sdf/006_mustard_bottle.sdf
- add_weld:
    parent: world
    child: mustard::base_link_mustard
    X_PC:
      translation: [0, 0, 0.09515]
      rotation: !Rpy { deg: [-90, 0, -90]}
- add_model:
    name: camera
    file: package://manipulation/camera_box.sdf
- add_weld:
    parent: world
    child: camera::base
    X_PC:
      translation: [0.5, 0.1, 0.2]
      # Point slightly down towards camera
      # RollPitchYaw(0, -0.2, 0.2) @ RollPitchYaw(-np.pi/2, 0, np.pi/2)
      rotation: !Rpy { deg: [-100, 0, 100] }

cameras:
  main_camera:
    name: camera0
    X_PB:
      base_frame: camera::base
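A minimal sketch of how the directives: block above can be consumed, using pydrake's Parser (assumptions: the package:// paths resolve through the usual Drake and manipulation package maps, and the cameras: entry belongs to the course's scenario format, so it is handled by the station-building code rather than the Parser):

from pydrake.all import AddMultibodyPlantSceneGraph, DiagramBuilder, Parser

directives_yaml = """
directives:
- add_model:
    name: mustard
    file: package://drake/manipulation/models/ycb/sdf/006_mustard_bottle.sdf
- add_weld:
    parent: world
    child: mustard::base_link_mustard
    X_PC:
      translation: [0, 0, 0.09515]
      rotation: !Rpy { deg: [-90, 0, -90]}
"""

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
parser = Parser(plant)
# The "dmd.yaml" file type tells the parser to interpret the string as
# model directives rather than SDF/URDF.
parser.AddModelsFromString(directives_yaml, "dmd.yaml")
plant.Finalize()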
Add cameras
(system diagram: MultibodyPlant + SceneGraph → RgbdSensor → DepthImageToPointCloud → MeshcatPointCloudVisualizer)
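A hedged sketch of that wiring in pydrake (the renderer name, camera intrinsics, and clipping/depth ranges are placeholder values; the course's helper code builds an equivalent diagram from the scenario's cameras: entry):

import numpy as np
from pydrake.all import (
    AddMultibodyPlantSceneGraph, BaseField, CameraInfo, ClippingRange,
    ColorRenderCamera, DepthImageToPointCloud, DepthRange, DepthRenderCamera,
    DiagramBuilder, MakeRenderEngineVtk, Meshcat, MeshcatPointCloudVisualizer,
    RenderCameraCore, RenderEngineVtkParams, RgbdSensor, RigidTransform,
    RollPitchYaw,
)

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.0)
# ... add the mustard bottle and camera geometry to the plant here ...
plant.Finalize()

# The RgbdSensor renders off of SceneGraph, so SceneGraph needs a render engine.
renderer = "renderer"
scene_graph.AddRenderer(renderer, MakeRenderEngineVtk(RenderEngineVtkParams()))

intrinsics = CameraInfo(width=640, height=480, fov_y=np.pi / 4.0)
core = RenderCameraCore(renderer, intrinsics, ClippingRange(0.1, 10.0),
                        RigidTransform())
color_camera = ColorRenderCamera(core, show_window=False)
depth_camera = DepthRenderCamera(core, DepthRange(0.1, 10.0))

# Fix the camera to the world at the pose from the YAML above.
X_WB = RigidTransform(RollPitchYaw(np.deg2rad([-100.0, 0.0, 100.0])),
                      [0.5, 0.1, 0.2])
rgbd = builder.AddSystem(
    RgbdSensor(scene_graph.world_frame_id(), X_WB, color_camera, depth_camera))
builder.Connect(scene_graph.get_query_output_port(),
                rgbd.query_object_input_port())

# Depth (+ color) images -> point cloud, expressed in the world frame.
to_point_cloud = builder.AddSystem(DepthImageToPointCloud(
    camera_info=intrinsics, fields=BaseField.kXYZs | BaseField.kRGBs))
builder.Connect(rgbd.depth_image_32F_output_port(),
                to_point_cloud.depth_image_input_port())
builder.Connect(rgbd.color_image_output_port(),
                to_point_cloud.color_image_input_port())
builder.Connect(rgbd.body_pose_in_world_output_port(),
                to_point_cloud.camera_pose_input_port())

# Stream the point cloud to Meshcat.
meshcat = Meshcat()
cloud_viz = builder.AddSystem(MeshcatPointCloudVisualizer(meshcat, "cloud"))
builder.Connect(to_point_cloud.point_cloud_output_port(),
                cloud_viz.cloud_input_port())

diagram = builder.Build()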
two on wrist
Figure from Chris Sweeney et al., ICRA 2019.
By russtedrake
MIT Robotic Manipulation Fall 2023 http://manipulation.mit.edu