April 1st 2024
Bernhard Paus Graesdal
ONR Autonomy Project: Pieter, Russ, Stephen, Zico
A method that blends discrete logic and continuous dynamics to leverage the rich contact dynamics of manipulation
C. Chi et al., “Diffusion Policy: Visuomotor Policy Learning via Action Diffusion.” Mar. 09, 2023
N. Doshi et al., “Manipulation of unknown objects via contact configuration regulation.” Jun. 01, 2022
Note: The blue regions are not obstacles.
Mode class 1: Contact between the pusher and a face of the slider
Mode class 2: Non-contact
Motion planning in a contact mode can be formulated as a QCQP:

\[
\min_{x}\; x^\intercal Q_0 x + b_0^\intercal x
\quad \text{s.t.} \quad
x^\intercal Q_i x + b_i^\intercal x + c_i = 0, \quad i = 1, \dots, m,
\]

where the \(Q_i\) are possibly indefinite; hence the problem is nonconvex.
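A minimal numerical sketch of why indefiniteness breaks convexity (the matrix below is illustrative, not from the slides): a quadratic with eigenvalues of mixed sign curves up in one direction and down in another, so it is neither convex nor concave.

```python
import numpy as np

# Hypothetical 2x2 indefinite Q: eigenvalues of mixed sign,
# so f(x) = x^T Q x is neither convex nor concave.
Q = np.array([[1.0, 0.0],
              [0.0, -1.0]])

eigvals = np.linalg.eigvalsh(Q)  # ascending: [-1., 1.] -> indefinite

# Along e1 the quadratic curves up, along e2 it curves down:
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(e1 @ Q @ e1, e2 @ Q @ e2)  # prints: 1.0 -1.0
```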
Lift the problem:
\( x \in \mathbb{R}^n \rightarrow (x, X) \in \mathbb{R}^n \times \mathbb{S}^{n} \)
Equivalent when \( \text{rank}(X) = 1 \iff X = x x^\intercal \)
Otherwise a convex relaxation
\( x \longrightarrow X := x x^\intercal \)
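A quick sketch of the lift identity (random data, purely illustrative): with \(X := xx^\intercal\), every quadratic term \(x^\intercal Q x\) equals \(\operatorname{tr}(QX)\), i.e. it becomes *linear* in the lifted variable \(X\); dropping the rank-1 condition is exactly the convex relaxation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
x = rng.standard_normal(n)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2  # symmetric, possibly indefinite

# Lift: X := x x^T (rank 1 by construction).
X = np.outer(x, x)

# Quadratic terms become LINEAR in the lifted variable X:
#   x^T Q x = trace(Q X)
assert np.isclose(x @ Q @ x, np.trace(Q @ X))

# Equivalence holds exactly when X is rank 1:
assert np.linalg.matrix_rank(X) == 1
```

Relaxing `X = np.outer(x, x)` to the convex constraint \(X \succeq x x^\intercal\) (a semidefinite condition) gives the tractable relaxation described above.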
How to structure the graph and mode transitions?
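One way to picture the question is as building a directed mode graph. The sketch below is hypothetical (mode names, faces, and transition rules are illustrative, not the paper's actual construction): nodes are contact and non-contact modes, edges are allowed transitions.

```python
# Hypothetical mode graph for a four-faced slider.
# Nodes: pusher in contact with face f, or moving freely near face f.
faces = ["left", "right", "top", "bottom"]
contact = [f"contact_{f}" for f in faces]
non_contact = [f"free_{f}" for f in faces]

edges = set()
for f in faces:
    # Make / break contact on the same face:
    edges.add((f"free_{f}", f"contact_{f}"))
    edges.add((f"contact_{f}", f"free_{f}"))
for f in faces:
    for g in faces:
        if f != g:
            # Reposition the pusher between non-contact regions:
            edges.add((f"free_{f}", f"free_{g}"))

print(len(contact) + len(non_contact), "modes,", len(edges), "transitions")
# prints: 8 modes, 20 transitions
```

Each node would carry the (relaxed) convex program for its mode, and planning becomes a shortest-path problem over this graph.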
(The 4× factor is due to the MIQP feedback controller)
(Reported values are means, with medians in parentheses)
Preprint available on arXiv: https://arxiv.org/abs/2402.10312
(Pipeline diagram: Trajectory Generation → Training Data → Policy)
Example: Push-T Task (with a KUKA arm)
Adam Wei