Adapted Solvers For Graphs Of Convex Sets

Alexandre Amice

Amazon Fall 2025

Motivation: GCS is Very Flexible

Object Rearrangement

Dynamic Motion Planning

More Complicated Graphs

More Complicated Sets

Contact Rich Manipulation

Complexity of Solving Methodology

A GCS with:

  1. n vertices
  2. Each vertex has m edges
  3. Each set has dimension d
\mathcal{O}(\text{poly}(nmd))

Current State of the Art

Solver Wishlist

A GCS with:

  1. n vertices
  2. Each vertex has m edges
  3. Each set has dimension d
\mathcal{O}(n(\text{poly}(m)+\text{poly}(d)))

Target Complexity

Also Want

  1. Advances in graph algorithms should transfer
  2. Advances in convex optimization solvers should transfer
  3. Naturally leverage HPC techniques like GPUs/clusters

Solve GCS at Any Scale

Object Rearrangement

Dynamic Motion Planning

More Complicated Graphs

More Complicated Sets

Contact Rich Manipulation

The Journey Of A GCS Problem

Problem

Model

\begin{aligned} \min~& \sum_{e \in \mathcal{E}}y_{e}f_{e}(x_{u},x_{v})\\ \text{subject to }& y_{v}x_{v} \in y_{v}\mathcal{X}_{v} ~\forall v \in \mathcal{V} \\ & y_{e}(x_{u}, x_{v}) \in y_{e}\mathcal{X}_{e} ~ \forall (u,v) \in \mathcal{E} \\ & y \text{ encodes a path },~~ y\in \{0,1\} \end{aligned}
\begin{aligned} \min~& \sum_{e \in \mathcal{E}}c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to }& A_{v}z_{v}-b_{v} y_{v} \in \mathcal{K}_{v} ~\forall v \in \mathcal{V} \\ & A_{e}(z_{e_{u}},z_{e_{v}}) - b_{e}y_{e} \in \mathcal{K}_{e} ~ \forall (u,v) \in \mathcal{E} \\ & A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0,~~ y\in \{0,1\}\ \end{aligned}

Standardize

Form Relaxation

\begin{aligned} \min_{\lambda}~& g^{T}\lambda \\ \text{subject to }~& T\lambda - s \in \mathcal{K} \end{aligned}

Parse To Conic Form

Solve
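
As a purely illustrative picture of what the "Standardize" step produces, here is a minimal sketch of how one GCS instance could be stored in the per-vertex / per-edge conic form A_v z_v - b_v y_v ∈ K_v, A_e(z_{e_u}, z_{e_v}) - b_e y_e ∈ K_e before any parser flattens it. All class and field names are hypothetical, not the API of an actual GCS implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VertexData:
    """Standardized vertex data: A_v z_v - b_v y_v in K_v (hypothetical container)."""
    A: np.ndarray   # constraint matrix on the vertex variable z_v
    b: np.ndarray   # offset scaled by the (relaxed) indicator y_v
    cone: str       # e.g. "nonneg" or "soc"; a real solver would use proper cone objects

@dataclass
class EdgeData:
    """Standardized edge data: A_e (z_eu, z_ev) - b_e y_e in K_e (hypothetical container)."""
    A: np.ndarray
    b: np.ndarray
    cone: str

# Toy instance: two vertices whose sets are the box -1 <= z <= 1 (written as A z - b y >= 0)
# and one edge whose constraint couples the two endpoint variables.
A_box = np.vstack([np.eye(2), -np.eye(2)])
b_box = -np.ones(4)                     # A z - b y >= 0 with y = 1 gives -1 <= z <= 1
vertices = {"u": VertexData(A_box, b_box, "nonneg"),
            "v": VertexData(A_box, b_box, "nonneg")}
edges = {("u", "v"): EdgeData(np.hstack([np.eye(2), -np.eye(2)]), np.zeros(2), "zero")}
# Nothing here is flattened: the graph (dict keys) and the convex data stay factored.
```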

The Journey Of A GCS Problem

Form Relaxation

\begin{aligned} \min_{\lambda}~& g^{T}\lambda \\ \text{subject to }~& T\lambda - s \in \mathcal{K} \end{aligned}

Parse To Conic Form

Destroys Structure

 

\begin{aligned} \min~& \sum_{e \in \mathcal{E}}c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to }& A_{v}z_{v}-b_{v} y_{v} \in \mathcal{K}_{v} ~\forall v \in \mathcal{V} \\ & A_{e}(z_{e_{u}},z_{e_{v}}) - b_{e}y_{e} \in \mathcal{K}_{e} ~ \forall (u,v) \in \mathcal{E} \\ & A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0,~~ y\in \{0,1\}\ \end{aligned}

Form Relaxation

The Structure of The Relaxation

\begin{aligned} \min~& \sum_{e \in \mathcal{E}}c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to }& A_{v}z_{v}-b_{v} y_{v} \in \mathcal{K}_{v} ~\forall v \in \mathcal{V} \\ & A_{e}(z_{e_{u}},z_{e_{v}}) - b_{e}y_{e} \in \mathcal{K}_{e} ~ \forall (u,v) \in \mathcal{E} \\ & A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0,~~ y\in \{0,1\}\ \end{aligned}
\begin{gathered} \min \sum_{e\in\mathcal{E}} c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to} \begin{bmatrix} A_{v} & -b_{v} \end{bmatrix} \begin{bmatrix} z_{v} & z_{e_{1}} & \dots & z_{e_{m}} & x_{v} \\ y_{v} & y_{e_{1}} & \dots & y_{e_{m}} & 1 \end{bmatrix} \begin{bmatrix} C_{v}^{T} \\ -d_{v}^{T} \end{bmatrix} \in \underbrace{ \begin{bmatrix} \mathcal{K}_{v} & \dots & \mathcal{K}_{v} \end{bmatrix} }_{\text{number of incident edges}},~\forall v \in \mathcal{V} \\ \begin{bmatrix} A_{e} & -b_{e} \end{bmatrix} \begin{bmatrix} z_{e_{u}} \\ z_{e_{v}} \\ y_{e} \end{bmatrix} \in \mathcal{K}_{e}, ~\forall e \in \mathcal{E} \\ A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0 \end{gathered}

The relaxation keeps the graph and convex sets factored

The Structure of The Relaxation

Parsing the relaxation to conic form destroys structure

\begin{aligned} \min_{\lambda}~& g^{T}\lambda \\ \text{subject to }~& T\lambda - s \in \mathcal{K} \end{aligned}
  1. Flattens a sparse matrix optimization into a vector optimization.

  2. The vector optimization is denser, more irregularly patterned, and worse conditioned.

  3. The vector optimization obfuscates the roles of the graph and the convex sets.

Lower Bound Work Per Iteration

Parsing the relaxation to conic form destroys structure

\begin{aligned} \min_{\lambda}~& g^{T}\lambda \\ \text{subject to }~& T\lambda - s \in \mathcal{K} \end{aligned}

A GCS with:

  1. n vertices
  2. Each vertex has m edges
  3. Each set has dimension d

Lower Bound (without exploiting structure)

\mathcal{O}(n^{3}m^{3}d^{3})

Lower Bound (exploiting structure)

\mathcal{O}(n(m^{3}+d^{3}))

An ADMM-Based Solver For GCS

Alternating Direction Method of Multipliers

A simple algorithm for solving problems of the form

\begin{aligned} \min~& \sum_{i}f_{i}(x_{i}) + \sum_{j}g_{j}(z_{j}) \\ \text{subject to}~& x_{i} = z_{j} ~~\text{for some pairs } (i,j) \end{aligned}

Algorithm

  1. x_{i}^{k+1} \gets \min_{x_{i}} f_{i}(x_{i}) + \sum_{(i,j)} \frac{\rho}{2}||x_{i} - z_{j}^{k} + \mu_{ij}^{k}||^{2}
  2. z_{j}^{k+1} \gets \min_{z_{j}} g_{j}(z_{j}) + \sum_{(i,j)} \frac{\rho}{2}||x_{i}^{k+1} - z_{j} + \mu_{ij}^{k}||^{2}
  3. \mu_{ij}^{k+1} \gets \mu_{ij}^{k} + (x_{i}^{k+1} - z_{j}^{k+1})
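
A minimal numerical illustration of these three updates (a generic toy instance, not the GCS solver itself): two quadratic local objectives share a single consensus variable, so the fixed point is their average.

```python
import numpy as np

# Toy consensus problem: minimize sum_i 0.5*(x_i - a_i)^2 subject to x_i = z for all i.
# (g(z) = 0 here, so the z-update is just an averaging step.)
a = np.array([1.0, 5.0])   # data defining the two local objectives
rho = 1.0                  # ADMM penalty parameter
x = np.zeros(2)            # local copies x_1, x_2
z = 0.0                    # shared consensus variable
mu = np.zeros(2)           # scaled dual variables, one per consensus constraint

for k in range(100):
    # 1. x-update: closed form for the quadratic local objectives
    x = (a + rho * (z - mu)) / (1.0 + rho)
    # 2. z-update: average of the local copies shifted by the duals
    z = np.mean(x + mu)
    # 3. dual update
    mu = mu + (x - z)

print(x, z)   # all approach mean(a) = 3.0
```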

Alternating Direction Method of Multipliers

\begin{gather*} \min \sum_{e\in\mathcal{E}} c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to} \begin{bmatrix} A_{v} & -b_{v} \end{bmatrix} \begin{bmatrix} z_{v} & z_{e_{1}} & \dots & z_{e_{m}} & x_{v} \\ y_{v} & y_{e_{1}} & \dots & y_{e_{m}} & 1 \end{bmatrix} \begin{bmatrix} C_{v}^{T} \\ -d_{v}^{T} \end{bmatrix} \in \\ \underbrace{ \begin{bmatrix} \mathcal{K}_{v} & \dots & \mathcal{K}_{v} \end{bmatrix} }_{\text{number of incident edges}},~\forall v \in \mathcal{V} \\ \begin{bmatrix} A_{e} & -b_{e} \end{bmatrix} \begin{bmatrix} z_{e_{u}} \\ z_{e_{v}} \\ y_{e} \end{bmatrix} \in \mathcal{K}_{e}, ~\forall e \in \mathcal{E} \\ A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0 \end{gather*}

Alternating Direction Method of Multipliers

\begin{gather*} \min {\color{red} \sum_{e\in\mathcal{E}} c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}^{e}) } \\ \text{subject to} {\color{blue} \begin{bmatrix} A_{v} & -b_{v} \end{bmatrix} \begin{bmatrix} z_{v} & z_{e_{1}} & \dots & z_{e_{m}} & x_{v} \\ y_{v}^{v} & y_{e_{1}}^{v} & \dots & y_{e_{m}}^{v} & 1 \end{bmatrix} \begin{bmatrix} C_{v}^{T} \\ -d_{v}^{T} \end{bmatrix} \in} \\ {\color{blue} \underbrace{ \begin{bmatrix} \mathcal{K}_{v} & \dots & \mathcal{K}_{v} \end{bmatrix} }_{\text{number of incident edges}},~\forall v \in \mathcal{V} } \\ {\color{red} \begin{bmatrix} A_{e} & -b_{e} \end{bmatrix} \begin{bmatrix} z_{e_{u}} \\ z_{e_{v}} \\ y_{e}^{e} \end{bmatrix} \in \mathcal{K}_{e}, ~\forall e \in \mathcal{E}} \\ {\color{orange}A_{\mathcal{P}}{y}-b_{\mathcal{P}} \geq 0} \\ {\color{blue}y_{v}^{v}} = {\color{orange}y_{v}},~ {\color{blue}y_{e}^{v}} = {\color{orange}y_{e}},~ \end{gather*}

Separates the graph problem from the convex problem.

Alternating Direction Method of Multipliers

\begin{gather*} \min {\color{red} \sum_{e\in\mathcal{E}} c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}^{e}) } \\ \text{subject to} {\color{blue} \begin{bmatrix} A_{v} & -b_{v} \end{bmatrix} \begin{bmatrix} z_{v} & z_{e_{1}}^{v} & \dots & z_{e_{m}}^{v} & x_{v} \\ y_{v}^{v} & y_{e_{1}}^{v} & \dots & y_{e_{m}}^{v} & 1 \end{bmatrix} \begin{bmatrix} C_{v}^{T} \\ -d_{v}^{T} \end{bmatrix} \in} \\ {\color{blue} \underbrace{ \begin{bmatrix} \mathcal{K}_{v} & \dots & \mathcal{K}_{v} \end{bmatrix} }_{\text{number of incident edges}},~\forall v \in \mathcal{V} } \\ {\color{red} \begin{bmatrix} A_{e} & -b_{e} \end{bmatrix} \begin{bmatrix} z_{e_{u}}^{e} \\ z_{e_{v}}^{e} \\ y_{e}^{e} \end{bmatrix} \in \mathcal{K}_{e}, ~\forall e \in \mathcal{E}} \\ {\color{orange}A_{\mathcal{P}}{y}-b_{\mathcal{P}} \geq 0} \\ {\color{blue}y_{v}^{v}} = {\color{orange}y_{v}},~ {\color{blue}y_{e}^{v}} = {\color{orange}y_{e}},~ \\ {\color{blue}z_{e_{v}}^{v}} = {\color{red}z_{e_{v}}^{e}} \end{gather*}

Separates the vertices from the edges
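
Read against the generic consensus form \min \sum_{i}f_{i}(x_{i}) + \sum_{j}g_{j}(z_{j}) above, one informal way to parse this splitting (an annotation matching the color coding, not text from the slides) is:

\begin{aligned}
f_{v} &\leftrightarrow \text{the blue per-vertex conic block, acting on the local copies } (z_{v}, z_{e}^{v}, y_{v}^{v}, y_{e}^{v}) \\
g_{e} &\leftrightarrow c_{e}^{T}(z_{e_{u}}^{e}, z_{e_{v}}^{e}, y_{e}^{e}) \text{ plus the red per-edge conic constraint} \\
g_{\mathcal{P}} &\leftrightarrow \text{the orange path polytope } A_{\mathcal{P}}y - b_{\mathcal{P}} \geq 0 \\
x_{i} = z_{j} &\leftrightarrow \text{the consensus constraints } y_{v}^{v} = y_{v},~ y_{e}^{v} = y_{e},~ z_{e_{v}}^{v} = z_{e_{v}}^{e}
\end{aligned}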

Alternating Direction Method of Multipliers

\begin{gather*} \min {\color{red} \sum_{e\in\mathcal{E}} c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}^{e}) } \\ \text{subject to} {\color{blue} \begin{bmatrix} A_{v} & -b_{v} \end{bmatrix} \begin{bmatrix} z_{v} & z_{e_{1}}^{v} & \dots & z_{e_{m}}^{v} & x_{v} \\ y_{v}^{v} & y_{e_{1}}^{v} & \dots & y_{e_{m}}^{v} & 1 \end{bmatrix} \begin{bmatrix} C_{v}^{T} \\ -d_{v}^{T} \end{bmatrix} \in} \\ {\color{blue} \underbrace{ \begin{bmatrix} \mathcal{K}_{v} & \dots & \mathcal{K}_{v} \end{bmatrix} }_{\text{number of incident edges}},~\forall v \in \mathcal{V} } \\ {\color{red} \begin{bmatrix} A_{e} & -b_{e} \end{bmatrix} \begin{bmatrix} z_{e_{u}}^{e} \\ z_{e_{v}}^{e} \\ y_{e}^{e} \end{bmatrix} \in \mathcal{K}_{e}, ~\forall e \in \mathcal{E}} \\ {\color{orange}A_{\mathcal{P}}{y}-b_{\mathcal{P}} \geq 0} \\ {\color{blue}y_{v}^{v}} = {\color{orange}y_{v}},~ {\color{blue}y_{e}^{v}} = {\color{orange}y_{e}},~ \\ {\color{blue}z_{e_{v}}^{v}} = {\color{red}z_{e_{v}}^{e}} \end{gather*}

Algorithm (instantiating the three ADMM updates above)

  1. Solve in parallel a conic program per vertex
  2a. Solve in parallel a conic program per edge
  2b. Solve a graph problem
  3. Update the consensus multipliers \mu
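
The sketch below mimics the per-vertex / per-edge parallel structure of steps 1, 2a, and 3 on a deliberately simplified stand-in: quadratic vertex objectives instead of conic programs, with the graph / path-polytope step (2b) omitted. It only shows the shape of one iteration, not the actual solver.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]        # a small path graph
n = 4
a = np.array([0.0, 2.0, 4.0, 6.0])      # per-vertex data (targets)
rho = 1.0

x = np.zeros(n)                          # per-vertex variables
z = {e: 0.0 for e in edges}              # per-edge consensus variables
mu = {(v, e): 0.0 for e in edges for v in e}   # scaled duals, one per (vertex, edge) pair
incident = {v: [e for e in edges if v in e] for v in range(n)}

for k in range(200):
    # Step 1: per-vertex subproblems (embarrassingly parallel across vertices).
    for v in range(n):
        num = a[v] + rho * sum(z[e] - mu[(v, e)] for e in incident[v])
        x[v] = num / (1.0 + rho * len(incident[v]))
    # Step 2a: per-edge subproblems (embarrassingly parallel across edges).
    for e in edges:
        u, v = e
        z[e] = 0.5 * ((x[u] + mu[(u, e)]) + (x[v] + mu[(v, e)]))
    # Step 3: dual update on every consensus constraint.
    for e in edges:
        for v in e:
            mu[(v, e)] += x[v] - z[e]

print(x)   # all entries approach mean(a) = 3.0
```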

Alternating Direction Method of Multipliers

Wish List

  1. Advances in graph algorithms should transfer
  2. Advances in convex optimization solvers should transfer
  3. Naturally leverage HPC techniques like GPUs/clusters

Alternating Direction Method of Multipliers

A GCS with

  1. n vertices
  2. Each vertex has m edges
  3. Each set has dimension d

requires work per iteration

\mathcal{O}(n\text{poly}(md))

Alternating Direction Method of Multipliers

Additional Features

  1. No need to solve steps 1 and 2 to optimality.
    • Introduces a computation/communication trade-off.
    • Extreme cases can lead to closed-form updates (see the example below).
  2. Naturally supports branch-and-bound cuts.
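
For example, a standard ADMM fact (not specific to this solver): when a block's objective is just the indicator of a convex set \mathcal{C}_{i}, its step-1 update collapses to a Euclidean projection of an averaged point,

x_{i}^{k+1} \gets \Pi_{\mathcal{C}_{i}}\left( \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \left(z_{j}^{k} - \mu_{ij}^{k}\right) \right)

where \mathcal{N}(i) denotes the blocks j paired with x_{i}. Such projections are available in closed form for boxes, the nonnegative orthant, and second-order cones.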

Alternating Direction Method of Multipliers

One Particularly Efficient Choice of Splitting

Requires work per iteration

\mathcal{O}(n(m^{3}+d^{3}))

Runtime Per Iteration

A GCS with

  1. n vertices
  2. Each vertex has m edges
  3. Each set has dimension d
  4. With k processors

Ours: \mathcal{O}(n(m^{3}+d^{3})/k)

SCS/COSMO Solver: \mathcal{O}(n^{3}m^{3}d^{3})
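
For intuition about the gap between the two counts, a quick back-of-the-envelope evaluation with made-up problem sizes (illustrative only, not benchmark results):

```python
# Illustrative comparison of the two per-iteration cost models above
# (made-up problem sizes, not measured data).
n, m, d, k = 1000, 4, 10, 100          # vertices, edges per vertex, set dimension, processors

ours = n * (m**3 + d**3) / k           # O(n (m^3 + d^3) / k)
generic = n**3 * m**3 * d**3           # O(n^3 m^3 d^3)

print(f"structured, parallel: ~{ours:.1e} operations per iteration")
print(f"flattened (generic-solver count): ~{generic:.1e} operations per iteration")
```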

Early Results On Mazes of Increasing Sizes

Solver Coming Soon

With GPU results...

An Interior-Point-Based Solver For GCS

Similar Lessons Carry Through from ADMM to Interior Point

More soon

The Structure of The Relaxation

\begin{aligned} \min~& \sum_{e \in \mathcal{E}}c_{e}^{T}(z_{e_{u}},z_{e_{v}}, y_{e}) \\ \text{subject to }& A_{v}z_{v}-b_{v} y_{v} \in \mathcal{K}_{v} ~\forall v \in \mathcal{V} \\ & A_{e}(z_{e_{u}},z_{e_{v}}) - b_{e}y_{e} \in \mathcal{K}_{e} ~ \forall (u,v) \in \mathcal{E} \\ & A_{\mathcal{P}}y-b_{\mathcal{P}} \geq 0,~~ y\in \{0,1\}\ \end{aligned}

Form Relaxation

\begin{aligned} \min~& \text{tr}\left(C\begin{bmatrix} Z & X\\ y & 1 \end{bmatrix}\right) \\ \text{subject to}~& \left[ \begin{array}{ccc|c} A_{v_{1}} && & -b_{v_{1}}\\ &\ddots& & \vdots\\ &&A_{v_{n}} & -b_{v_{n}} \\ \hline A_{e_{1}}^{u} & \dots & A_{e_{1}}^{v} & -b_{e_{1}} \\ \vdots & \ddots & \vdots & \vdots\\ \dots & A_{e_{m}}^{u} & A_{e_{m}}^{v} & -b_{e_{m}} \\ \hline 0 & \dots & 0 & 1 \end{array} \right] \begin{bmatrix} Z & X\\ y & 1 \end{bmatrix} \begin{bmatrix} A_{\mathcal{P}}^{T}\\ -b_{\mathcal{P}}^{T} \end{bmatrix} \in \begin{bmatrix} \mathcal{K}_{v_{1}} & \dots & \mathcal{K}_{v_{1}} \\ \vdots & \ddots & \vdots \\ \mathcal{K}_{v_{n}} & \dots & \mathcal{K}_{v_{n}} \\ \hline \mathcal{K}_{e_{1}}& \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \mathcal{K}_{e_{m}} \\ \hline \mathbb{R}_{+} & \dots & \mathbb{R}_{+} \end{bmatrix} \end{aligned}
\begin{aligned} \min~& \text{tr}\left(C\begin{bmatrix} Z & X\\ y & 1 \end{bmatrix}\right) \\ \text{subject to}~& \left[ \begin{array}{c|c} \text{block diagonal} & \text{dense} \\ \hline \text{block adjacency} & \text{vector} \\ \hline 0 & 1 \end{array} \right] \begin{bmatrix} Z & X\\ y & 1 \end{bmatrix} \begin{bmatrix} \text{Shortest Path} \\ \text{Problem} \end{bmatrix} \in \begin{bmatrix} \text{Vertex-wise} \\ \text{cones} \\ \hline \text{Diagonal edge-wise} \\ \text{cones} \\ \hline \mathbb{R}_{+} \end{bmatrix} \end{aligned}
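
A small sketch of why keeping this block structure pays off computationally: the vertex blocks form a block-diagonal operator whose storage and factorization cost scale like n·d³ rather than (n·d)³. The sizes and library calls below are illustrative (numpy/scipy), not part of the actual solver.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, d = 5, 3                      # toy: 5 vertices, dimension-3 sets (placeholder sizes)

# Per-vertex constraint blocks A_{v_1}, ..., A_{v_n} (dense only within each block).
vertex_blocks = [rng.standard_normal((d, d)) for _ in range(n)]

# Block-diagonal vertex operator: nnz grows like n*d^2, not (n*d)^2.
A_vertices = sp.block_diag(vertex_blocks, format="csr")
print(A_vertices.shape, A_vertices.nnz, "nonzeros out of", (n * d) ** 2, "entries")

# Working block by block keeps the linear algebra at cost ~ n*d^3 instead of (n*d)^3,
# e.g. a block-wise solve against a stacked right-hand side:
rhs = rng.standard_normal(n * d)
x = np.concatenate([np.linalg.solve(B, rhs[i * d:(i + 1) * d])
                    for i, B in enumerate(vertex_blocks)])
```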

