Short Talk RLG
October 23rd, 2023
Bernhard Paus Graesdal
Disclaimer: Only worked on this for two weeks! Very preliminary
1. Solve large GCS (Graph of Convex Sets) instances
2. Leverage parallel computation
Great reference on ADMM:
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
2. Iteration: For \(k = 0, 1, 2, \ldots\) until convergence, perform the block updates (the generic scaled-form updates from [1] are shown below).
3. Convergence check: Stop the iterations once the stopping criteria are satisfied.
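For reference, the generic scaled-form ADMM iteration from [1], for \( \min_{x, z} \; f(x) + g(z) \) subject to \( Ax + Bz = c \), is:

\[
\begin{aligned}
x^{k+1} &= \operatorname*{argmin}_x \left( f(x) + \tfrac{\rho}{2} \, \lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2 \right) \\
z^{k+1} &= \operatorname*{argmin}_z \left( g(z) + \tfrac{\rho}{2} \, \lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2 \right) \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
\]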
Update steps now take the form:
Two paths:
[Diagram: original formulation \( \longrightarrow \) formulation with local copies, coupled through "consensus constraints"]
The cost now decomposes independently over edges.
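As a concrete template for this reformulation (the notation here is my assumption, not taken from the slides): each edge \( e \in E \) gets its own local copies \( x_e \) of the variables of its endpoint vertices, tied to shared consensus variables \( z \) by the consensus constraints:

\[
\min_{\{x_e\},\, z} \;\; \sum_{e \in E} f_e(x_e)
\qquad \text{s.t.} \qquad x_e^{v} = z_v \quad \forall\, e \in E,\; v \in e,
\]

where \( x_e^{v} \) is edge \( e \)'s local copy of vertex \( v \)'s variable. Since each \( f_e \) touches only its own local copies, the objective splits into independent per-edge terms, which is what enables the parallel edge updates.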
We now have three blocks of variables:
The problem is bi-convex in these blocks: fixing all but one block leaves a tractable subproblem in the remaining one.
3. Discrete update: Reduces to a discrete shortest-path problem (SPP), solvable with Dijkstra (see the sketch after this list)
4. Dual update:
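A minimal sketch of the discrete update, assuming per-edge scalar costs have already been extracted from the current ADMM iterate; the containers `vertices`, `edges`, and `edge_cost` are hypothetical, not from the talk:

```python
import heapq

def dijkstra(vertices, edges, edge_cost, source, target):
    """Shortest path with Dijkstra.

    vertices: iterable of vertex labels.
    edges: dict mapping vertex -> list of neighbor vertices.
    edge_cost: dict mapping (u, v) -> nonnegative edge cost
               (e.g. derived from the current ADMM iterate).
    Assumes target is reachable from source.
    """
    dist = {v: float("inf") for v in vertices}
    prev = {}
    dist[source] = 0.0
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break  # first pop of target is optimal
        if d > dist[u]:
            continue  # stale queue entry
        for v in edges.get(u, []):
            nd = d + edge_cost[(u, v)]
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    # Reconstruct the vertex path from target back to source.
    path, v = [target], target
    while v != source:
        v = prev[v]
        path.append(v)
    return list(reversed(path))
```

For example, `dijkstra({"s", "a", "t"}, {"s": ["a"], "a": ["t"]}, {("s", "a"): 1.0, ("a", "t"): 2.0}, "s", "t")` returns `["s", "a", "t"]`.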
[Diagram: original formulation \( \longrightarrow \) formulation with local copies]
We now have two blocks of variables:
2. Consensus update (on edges): For each edge, compute the consensus variables as "the mean" over the local edge variables (a sketch of this step and the dual update follows below)
3. Dual update:
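A minimal sketch of the consensus and dual updates, assuming each vertex variable is a numpy array and `local_copies[v]` holds one edge-local copy per edge incident to vertex `v`; this data layout is my assumption, not from the talk:

```python
import numpy as np

def consensus_update(local_copies):
    """Consensus step: set each consensus variable to the mean
    of its edge-local copies.

    local_copies: dict mapping vertex -> list of np.ndarray copies
                  (one per incident edge; all the same shape).
    """
    return {v: np.mean(np.stack(copies), axis=0)
            for v, copies in local_copies.items()}

def dual_update(duals, local_copies, consensus):
    """Scaled dual ascent: each dual accumulates the consensus
    violation of its edge-local copy, u <- u + (x_local - z).

    duals: dict keyed by (vertex, copy_index) -- hypothetical layout.
    """
    return {
        (v, i): duals[(v, i)] + (copy - consensus[v])
        for v, copies in local_copies.items()
        for i, copy in enumerate(copies)
    }
```

Averaging the local copies is the closed-form minimizer of the consensus block in this splitting, which is why the slide describes the step as taking "the mean".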
[Figure: solution comparison. Green = GCS solution, red = ADMM solution]