or: "A thousand ways of learning"
Input: theory T
Output: sat (+ model) / unsat (+ proof; reason)
Search: incrementally refine partial structure α
Learn: extend T with φ s.t. T ⊨ φ
Propagation: derive ψ s.t. T ∪ α ⊨ ψ
and ψ forces refinement of α
Conflict: T ∪ α ⊨ ψ and T ∪ α ⊨ ¬ψ
Backjump: weaken α by analyzing φ
"Search language"
The typical search language is simple.
Learning makes it possible to introduce complex concepts
as a set of simple formulas in the search state.
Input: CNF theory T
Search: assign true/false to literals
Learn: extend T with resolvent φ of clauses in T
Propagation: unit clauses ψ under α in T trigger refinement of α
Conflict: for each literal l in a conflict clause,
T ∪ α ⊨ ¬l
Backjump: undo literal assignments until 1UIP learned clause is no longer conflicting
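The loop above can be sketched in a few lines. This toy version (my own naming; DIMACS-style literals, i.e. nonzero ints with -v for ¬v) learns the clause negating the current decisions instead of the 1UIP clause, purely for brevity:

```python
def propagate(clauses, trail):
    """Unit propagation: extend trail in place with forced literals;
    return a falsified clause (the conflict) or None."""
    assigned = set(trail)
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if any(l in assigned for l in c):
                continue                       # clause already satisfied
            free = [l for l in c if -l not in assigned]
            if not free:
                return c                       # every literal false: conflict
            if len(free) == 1:                 # unit clause ψ under α
                assigned.add(free[0])
                trail.append(free[0])
                changed = True
    return None

def solve(clauses, n):
    """Tiny CDCL-style loop over variables 1..n."""
    trail, decisions, learned = [], [], []
    while True:
        conflict = propagate(clauses + learned, trail)
        if conflict is not None:
            if not decisions:
                return "unsat", None           # conflict without decisions
            # learn the clause negating the current decisions (not 1UIP)
            learned.append([-trail[i] for i in decisions])
            del trail[decisions.pop():]        # backjump: undo a decision level
        elif len(trail) == n:
            return "sat", sorted(trail, key=abs)
        else:
            assigned = {abs(l) for l in trail}
            v = next(x for x in range(1, n + 1) if x not in assigned)
            decisions.append(len(trail))       # record the decision level
            trail.append(v)                    # decide: try "true" first

# (¬x1 ∨ ¬x2) ∧ (¬x1 ∨ x2): forces x1 = false
print(solve([[-1, -2], [-1, 2]], 2))           # → ('sat', [-1, 2])
```

After the first conflict under the decision x1 = true, the learned clause (¬x1) becomes unit and flips the decision on backjump.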
Input: logic program T
Grounding: transform T into equivalent CNF T'
Search, Learn, Propagation, Conflict, Backjump:
as in CDCL
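A naive grounder simply instantiates every rule schema over the finite domain; a toy sketch (the atom representation is mine — real grounders are far smarter, and the translation to CNF additionally needs completion and loop formulas):

```python
from itertools import product

def ground(head, body, variables, domain):
    """Instantiate a rule schema over every combination of domain values.
    Atoms are tuples (predicate, term, ...); terms may be variables."""
    def subst(atom, sub):
        return atom[0] + "(" + ",".join(sub.get(t, t) for t in atom[1:]) + ")"
    rules = []
    for values in product(domain, repeat=len(variables)):
        sub = dict(zip(variables, values))      # one ground substitution
        rules.append((subst(head, sub), [subst(b, sub) for b in body]))
    return rules

# p(X) :- q(X)  over the domain {a, b}
print(ground(("p", "X"), [("q", "X")], ["X"], ["a", "b"]))
# → [('p(a)', ['q(a)']), ('p(b)', ['q(b)'])]
```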
Input: rich set of constraints T
Flatten: transform T into equivalent "quantifier-free" T'
Search: simple bounds on variables in T'
Propagation: unit propagation, backed by a clause c s.t. φ ⊨ c for some formula φ in T'
Conflict, Learn, Backjump: as in CDCL
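For instance, a bounds propagator for x = y + z can be read as unit propagation on such a backing clause; a sketch with my own encoding of bound literals:

```python
def propagate_sum(bounds):
    """Bounds propagation for the constraint x = y + z.
    Returns the tightened interval for x together with the bound
    literals that justify it (the negated premises of the backing clause)."""
    ylo, yhi = bounds["y"]
    zlo, zhi = bounds["z"]
    xlo, xhi = bounds["x"]
    new = (max(xlo, ylo + zlo), min(xhi, yhi + zhi))
    # backing clause: ¬(y ≥ ylo) ∨ ¬(z ≥ zlo) ∨ (x ≥ ylo + zlo), and dually
    reason = [("y", ">=", ylo), ("z", ">=", zlo),
              ("y", "<=", yhi), ("z", "<=", zhi)]
    return new, reason

new, reason = propagate_sum({"x": (0, 100), "y": (2, 5), "z": (3, 4)})
print(new)   # → (5, 9)
```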
Input: rich set of constraints T
Search: simple bounds on variables in T
Lazy grounding: transformation of φ into "quantifier-free" φ' triggered by α
Propagation: unit propagation, backed by a clause c s.t. φ' ⊨ c
Conflict, Learn, Backjump: as in CDCL
Input: set of rich theories T1, T2, ... connected to T0 (CNF)
Search: Boolean decisions on T0
Grounding:
Propagation: theory-specific (e.g. Gaussian reasoning), but backed by a clause c s.t. Ti ⊨ c
Conflict, Learn, Backjump: as in CDCL
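As a concrete instance, equality-theory propagation: from asserted x = y and y = z the theory solver derives x = z, and the explaining path yields the backing clause ¬(x=y) ∨ ¬(y=z) ∨ (x=z). A BFS sketch (representation is mine):

```python
from collections import defaultdict, deque

def explain_equality(equalities, query):
    """equalities: asserted pairs (a, b); query: (a, b).
    Return the asserted equalities whose conjunction entails the query
    (the negated premises of the backing clause), or None."""
    adj = defaultdict(list)
    for a, b in equalities:
        adj[a].append((b, (a, b)))
        adj[b].append((a, (a, b)))
    start, goal = query
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while prev[node] is not None:       # walk back along the BFS tree
                node, edge = prev[node]
                path.append(edge)
            return path[::-1]
        for nxt, edge in adj[node]:
            if nxt not in prev:
                prev[nxt] = (node, edge)
                queue.append(nxt)
    return None

print(explain_equality([("x", "y"), ("y", "z")], ("x", "z")))
# → [('x', 'y'), ('y', 'z')]
```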
Input: linear inequalities (cuts) over ℚ and ℤ
objective function O
Search: variable splitting tree,
tracking lower and upper bounds w.r.t. O
Learn special types of cuts:
given
s.t.
derive
Rounding of fractional solution
Is this "conflict learning"?
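The rounding step is the Chvátal–Gomory cut: scale a valid inequality a·x ≤ b by some λ > 0 and round down; for nonnegative integer x, ⌊λa⌋·x ≤ ⌊λb⌋ remains valid because its left-hand side is an integer bounded by λb. A numeric check (example coefficients are mine):

```python
from fractions import Fraction
from math import floor

def cg_cut(coeffs, rhs, lam):
    """Chvátal–Gomory rounding of the valid inequality coeffs·x ≤ rhs,
    assuming x is a nonnegative integer vector and lam > 0."""
    return [floor(lam * a) for a in coeffs], floor(lam * rhs)

# 2x + 2y ≤ 3 with λ = 1/2 gives x + y ≤ 1,
# cutting off the fractional vertex x = y = 3/4
print(cg_cut([2, 2], 3, Fraction(1, 2)))   # → ([1, 1], 1)
```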
Input: T, symmetry group Σ of T
Propagation: triggers symmetric learning
Learn often, forget often!
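The mechanism is cheap: if σ is a symmetry of T (σ(T) = T) and T ⊨ φ, then T ⊨ σ(φ), so every learned clause can be duplicated under σ at no proof cost. A sketch (literal encoding is mine):

```python
def apply_symmetry(sigma, clause):
    """Map a clause through a variable permutation sigma (a symmetry of T).
    Literals are nonzero ints; -v stands for the negation of v."""
    return [(sigma[l] if l > 0 else -sigma[-l]) for l in clause]

# σ swaps x1 and x2: from the learned clause (¬x1 ∨ x3)
# we may also learn its symmetric image (¬x2 ∨ x3)
sigma = {1: 2, 2: 1, 3: 3}
print(apply_symmetry(sigma, [-1, 3]))   # → [-2, 3]
```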
Goal: combine search and CP proof
Not always possible to derive an unsat cut from a conflict:
Theory:
Decisions:
Propagations:
"Resolvent":
Linear theory is always convex!
3D example!
Solution: mix resolution and CP proof system
Learned cut:
Learned clause:
* the learned clause is forgotten after backjumping over its level
Goal: combine search and CP proof
Decide only extremal bounds!
Theory:
Decisions:
with bounded variables:
Same conflict...
Weaken and round propagators until they are "tight":
Weaken:
Resolve:
Round:
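The three steps correspond to standard cutting-planes proof rules over literals ℓᵢ ≥ 0; the slide's concrete coefficients are elided, so here are the generic textbook forms (weakening assumes aⱼ ≥ 0, rounding assumes integer literals):

```latex
\frac{\sum_i a_i \ell_i \ge A}
     {\sum_{i \ne j} a_i \ell_i \ge A - a_j}
\;\text{(weaken on } \ell_j \le 1,\ a_j \ge 0\text{)}
\qquad
\frac{\sum_i a_i \ell_i \ge A \quad \sum_i b_i \ell_i \ge B}
     {\sum_i (a_i + b_i)\,\ell_i \ge A + B}
\;\text{(resolve: add)}
\qquad
\frac{\sum_i a_i \ell_i \ge A}
     {\sum_i \lceil a_i / d \rceil\, \ell_i \ge \lceil A / d \rceil}
\;\text{(round: divide by } d > 0\text{)}
```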
Graphical equivalent:
Graphical equivalent:
Optional: back to 3D
Theory:
Decisions:
with bounded 0-1 variables
<=>
Round-to-one:
Resolve:
<=>
Equivalent to CutSat?