Attention fusion: An attention module assigns weights to each hop’s embedding.
Final embedding: A weighted sum of multi-hop embeddings forms the node representation.
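A minimal PyTorch sketch of this fusion step is shown below. It assumes the per-hop embeddings have already been produced by separate propagation layers and stacked into one tensor; the module name HopAttentionFusion and the single linear scoring layer are illustrative choices, not a specific published implementation.

```python
import torch
import torch.nn as nn

class HopAttentionFusion(nn.Module):
    """Fuse per-hop node embeddings with learned attention weights.

    Hypothetical module: assumes the per-hop embeddings (k = 1..K) were
    already computed by separate message-passing layers.
    """
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # scores each hop's embedding

    def forward(self, hop_embeddings):
        # hop_embeddings: [num_nodes, K, dim], one embedding per hop
        scores = self.score(hop_embeddings)         # [num_nodes, K, 1]
        alpha = torch.softmax(scores, dim=1)        # attention weights over hops
        return (alpha * hop_embeddings).sum(dim=1)  # weighted sum -> [num_nodes, dim]


# Usage: 5 nodes, 3 hops, 16-dimensional embeddings
fusion = HopAttentionFusion(16)
h = torch.randn(5, 3, 16)
z = fusion(h)   # [5, 16] final node representations
```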
Provably Powerful Graph Networks (PPGNs)
Classical GNNs cannot capture higher-order structures (like cliques, cycles, regularity) that go beyond local neighborhoods.
They are designed to be at least as powerful as the 3-dimensional Weisfeiler–Lehman test (3-WL).
This means they can distinguish graphs that 1-WL (and hence standard GNNs) fail to separate.
In practice, they can capture higher-order interactions like:
Detecting whether a graph is regular
Recognizing graph symmetries
Counting small substructures (motifs, cliques, cycles)
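The sketch below shows roughly how a PPGN block achieves this, following the matrix-multiplication idea of Maron et al. (2019): the graph is kept as a dense n × n feature tensor, and multiplying two MLP-transformed copies of it mixes information along length-2 paths, which plain neighborhood aggregation cannot do. The PPGNBlock name, layer sizes, and use of 1×1 convolutions as per-entry MLPs are illustrative simplifications of a single block, not the full published architecture.

```python
import torch
import torch.nn as nn

class PPGNBlock(nn.Module):
    """Simplified sketch of one PPGN-style block (after Maron et al., 2019).

    The graph is held as a dense tensor x of shape [batch, channels, n, n]
    (node/edge features on an n x n grid). The channel-wise matrix product
    between two transformed copies of x injects higher-order interactions.
    """
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        # 1x1 convolutions act as per-entry MLPs over the feature channels
        self.mlp1 = nn.Conv2d(in_dim, hid_dim, kernel_size=1)
        self.mlp2 = nn.Conv2d(in_dim, hid_dim, kernel_size=1)
        self.mlp3 = nn.Conv2d(in_dim + hid_dim, hid_dim, kernel_size=1)

    def forward(self, x):                   # x: [batch, in_dim, n, n]
        m1 = torch.relu(self.mlp1(x))       # [batch, hid_dim, n, n]
        m2 = torch.relu(self.mlp2(x))       # [batch, hid_dim, n, n]
        mult = torch.matmul(m1, m2)         # channel-wise n x n matrix product
        return self.mlp3(torch.cat([x, mult], dim=1))


# Usage: batch of 2 graphs, 6 nodes each, 4 input channels
block = PPGNBlock(in_dim=4, hid_dim=8)
x = torch.randn(2, 4, 6, 6)
y = block(x)   # [2, 8, 6, 6]
```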
K-Hop GNN vs Multi-Hop Attention GNN vs PPGN
k-hop GNN (left): gathers all nodes within k hops (green) of the target node (orange).
Multi-Hop Attention GNN (middle): still considers multi-hop neighbors, but pays different levels of attention (big green node = more important, small green = less important).
Provably Powerful GNN (right): doesn’t just aggregate neighbors — it also considers higher-order relationships (red arrows show interactions between pairs of nodes).
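To make the left panel concrete, here is a small self-contained sketch of what "gathers all nodes within k hops" means, using a breadth-first search over an adjacency list; k_hop_neighbors is a hypothetical helper written purely for illustration.

```python
from collections import deque

def k_hop_neighbors(adj, target, k):
    """Return all nodes within k hops of `target` (the green nodes in the left panel).

    `adj` is a plain adjacency list: {node: [neighbors, ...]}.
    """
    seen = {target}
    frontier = deque([(target, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:          # stop expanding once we reach the k-hop boundary
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    seen.discard(target)       # the target node itself is not a neighbor
    return seen


# Example: a path graph 0-1-2-3-4, 2-hop neighborhood of node 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(k_hop_neighbors(adj, target=0, k=2))   # {1, 2}
```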