Joint embedding of structure and features via graph convolutional networks
Sébastien Lerique, Jacobo Levy-Abitbol, Márton Karsai
IXXI, École Normale Supérieure de Lyon
A walk through
DeepWalk, LINE, node2vec, ...
Preserve different properties:
Easily answer questions like:
... be tightly connected
... relate through similar interests
... write in similar styles
graph node2vec: \(d_n(u_i, u_j)\)
average user word2vec: \(d_w(u_i, u_j)\)
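The two distances above can be sketched as cosine distances between embedding vectors. The dimensions (128 for node2vec, 300 for word2vec) and the random vectors standing in for real user embeddings are illustrative assumptions:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)

# Hypothetical 128-dim node2vec embeddings of users u_i and u_j
n_i, n_j = rng.normal(size=128), rng.normal(size=128)
d_n = cosine_distance(n_i, n_j)   # structural distance d_n(u_i, u_j)

# Hypothetical 300-dim averaged word2vec vectors of each user's tweets
w_i, w_j = rng.normal(size=300), rng.normal(size=300)
d_w = cosine_distance(w_i, w_j)   # semantic distance d_w(u_i, u_j)
```

Cosine distance lies in [0, 2]; random high-dimensional vectors are near-orthogonal, so both distances land near 1 here.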
Create a task-independent representation of network + features
What is the dependency between network structure and feature structure?
Can we capture that dependency in a representation?
network-feature dependencies
network-feature independence
Use deep learning to create embeddings
Network embeddings → Twitter → network-feature dependencies
Neural networks + Graph convolutions + Auto-encoders
by arranging the building blocks together
[Figure: toy 2-D classification, green vs. red points plotted on x and y axes]
\(H^{(l+1)} = \sigma(H^{(l)}W^{(l)})\)
\(H^{(0)} = X\)
\(H^{(L)} = Z\)
Inspired by colah's blog
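The layer-wise rule \(H^{(l+1)} = \sigma(H^{(l)}W^{(l)})\) is one matrix product plus a nonlinearity per layer. A minimal numpy sketch, with toy sizes (5 nodes, 4 input features, 3 hidden units) and ReLU as \(\sigma\), both assumptions for illustration:

```python
import numpy as np

def sigma(x):
    # ReLU, one common choice for the nonlinearity sigma
    return np.maximum(x, 0)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))    # H^(0) = X: 5 nodes, 4 features each
W0 = rng.normal(size=(4, 3))   # layer weights W^(0)

H1 = sigma(X @ W0)             # H^(1) = sigma(H^(0) W^(0))
```

Stacking L such layers yields the final representation \(H^{(L)} = Z\).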
\(H^{(l+1)} = \sigma(H^{(l)}W^{(l)})\)
\(H^{(0)} = X\)
\(H^{(L)} = Z\)
\(H^{(l+1)} = \sigma(\color{DarkRed}{\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}}H^{(l)}W^{(l)})\)
\(H^{(0)} = X\)
\(H^{(L)} = Z\)
\(\color{DarkGreen}{\tilde{A} = A + I}\)
\(\color{DarkGreen}{\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}}\)
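Putting the pieces together, one GCN propagation step multiplies by the symmetrically normalized adjacency (with self-loops) before the usual weight matrix. A numpy sketch on a 3-node path graph; sizes, features, and ReLU as \(\sigma\) are illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: sigma(D~^(-1/2) A~ D~^(-1/2) H W)."""
    A_tilde = A + np.eye(A.shape[0])            # A~ = A + I (self-loops)
    d = A_tilde.sum(axis=1)                     # D~_ii = sum_j A~_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0)         # ReLU as sigma

# Tiny 3-node path graph 0-1-2, 2 input features, 2 output units
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H0 = np.array([[1., 0.],
               [0., 1.],
               [1., 1.]])
W0 = np.eye(2)
H1 = gcn_layer(A, H0, W0)
```

Each node's new representation mixes its own features with its neighbors', weighted by degree.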
Kipf & Welling (2016)
Four well-marked communities of size 10, small noise
Overlapping communities of size 12, small noise
Two feature communities in a near-clique, small noise
Five well-marked communities of size 20, moderate noise
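Synthetic community graphs like the ones above can be sampled from a stochastic block model. A numpy sketch; the densities `p_in`/`p_out` are assumed values standing in for "well-marked" communities with "small noise":

```python
import numpy as np

def sbm_adjacency(sizes, p_in, p_out, seed=0):
    """Sample a stochastic-block-model adjacency matrix:
    edge probability p_in inside communities, p_out between them."""
    rng = np.random.default_rng(seed)
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = rng.random((n, n)) < probs
    A = np.triu(upper, 1)                      # keep one triangle, no self-loops
    return (A + A.T).astype(float), labels

# E.g. four well-marked communities of size 10, small noise
A, labels = sbm_adjacency([10, 10, 10, 10], p_in=0.9, p_out=0.05)
```

Varying community sizes, overlap, and `p_out` reproduces the other benchmark settings.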
From blog.keras.io
[Diagram: autoencoder compressing high-dimensional input through a low-dimensional bottleneck back to high dimension]
MNIST Examples by Josef Steppan (CC-BY-SA 4.0)
60,000 training images
28×28 pixels
[Diagram: MNIST autoencoder, 784 dims → 2D code → 784 dims]
From blog.keras.io
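The MNIST compression idea (784 dims down to a 2-D code and back) can be sketched as a linear autoencoder trained by plain gradient descent. This is a bare-numpy stand-in, not the Keras version from blog.keras.io; the random data, layer sizes, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 784))       # stand-in for flattened MNIST images

W_enc = rng.normal(scale=0.01, size=(784, 2))   # encoder: 784 -> 2
W_dec = rng.normal(scale=0.01, size=(2, 784))   # decoder: 2 -> 784
lr = 1e-3

for _ in range(100):
    Z = X @ W_enc                     # 2-D code
    X_hat = Z @ W_dec                 # reconstruction
    err = X_hat - X
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

Training minimizes reconstruction error, forcing the 2-D bottleneck to keep the most useful structure of the input.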
Socio-economic status
Language style
Topics
Compressed & combined representation of nodes + network
Kipf & Welling (2016)
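In the spirit of Kipf & Welling's graph auto-encoder, the compressed joint representation can be sketched as: encode features and structure with a GCN step, then decode by reconstructing edge probabilities from inner products of the embeddings. Weights here are random and untrained; graph, feature, and embedding sizes are illustrative:

```python
import numpy as np

def normalize(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
A = np.array([[0., 1., 1., 0.],       # tiny 4-node graph
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = rng.normal(size=(4, 5))           # node features
W = rng.normal(size=(5, 2))           # to-be-trained encoder weights

Z = normalize(A) @ X @ W              # joint 2-D embedding of nodes + network
A_rec = sigmoid(Z @ Z.T)              # decoder: reconstructed edge probabilities
```

Training would push `A_rec` toward `A`, so that the low-dimensional `Z` captures both the feature structure and the network structure at once.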
Five well-marked communities of size 10, moderate label noise
Network embeddings → Twitter → network-feature dependencies
Neural networks + Graph convolutions + Auto-encoders
by arranging the building blocks together
Features finer than network communities
Features coarser than network communities
No correlation
Full correlation
Full overlap
Medium overlap
No overlap
Overlapping model
Reference model