Summarized node feature matrix.
A simple example graph containing 4 nodes
An undirected graph without self-loops
with its corresponding adjacency matrix (symmetric)
A directed graph with a self-loop
with its corresponding adjacency matrix (asymmetric)
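As a concrete sketch of the two adjacency matrices above (the specific node labels and edges here are illustrative, not taken from the original figure):

```python
import numpy as np

# Undirected 4-node graph without self-loops; example edges: 0-1, 0-2, 1-2, 2-3
A_undirected = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    A_undirected[i, j] = 1
    A_undirected[j, i] = 1  # each edge is stored in both directions -> symmetric

# Directed graph with a self-loop on node 0; example edges: 0->0, 0->1, 1->2, 3->2
A_directed = np.zeros((4, 4), dtype=int)
for i, j in [(0, 0), (0, 1), (1, 2), (3, 2)]:
    A_directed[i, j] = 1  # only one direction is stored -> generally asymmetric

assert (A_undirected == A_undirected.T).all()      # symmetric
assert not (A_directed == A_directed.T).all()      # asymmetric
```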
GNNs rely on message passing, in which vertices exchange information with their neighbors by sending "messages" to each other.
Message passing rules describe how node embeddings are learned. A generalized abstract GNN model can be defined as:
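The generalized model is commonly written as follows (a standard formulation of message passing, not copied from the original slide):

```latex
h_v^{(l+1)} = \mathrm{UPDATE}\Big( h_v^{(l)},\; \mathrm{AGGREGATE}\big( \{\, h_u^{(l)} : u \in \mathcal{N}(v) \,\} \big) \Big)
```

where $h_v^{(l)}$ is the embedding of node $v$ at layer $l$, $\mathcal{N}(v)$ is the set of its neighbors, AGGREGATE is a permutation-invariant function (e.g. sum, mean, max), and UPDATE combines the node's current embedding with the aggregated messages.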
A simple visualization of the message-passing process in GNNs
 Kipf and Welling, "Semi-Supervised Classification with Graph Convolutional Networks", ICLR 2017
 Hamilton et al., "Inductive Representation Learning on Large Graphs", NeurIPS 2017
 Xu et al., "How Powerful are Graph Neural Networks?", ICLR 2019
 Veličković et al., "Graph Attention Networks", ICLR 2018
 Ying et al., "Hierarchical Graph Representation Learning with Differentiable Pooling", NeurIPS 2018
Based on aggregation and update functions
Based on tasks
Graph Convolutional Networks (GCNs) were introduced by Kipf and Welling in 2016 at the University of Amsterdam.
GCN implements message passing on the graph as a combination of linear transformations over one-hop neighbourhoods followed by a non-linearity, defined as:
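The layer-wise propagation rule from the GCN paper is H^{(l+1)} = σ(D̃^{-1/2} Ã D̃^{-1/2} H^{(l)} W^{(l)}), where Ã = A + I adds self-loops and D̃ is the degree matrix of Ã. A minimal NumPy sketch of one such layer, with ReLU as the non-linearity (variable names are my own, not from the slide):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D~^{-1/2} A~ D~^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])           # A~ = A + I: add self-loops
    d = A_hat.sum(axis=1)                    # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D~^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # symmetrically normalized adjacency
    return np.maximum(0.0, A_norm @ H @ W)   # ReLU non-linearity
```

Each node's new embedding is thus a degree-normalized average over its one-hop neighbourhood (itself included), linearly transformed by W.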
Liu, Qinghui; Kampffmeyer, Michael; Jenssen, Robert; Salberg, Arnt Børre. "Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images." International Journal of Remote Sensing, vol. 42, no. 16, pp. 6184–6208, doi:10.1080/01431161.2021.1936267, 2021.
Thanks for your attention!
Brian Liu @CRAI 23/9/2021