Experiments
We compare the proposed algorithm (SSE) with several GNN variants that use a fixed, finite number of propagation steps T.
Inductive Learning: the test nodes/graphs are not seen during training.
Transductive Learning: the test nodes are part of the graph used during training; only their labels are held out.
more: https://www.wikiwand.com/en/Transduction_(machine_learning)
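To make the distinction concrete, here is a minimal split sketch (a toy setup of my own, not from the paper): in the transductive case the whole graph is visible and only test labels are hidden, while in the inductive case the test structure itself is withheld.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes = 1000
perm = rng.permutation(num_nodes)
train_nodes, test_nodes = perm[:100], perm[100:]

# Transductive: the full graph (all nodes and edges) is available at
# training time; only the labels of test_nodes are hidden.
train_mask = np.zeros(num_nodes, dtype=bool)
train_mask[train_nodes] = True

# Inductive: the test structure itself is withheld, e.g. train on the
# subgraph induced by train_nodes (or on entirely separate graphs) and
# evaluate on nodes/graphs never seen during training.
```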
Task: identify the connected-component ID of each node.
training: 10% of nodes, with labels
testing: the remaining nodes
Transductive setting:
training: vary the training set size from 10% to 90% of all nodes
testing: reserve a fixed 10% of nodes for held-out evaluation
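The component-ID task has an exact steady-state characterization, which is presumably what the model must learn: repeatedly replacing each node's label with the minimum over its closed neighborhood converges to a fixed point where every node carries its component's smallest node ID. A minimal sketch of that ground-truth iteration (my own illustration using networkx, not the paper's code):

```python
import networkx as nx

def component_ids(g: nx.Graph) -> dict:
    # Iterate the 1-hop fixed-point update: each node takes the minimum
    # label among itself and its neighbors. At the steady state, every
    # node in a component carries that component's smallest node ID.
    label = {v: v for v in g.nodes}
    changed = True
    while changed:
        changed = False
        for v in g.nodes:
            new = min([label[v]] + [label[u] for u in g.neighbors(v)])
            if new != label[v]:
                label[v] = new
                changed = True
    return label

g = nx.gnm_random_graph(50, 40, seed=0)   # toy graph with several components
ids = component_ids(g)
```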
Compared methods:
SSE
S2V-degree
GCN
structure2vec
Transductive setting:
split the nodes equally into training and test sets
Inductive setting:
training is performed on a single graph, and the algorithm must generalize to new graphs drawn from the same distribution
Task: learn the mean-field score for each node of a model defined over a 128×128 lattice, with the model parameters set to be binary values plus a Gaussian perturbation.
(Figure 3c)
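For reference, the steady state being learned here is (as I understand the setup) the classical mean-field update on a lattice Ising-style model; the coupling strength, iteration count, and exact potentials below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mean_field_ising(H, J=0.5, iters=100):
    # Mean-field fixed-point iteration on a 2D lattice: each site's
    # score satisfies mu = tanh(h + J * sum of neighbor scores).
    mu = np.zeros_like(H)
    for _ in range(iters):
        nbr = np.zeros_like(mu)
        nbr[1:, :] += mu[:-1, :]   # neighbor above
        nbr[:-1, :] += mu[1:, :]   # neighbor below
        nbr[:, 1:] += mu[:, :-1]   # neighbor to the left
        nbr[:, :-1] += mu[:, 1:]   # neighbor to the right
        mu = np.tanh(H + J * nbr)
    return mu

# Hypothetical instance of the described setup: binary external fields
# with a small Gaussian perturbation on a 128x128 lattice.
rng = np.random.default_rng(0)
H = rng.choice([-1.0, 1.0], size=(128, 128)) + 0.1 * rng.normal(size=(128, 128))
scores = mean_field_ising(H)
```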
SSE is efficient for large-scale graphs in terms of both convergence speed and execution time, and it trains faster than the other methods.
We presented SSE, an algorithm that can learn many steady-state algorithms over graphs.
Unlike models in the graph neural network family, SSE is trained stochastically and requires only 1-hop information, yet it captures fixed-point relationships efficiently and effectively (a sketch of this update appears at the end of this section).
We demonstrate this on both synthetic and real-world benchmark datasets, with transductive and inductive experiments for learning various graph algorithms.
The algorithm also scales to graphs with up to 100M nodes with much less training effort.
Future work includes investigating the learning of more complicated graph algorithms, as well as distributed training.
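For intuition about the 1-hop stochastic training mentioned above, here is a minimal sketch of an SSE-style embedding update (names like T_op are hypothetical stand-ins; in the real algorithm this alternates with gradient steps on the operator's parameters using the labeled nodes):

```python
import numpy as np

def sse_style_update(h, adj, T_op, alpha=0.1, batch_size=64, steps=1000, seed=0):
    # Stochastic fixed-point iteration: sample a mini-batch of nodes and
    # move each node's embedding toward the operator output computed
    # from its 1-hop neighbors only (a damped, moving-average update).
    rng = np.random.default_rng(seed)
    n = h.shape[0]
    for _ in range(steps):
        for v in rng.choice(n, size=batch_size, replace=False):
            nbrs = adj[v]                       # list of neighbor indices
            if not nbrs:
                continue
            h[v] = (1 - alpha) * h[v] + alpha * T_op(h[nbrs])
    return h

# Example: a fixed mean-pool as a stand-in for the learned operator.
# h = sse_style_update(h, adj, T_op=lambda nb: nb.mean(axis=0))
```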