Experiments

We compare the proposed algorithm with several GNN variants that use a fixed, finite number of propagation steps T.

$\mathcal{D}^{train}=\{\mathbf{X}^{tr},\mathbf{y}^{tr}\}$, $\mathcal{D}^{test}=\{\mathbf{X}^{ts}\}$

Inductive Learning: $\mathbf{X}^{ts}$ will not be used during training.

Transductive Learning: $\mathbf{X}^{ts}$ will be used during training.

more: https://www.wikiwand.com/en/Transduction_(machine_learning)
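
As a toy illustration of the two settings (not from the paper; all names and sizes below are made up), the only difference is whether the test-node features are visible during training:

```python
import numpy as np

# Toy node-level dataset: features X and labels y for one graph.
rng = np.random.default_rng(0)
num_nodes = 1000
X = rng.normal(size=(num_nodes, 16))       # node features
y = rng.integers(0, 2, size=num_nodes)     # node labels

idx = rng.permutation(num_nodes)
train_idx, test_idx = idx[:100], idx[100:]

# Transductive: training may read the full feature matrix X, including
# X[test_idx], but only the labels y[train_idx].
X_transductive = X
# Inductive: X[test_idx] is withheld entirely until evaluation time.
X_inductive = X[train_idx]
y_train = y[train_idx]
```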

1. Algorithm-learning: Connectivity Detection

Task: identify the connected-component ID of a given node.

Transductive setting:

  • training: 10% of the nodes, with labels

  • testing: the remaining 90%
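
Connectivity detection has a natural steady-state formulation: each node repeatedly takes the minimum label among itself and its neighbors, and at the fixed point every node carries the smallest node ID in its component. A minimal sketch of this target fixed point (the function and toy graph are illustrative, not from the paper):

```python
import numpy as np

def component_ids(adj_list):
    """Steady-state iteration for connectivity: each node repeatedly takes
    the minimum label among itself and its neighbors. The fixed point
    assigns every node the smallest node ID in its connected component."""
    n = len(adj_list)
    label = np.arange(n)                 # initialize each node with its own ID
    changed = True
    while changed:                       # iterate until the fixed point
        changed = False
        for v in range(n):
            m = min([label[v]] + [label[u] for u in adj_list[v]])
            if m < label[v]:
                label[v] = m
                changed = True
    return label

# Two components: {0, 1, 2} and {3, 4}.
adj = [[1], [0, 2], [1], [4], [3]]
print(component_ids(adj))                # -> [0 0 0 3 3]
```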

2. Algorithm-learning: PageRank

1) Real-world graphs (BlogCatalog and Pubmed)

Transductive setting:

  • training: training set size varied from 10% to 90% of all nodes

  • testing: 10% of the nodes reserved for held-out evaluation

Methods compared:

  • SSE

  • S2V-degree

  • GCN

  • structure2vec
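
PageRank is itself a fixed-point computation, which is what makes it a natural steady-state target. A minimal power-iteration sketch of the scores the models are trained to reproduce (the helper and toy graph are illustrative, not from the paper):

```python
import numpy as np

def pagerank(adj_list, d=0.85, tol=1e-8):
    """Power iteration for the PageRank fixed point:
    r_v = (1 - d)/N + d * sum_{u -> v} r_u / out_deg(u)."""
    n = len(adj_list)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = np.full(n, (1.0 - d) / n)
        for u, outs in enumerate(adj_list):
            if outs:
                share = d * r[u] / len(outs)
                for v in outs:
                    r_new[v] += share
            else:                        # dangling node: spread mass uniformly
                r_new += d * r[u] / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

adj = [[1, 2], [2], [0]]                 # edges 0->1, 0->2, 1->2, 2->0
print(pagerank(adj))
```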

2) Barabási-Albert random graphs

Transductive setting:

  • split the nodes equally into training and test sets

Inductive setting:

  • training is performed on a single graph, and the learned algorithm must generalize to new graphs drawn from the same distribution
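
A sketch of how such an inductive split could be generated, assuming the networkx library is available (the graph sizes and seeds are illustrative, not the paper's):

```python
import networkx as nx

# Train on one Barabási-Albert graph; evaluate on fresh graphs drawn
# from the same distribution (same n and attachment parameter m).
train_graph = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
test_graphs = [nx.barabasi_albert_graph(n=1000, m=2, seed=s) for s in range(1, 6)]
```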

3. Algorithm-learning: Mean Field Inference on Graphical Model

Task: learn the mean-field score for each node on a 128×128 lattice, where $\mathbf{x}_v$ is binary with a Gaussian perturbation.

(see Figure 3c of the paper)
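
For reference, a minimal sketch of a mean-field fixed point on a 4-connected binary lattice, assuming a simple Ising-style potential; the coupling strength, noise scale, and function name are assumptions, not the paper's exact model:

```python
import numpy as np

def mean_field(bias, coupling=1.0, tol=1e-6, max_iters=500):
    """Mean-field fixed point for a binary MRF on a 2D lattice:
    q_v = sigmoid(bias_v + coupling * sum_{u in N(v)} (2*q_u - 1)),
    where q_v approximates p(x_v = 1)."""
    q = np.full(bias.shape, 0.5)
    for _ in range(max_iters):
        m = 2.0 * q - 1.0                # neighbor "magnetizations"
        nbr = np.zeros_like(q)           # sum over the 4-connected grid
        nbr[1:, :] += m[:-1, :]
        nbr[:-1, :] += m[1:, :]
        nbr[:, 1:] += m[:, :-1]
        nbr[:, :-1] += m[:, 1:]
        q_new = 1.0 / (1.0 + np.exp(-(bias + coupling * nbr)))
        if np.abs(q_new - q).max() < tol:
            return q_new
        q = q_new
    return q

# Binary field plus a Gaussian perturbation, as in the task description.
rng = np.random.default_rng(0)
bias = rng.choice([-1.0, 1.0], size=(128, 128)) + rng.normal(scale=0.5, size=(128, 128))
scores = mean_field(bias)
```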

4. Application: Node Classification

Evaluated in both transductive and inductive settings.

5. Scalability

  • SSE is efficient on large-scale graphs in terms of both convergence speed and execution time, measured by:

1) time per update

2) convergence

  • SSE converges faster than the baseline methods

Conclusion

We presented SSE, an algorithm that can learn many steady-state algorithms over graphs.

Unlike models in the graph neural network family, SSE is trained stochastically and requires only 1-hop information, yet it captures fixed-point relationships efficiently and effectively.
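
A hypothetical sketch of what such a stochastic 1-hop update could look like (the function name, damping factor, and toy operator are assumptions, not the paper's implementation):

```python
import numpy as np

def sse_style_sweep(h, adj_list, T_op, alpha=0.5, batch_size=32, rng=None):
    """One stochastic sweep: sample a mini-batch of nodes and damp each
    embedding toward the output of a 1-hop operator applied to its
    neighbors, instead of unrolling T propagation steps over the graph."""
    if rng is None:
        rng = np.random.default_rng()
    batch = rng.choice(len(adj_list), size=min(batch_size, len(adj_list)),
                       replace=False)
    for v in batch:
        nbrs = h[adj_list[v]]            # 1-hop information only
        h[v] = (1 - alpha) * h[v] + alpha * T_op(nbrs)
    return h

# Toy 1-hop operator: neighbor mean (a real operator would be learned).
def T_op(nbrs):
    return nbrs.mean(axis=0) if len(nbrs) else 0.0

adj = [[1], [0, 2], [1]]
h = np.zeros((3, 4))
h = sse_style_sweep(h, adj, T_op, batch_size=3)
```

Each sweep touches only the sampled nodes and their immediate neighbors, which is what lets training avoid full-graph propagation.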

We demonstrated this on both synthetic and real-world benchmark datasets, with transductive and inductive experiments for learning various graph algorithms.

The algorithm also scales to graphs with up to 100 million nodes while requiring much less training effort.

Future work includes learning more complicated graph algorithms, as well as distributed training.
