Experiments

We compare the proposed algorithm, SSE, with several GNN variants that use a fixed, finite number of propagation steps T.

$\mathcal{D}^{train}=\{\mathbf{X}^{tr},\mathbf{y}^{tr}\}$, $\mathcal{D}^{test}=\{\mathbf{X}^{ts}\}$

Inductive Learning: $\mathbf{X}^{ts}$ will not be used during training.

Transductive Learning: $\mathbf{X}^{ts}$ will be used during training.
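
A minimal illustration of the two settings as node splits on a single graph (the sizes and names below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical graph with 1,000 nodes and 10% labeled nodes (mirroring the
# connectivity-detection setup below); all names here are illustrative only.
rng = np.random.default_rng(0)
num_nodes = 1000
X = rng.normal(size=(num_nodes, 16))                   # node features
train_mask = np.zeros(num_nodes, dtype=bool)
train_mask[rng.choice(num_nodes, num_nodes // 10, replace=False)] = True

# Transductive: all node features (including the test nodes') are visible
# during training; only the labels of train_mask nodes enter the loss.
X_seen_transductive = X

# Inductive: test-node features are withheld; the model must later generalize
# to nodes (or entirely new graphs) it never saw during training.
X_seen_inductive = X[train_mask]
```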

More on transductive vs. inductive learning: https://www.wikiwand.com/en/Transduction_(machine_learning)

1. Algorithm-learning: Connectivity Detection

Task: identify the connected-component ID of each node (a sketch of the underlying fixed-point computation follows the setup below).

Transductive setting:

  • training: 10% of the nodes are labeled

  • testing: the remaining 90% of the nodes
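
For intuition, the algorithm being imitated here is a simple local fixed-point computation: every node repeatedly takes the minimum label among itself and its 1-hop neighbors until nothing changes, so nodes in the same component end up sharing an ID. A minimal Python sketch of that target computation (illustrative, not the paper's code):

```python
def connected_component_ids(adj):
    """adj: dict mapping node -> list of neighbor nodes (undirected graph)."""
    label = {v: v for v in adj}          # initialize each node with its own id
    changed = True
    while changed:                       # iterate the local update to a fixed point
        changed = False
        for v in adj:
            # Steady-state condition: label[v] = min of labels over {v} ∪ N(v).
            new_label = min([label[v]] + [label[u] for u in adj[v]])
            if new_label != label[v]:
                label[v] = new_label
                changed = True
    return label                         # nodes in one component share a label

# Usage: two components {0, 1, 2} and {3, 4}.
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(connected_component_ids(graph))    # {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```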

2. Algorithm-learning: PageRank

1) Real-world graphs (BlogCatalog and Pubmed)

Transductive setting:

  • training: vary the training set size from 10% to 90% of all nodes

  • testing: a fixed 10% of the nodes is reserved for held-out evaluation

Compared methods:

  • SSE

  • S2V-degree

  • GCN

  • structure2vec

2) Barabasi-Albert random graphs

Transductive setting:

  • split the nodes equally into training and test sets

Inductive setting:

  • training is performed on a single graph, and the algorithm must generalize to new graphs drawn from the same distribution
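
In both PageRank experiments, the steady state being learned is the solution of the standard damped PageRank fixed-point equation, which each node can evaluate from its 1-hop neighbors alone. A minimal sketch of that target computation (illustrative, not the paper's implementation; the damping factor 0.85 is an assumption):

```python
def pagerank(adj, damping=0.85, tol=1e-8):
    """adj: dict node -> list of out-neighbors. Returns steady-state scores."""
    n = len(adj)
    # Precompute in-neighbors so each node only needs 1-hop information.
    in_nbrs = {v: [] for v in adj}
    for u, outs in adj.items():
        for v in outs:
            in_nbrs[v].append(u)
    rank = {v: 1.0 / n for v in adj}                 # uniform initialization
    while True:
        new_rank = {
            # Fixed point: r(v) = (1 - d)/n + d * sum_{u -> v} r(u) / outdeg(u)
            v: (1.0 - damping) / n
               + damping * sum(rank[u] / len(adj[u]) for u in in_nbrs[v])
            for v in adj
        }
        if max(abs(new_rank[v] - rank[v]) for v in adj) < tol:
            return new_rank                          # reached the steady state
        rank = new_rank
```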

3. Algorithm-learning: Mean Field Inference on Graphical Model

Task: learn the mean-field score for each node on a 128×128 lattice, with $\mathbf{x}_v$ set to be binary with a Gaussian perturbation.

(Figure 3c)
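
For reference, naive mean-field inference on a pairwise binary model is itself a local fixed-point iteration over 1-hop neighborhoods. A sketch of that target computation, assuming an Ising-style grid MRF with an illustrative coupling strength (not the paper's exact model):

```python
import numpy as np

def mean_field_lattice(h, coupling=0.5, num_iters=200):
    """Naive mean-field inference for an Ising-style grid MRF.

    h: (H, W) array of unary potentials (e.g. binary inputs plus Gaussian noise).
    Returns m with m[i, j] = E[s_ij] in [-1, 1] at the mean-field fixed point.
    """
    H, W = h.shape
    m = np.zeros((H, W))                        # initial mean values
    for _ in range(num_iters):                  # iterate the fixed-point update
        for i in range(H):
            for j in range(W):
                # Sum of neighboring mean values (1-hop information only).
                nbr = sum(m[a, b]
                          for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                          if 0 <= a < H and 0 <= b < W)
                # Mean-field fixed point: m_v = tanh(h_v + J * sum of neighbor means)
                m[i, j] = np.tanh(h[i, j] + coupling * nbr)
    return m

# Usage on a small lattice: binary signal plus Gaussian perturbation.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(16, 16)) * 2 - 1   # binary values in {-1, +1}
h = x + 0.3 * rng.normal(size=x.shape)
marginals = mean_field_lattice(h)
```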

4. Application: Node Classification

Transductive setting:

Inductive setting:

5. Scalability

  • SSE is efficient on large-scale graphs in terms of both convergence speed and per-update execution time

1) time per update

2) convergence

  • SSE converges faster than the baseline methods

Conclusion

We presented SSE, an algorithm that can learn many steady-state algorithms over graphs.

Unlike models in the graph neural network family, SSE is trained stochastically using only 1-hop information, yet it can capture fixed-point relationships efficiently and effectively.
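
To make this concrete, here is a hedged PyTorch schematic of this kind of stochastic steady-state training: embeddings of sampled nodes are damped toward a learned 1-hop fixed point, and the operator and predictor are then fit on sampled labeled nodes. The operator form, damping factor `alpha`, and alternating schedule below are assumptions for illustration, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SteadyStateOperator(nn.Module):
    """Learned 1-hop operator T; its fixed point defines the node embeddings."""
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.lin = nn.Linear(feat_dim + emb_dim, emb_dim)

    def forward(self, x, nbr_emb_sum):
        # Steady-state condition: h_v = T(x_v, sum of neighbor embeddings).
        return torch.tanh(self.lin(torch.cat([x, nbr_emb_sum], dim=-1)))

def train_step(T, predictor, X, H, adj, labeled, y, opt, alpha=0.1, batch=32):
    """One stochastic step: damp embeddings toward the fixed point, then fit."""
    # 1) Move embeddings of randomly sampled nodes toward the fixed point,
    #    using only their 1-hop neighborhoods (no fixed-T full propagation).
    for v in torch.randint(0, X.size(0), (batch,)).tolist():
        nbr_sum = H[adj[v]].sum(dim=0, keepdim=True)
        with torch.no_grad():
            H[v] = (1 - alpha) * H[v] + alpha * T(X[v:v + 1], nbr_sum)[0]
    # 2) Update the parameters of T and the predictor on sampled labeled nodes.
    idx = labeled[torch.randint(0, len(labeled), (batch,))]
    nbr_sums = torch.stack([H[adj[v]].sum(dim=0) for v in idx.tolist()])
    loss = F.cross_entropy(predictor(T(X[idx], nbr_sums)), y[idx])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Here `X` holds node features, `H` the current (plain, non-differentiable) embedding table, `adj[v]` the neighbor list of node `v`, and `labeled`/`y` the labeled node indices and their labels; the optimizer covers the parameters of `T` and `predictor`.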

We demonstrated this on both synthetic and real-world benchmark datasets, with transductive and inductive experiments on learning various graph algorithms.

The algorithm also scales to graphs with up to 100 million nodes while requiring much less training effort.

Future work includes learning more complicated graph algorithms, as well as distributed training.
