Continuous Meta-Learning without Tasks

NeurIPS 2020 · 10-9-2020

Motivation

Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks.

However, the meta-learning literature has thus far focused on the task-segmented setting: at train time, offline data is assumed to be split according to the underlying task, and at test time, the algorithm is optimized to learn within a single task.

In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.

We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme.

Problem Statement

We assume access to a representative time series generated in the same manner (i.e., from the same distribution of tasks), and use this time series for optimization in an offline meta-training phase.

Critically, however, in stark contrast to standard meta-learning approaches, we do not assume access to task segmentation.

Moreover, we highlight that we consider the case of individual data points provided sequentially, in contrast to the common “k-shot, n-way” problem setting prevalent in few-shot learning (especially classification).
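To make the problem setting concrete, below is a minimal, hypothetical sketch of such a task-unsegmented stream (the sine-regression task family, the function name, and all parameters are illustrative assumptions, not from the paper): a latent task parameter switches at unobserved changepoints, and the learner only ever receives one (x_t, y_t) pair at a time, with no task labels or segment boundaries.

```python
import numpy as np

def sample_stream(T=500, hazard=0.05, seed=0):
    """Hypothetical task-unsegmented stream: a latent sine-phase task
    switches with probability `hazard` at every step, but the learner
    only observes the (x_t, y_t) pairs, never the switch times."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    task = rng.uniform(0, 2 * np.pi)           # latent task parameter
    for _ in range(T):
        if rng.random() < hazard:              # unobserved changepoint
            task = rng.uniform(0, 2 * np.pi)
        x = rng.uniform(-5.0, 5.0)
        y = np.sin(x + task) + 0.1 * rng.normal()
        xs.append(x)
        ys.append(y)
    return np.asarray(xs), np.asarray(ys)      # no task labels returned
```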

Bayesian Online Changepoint Detection (BOCPD)

We build on Bayesian online changepoint detection (Adams & MacKay, 2007), an approach for detecting changepoints (i.e. task switches) originally presented in a streaming unconditional density estimation context.

BOCPD operates by maintaining a belief distribution over run lengths, i.e., over how many of the past data points y_t correspond to the current task.

In this work, we extend this approach of Adams & MacKay (2007) beyond Bayesian unconditional density estimation to apply to general meta-learning models operating in the conditional density estimation setting.
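For reference, here is a minimal sketch of the BOCPD run-length recursion for a 1-D Gaussian stream with known noise variance and a conjugate Normal prior over each segment's mean; the model choice and function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def bocpd_gaussian(y, hazard=0.01, mu0=0.0, var0=1.0, var_x=1.0):
    """Minimal BOCPD recursion (Adams & MacKay, 2007) for a 1-D Gaussian
    stream with known noise variance `var_x` and a Normal(mu0, var0) prior
    on each segment's mean. Returns R with R[t, r] = p(run length r | y[:t])."""
    T = len(y)
    log_R = np.full((T + 1, T + 1), -np.inf)
    log_R[0, 0] = 0.0                              # before any data: run length 0
    mu, var = np.array([mu0]), np.array([var0])    # per-run-length posterior over the mean
    for t, y_t in enumerate(y):
        # predictive likelihood of y_t under every run-length hypothesis
        log_pred = stats.norm.logpdf(y_t, loc=mu, scale=np.sqrt(var + var_x))
        # growth: the current segment continues (run length r -> r + 1)
        log_R[t + 1, 1:t + 2] = log_R[t, :t + 1] + log_pred + np.log(1 - hazard)
        # changepoint: mass from every run length collapses to run length 0
        log_R[t + 1, 0] = np.logaddexp.reduce(log_R[t, :t + 1] + log_pred) + np.log(hazard)
        log_R[t + 1] -= np.logaddexp.reduce(log_R[t + 1])   # normalize
        # conjugate Normal update of every hypothesis, then prepend the prior for r = 0
        post_var = 1.0 / (1.0 / var + 1.0 / var_x)
        post_mu = post_var * (mu / var + y_t / var_x)
        mu = np.concatenate(([mu0], post_mu))
        var = np.concatenate(([var0], post_var))
    return np.exp(log_R)
```

The point that matters for MOCA is that this recursion is just a sequence of smooth operations on predictive likelihoods, so it can be differentiated through.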

Overview of MOCA

MOCA wraps a base meta-learning model with the BOCPD machinery above: it maintains a belief over run lengths together with the base learner's posterior statistics under each run-length hypothesis, and makes predictions by marginalizing over this belief. Because the changepoint scheme is differentiable, the combined model can be meta-trained directly on unsegmented time series.

Meta-Learning via Online Changepoint Analysis

NOTE: Add more about the connection with meta-learning, online learning, and continual learning.
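To connect the two pieces, here is a hypothetical sketch of one MOCA filtering step. The `base_learner` interface (`prior_stats`, `update`, `log_predictive`) and all names are illustrative assumptions rather than the paper's code; the idea is that the run-length belief is propagated and conditioned BOCPD-style using the base meta-learner's predictive likelihoods, and the marginal log-likelihood of each incoming point serves as the differentiable training signal.

```python
import torch

def moca_step(log_belief, etas, x_t, y_t, base_learner, hazard):
    """Hypothetical sketch of one MOCA filtering step. `log_belief[i]` is the
    log belief in run-length hypothesis i and `etas[i]` the base meta-learner's
    posterior statistics under that hypothesis (interface names are assumed)."""
    h = torch.tensor(hazard)
    # 1) propagate the belief through the changepoint prior: a new segment
    #    starts with probability `hazard`, otherwise the current run grows
    log_prior_belief = torch.cat([torch.log(h).unsqueeze(0),
                                  torch.log1p(-h) + log_belief])
    etas = [base_learner.prior_stats()] + list(etas)   # new hypothesis uses the prior
    # 2) predict: marginalize the base learner's predictive over run lengths;
    #    this log-likelihood is the differentiable meta-training loss term
    log_pred = torch.stack([base_learner.log_predictive(eta, x_t, y_t)
                            for eta in etas])
    log_p_y = torch.logsumexp(log_prior_belief + log_pred, dim=0)
    # 3) condition the belief on (x_t, y_t) and update every hypothesis
    new_log_belief = log_prior_belief + log_pred - log_p_y
    new_etas = [base_learner.update(eta, x_t, y_t) for eta in etas]
    return log_p_y, new_log_belief, new_etas
```

Summing the returned log_p_y terms over the training stream gives the meta-training objective; because every step is differentiable, the base meta-learner can be trained end-to-end by backpropagating through the filter.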

Reference

Details can be found in:

  • Continuous Meta-Learning without Tasks (latest version): https://arxiv.org/pdf/1912.08866.pdf
  • Gregory Gundersen's blog post on BOCPD: http://gregorygundersen.com/blog/2019/08/13/bocd/