Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization

ICLR 2020 · 10-2-2020


Motivation

NOTE: read this paper again in detail. It looks like a novel direction.

Transferring knowledge across tasks to improve data-efficiency is one of the key open challenges in the field of global black-box optimization. (What about the non-black-box setting?)

Limitation of current methods:

Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks.

Method in the paper:

We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian process surrogate models.

Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency.

Goal: Global Black-Box Optimization Problem

Bayesian Optimization (BO) is typically used for this problem (see the sketch after this list):

  • Probabilistic surrogate model (e.g., GP) to interpolate between data points

  • Sampling strategy (acquisition function, AF) based on surrogate model
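
For concreteness, here is a minimal sketch of this standard BO loop (not the paper's code): a scikit-learn GP surrogate interpolates the observations, and a hand-designed acquisition function (Expected Improvement) scores a random candidate set to pick the next query. The function names, candidate sampling, and hyperparameters are illustrative assumptions.

```python
# Minimal BO loop: GP surrogate + hand-designed AF (Expected Improvement).
# Illustrative sketch only, not the paper's implementation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best_y):
    # Closed-form EI for minimization -- the kind of "universal" AF
    # that MetaBO later replaces with a learned, task-specific one.
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_loop(f, bounds, n_init=5, n_iter=20, n_candidates=1000, seed=0):
    rng = np.random.default_rng(seed)
    d = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)      # surrogate
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, d))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Example: bo_loop(lambda x: float((x[0] - 0.3) ** 2), np.array([[0.0, 1.0]]))
```

In MetaBO, only the acquisition function in this loop changes; the GP surrogate stays as-is.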

Transfer learning is used to increase data-efficiency by transferring knowledge across task instances.

MetaBO

  • Retain the proven structure of BO and keep the powerful GP surrogate model

  • Replace the hand-designed AF with a neural AF to obtain task-specific AFs via transfer learning (see the sketch after this list)

  • Train the neural AF with RL, so no gradients of $f \in \mathcal{F}$ are needed
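
Below is a hedged sketch of the neural-AF idea, assuming a simple per-candidate feature set (posterior mean, posterior std, and budget features) and a plain policy-gradient framing; the paper's actual architecture, features, and PPO-based meta-training differ in detail.

```python
# Sketch of a "neural AF": an MLP scores candidates from GP posterior statistics.
# The feature set and policy-gradient framing are illustrative assumptions.
import torch
import torch.nn as nn

class NeuralAF(nn.Module):
    def __init__(self, in_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):                 # feats: (n_candidates, in_dim)
        return self.net(feats).squeeze(-1)    # one score per candidate

def select_next(af, mu, sigma, t, T):
    # Per-candidate features, e.g. [posterior mean, posterior std, t/T, (T-t)/T].
    feats = torch.stack([mu, sigma,
                         torch.full_like(mu, t / T),
                         torch.full_like(mu, (T - t) / T)], dim=-1)
    probs = torch.softmax(af(feats), dim=0)   # stochastic policy over candidates
    idx = torch.multinomial(probs, 1).item()
    return idx, torch.log(probs[idx])         # log-prob kept for the RL update

# Meta-training (schematic): roll the BO loop out on tasks sampled from a family,
# reward improvements of the best observed value, and update the neural AF with a
# policy-gradient step -- gradients of the objective f itself are never required.
```

At test time, the meta-trained neural AF simply replaces the hand-designed AF (e.g., EI) in the BO loop above.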

References

  • https://openreview.net/forum?id=ryeYpJSKwr

  • https://iclr.cc/virtual_2020/poster_ryeYpJSKwr.html

  • https://metalearning-cvpr2019.github.io/assets/CVPR_2019_Metalearning_Tutorial_Frank_Hutter.pdf