Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization

ICLR 2020, 10-2-2020

Motivation

NOTE: read this paper again in detail. It looks like a novel direction.

Transferring knowledge across tasks to improve data-efficiency is one of the key open challenges in the field of global black-box optimization. (What about non-black-box settings?)

Limitation of current methods:

Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks.

Method in the paper:

We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian process surrogate models.

Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency.

Goal: global black-box optimization problem

Usually Bayesian Optimization (BO) is used for the problem

  • Probabilistic surrogate model (e.g., GP) to interpolate between data points

  • Sampling strategy (acquisition function, AF) based on surrogate model
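The two components above can be sketched as a minimal BO loop. This is an illustrative sketch, not the paper's implementation: it assumes a 1D problem, an RBF-kernel GP with unit prior variance, noiseless observations, expected improvement (EI) as the acquisition function, and a fixed candidate grid instead of a proper inner optimizer.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel (assumed lengthscale ls=0.2, unit variance)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # GP posterior mean/std at query points Xq given data (X, y)
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which a query beats `best`
    from math import erf
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return sigma * (z * Phi + phi)

f = lambda x: np.sin(3 * x) + x**2            # toy black-box objective
X = np.array([0.1, 0.9]); y = f(X)            # initial design
Xq = np.linspace(-1, 1, 200)                  # candidate grid
for _ in range(10):
    mu, sigma = gp_posterior(X, y, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next); y = np.append(y, f(x_next))
best_x = X[np.argmin(y)]
```

The surrogate interpolates between the observed points, and the AF trades off exploitation (low posterior mean) against exploration (high posterior std) to pick the next query.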

Transfer learning is used to increase the data-efficiency by transferring knowledge across task instances

MetaBO

Retain the proven structure of BO, keep the powerful GP surrogate model

Replace the AF part with neural AF to obtain task-specific AFs by transfer learning

Train neural AFs using RL: so no need for gradients of f ∈ F
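The replacement step can be sketched as follows. This is a hypothetical illustration of the interface only: the neural AF is a small MLP that scores each candidate point from GP posterior features (the paper also feeds in budget information); the weights below are random stand-ins for parameters that MetaBO would meta-train with RL (PPO in the paper) across related tasks, which is why no gradients of f are needed.

```python
import numpy as np

# Random stand-in weights; in MetaBO these would come from RL meta-training.
rng = np.random.default_rng(0)
W1, b1 = 0.5 * rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.normal(size=(16, 1)), np.zeros(1)

def neural_af(mu, sigma, t, T):
    # Per-candidate features: GP posterior mean, posterior std,
    # and the fraction of the optimization budget already used.
    feats = np.stack([mu, sigma, np.full_like(mu, t / T)], axis=1)
    h = np.tanh(feats @ W1 + b1)       # hidden layer
    return (h @ W2 + b2).ravel()       # one acquisition score per candidate

# Usage: the neural AF slots in exactly where EI would be maximized
# in standard BO -- pick the candidate with the highest score.
mu = np.array([0.2, -0.1, 0.4])
sigma = np.array([0.05, 0.3, 0.1])
scores = neural_af(mu, sigma, t=3, T=30)
x_next_idx = int(np.argmax(scores))
```

Because the next query is chosen by an argmax over scores, the objective f never needs to be differentiated; RL trains the AF weights from reward signals (e.g., regret) collected on the training tasks.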
