Learning to learn by gradient descent by gradient descent

NIPS 2016 8-24-2020

Motivation

Current optimization algorithms are still designed by hand. This paper shows how the design of an optimization algorithm can itself be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest automatically.

Idea:

Use meta-learning to learn the optimization method itself (in place of a hand-designed one like gradient descent).

This paper tries to replace the optimizers normally used for neural networks (e.g. Adam, RMSprop, SGD) with a recurrent neural network (RNN). Gradient descent is fundamentally a sequence of update steps, in between which some state (such as momentum or running gradient statistics) must be stored. We can therefore think of an optimizer as a mini-RNN. The idea in this paper is to actually train that RNN instead of using a generic algorithm like Adam or SGD.
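To make the "optimizer as a mini-RNN" view concrete, here is a minimal sketch (not from the paper): a hand-designed optimizer such as SGD with momentum already has the shape of an RNN cell, in that it maps (gradient, state) to (update, new state). The function name and hyperparameters below are illustrative.

```python
# Minimal sketch: SGD with momentum written as a recurrence.
# It maps (gradient, hidden state) -> (update, new hidden state),
# which is exactly the signature of an RNN cell.
def momentum_step(grad, state, lr=0.01, beta=0.9):
    new_state = beta * state + grad   # hidden state: running momentum
    update = -lr * new_state          # proposed parameter update
    return update, new_state

# Toy usage: minimize f(theta) = theta^2 by unrolling the recurrence.
theta, state = 5.0, 0.0
for _ in range(100):
    grad = 2 * theta                  # gradient of theta^2
    update, state = momentum_step(grad, state)
    theta += update
print(theta)                          # approaches 0
```

The learned optimizer keeps this interface but replaces the hand-designed recurrence with an LSTM whose weights are trained.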

There are two distinct neural nets, or parameterized functions. The first is the task-specific neural net, the optimizee. This is the network that performs the original task at hand, which can be anything from regression to image classification. The weights of this network are updated by another neural network, called the optimizer.
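A rough sketch of the two networks, assuming PyTorch; the class names, sizes, and toy quadratic task are illustrative assumptions, not the paper's exact setup. The optimizee owns the task parameters; the optimizer is a small LSTM applied coordinatewise to the optimizee's gradients, proposing an update for each weight.

```python
import torch
import torch.nn as nn

class Optimizee(nn.Module):
    """Task-specific network; here just a toy quadratic f(theta) = ||W theta - y||^2."""
    def __init__(self, dim=10):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(dim))
        self.W = torch.randn(dim, dim)
        self.y = torch.randn(dim)

    def loss(self):
        return ((self.W @ self.theta - self.y) ** 2).sum()

class LSTMOptimizer(nn.Module):
    """Learned optimizer: an LSTM cell shared across coordinates, mapping gradient -> update."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)   # one input feature per coordinate: its gradient
        self.head = nn.Linear(hidden, 1)     # one output per coordinate: its proposed update

    def forward(self, grad, state):
        # grad: (n_coords, 1); each coordinate is treated as a "batch" element,
        # so the same LSTM weights are shared across all optimizee parameters.
        h, c = self.cell(grad, state)
        return self.head(h), (h, c)

# One update step of the optimizee, driven by the (untrained) optimizer.
task, opt = Optimizee(dim=10), LSTMOptimizer()
grad = torch.autograd.grad(task.loss(), task.theta)[0]
update, state = opt(grad.unsqueeze(1), None)   # None -> zero initial LSTM state
with torch.no_grad():
    task.theta += update.squeeze(1)
```

During meta-training the update step would not be wrapped in `torch.no_grad()`: the update has to stay in the computation graph so that gradients can flow back into the optimizer's own parameters (see the sketch further below).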

The loss of the optimizer is the sum of the losses of the optimizee as it learns. The paper includes a per-step weight $w_t$, but sets every weight to 1, so the objective is indeed just the sum:

$$
\mathcal{L}(\phi) = \mathbb{E}_f\!\left[\sum_{t=1}^{T} w_t\, f(\theta_t)\right]
$$

where

$$
\theta_{t+1} = \theta_t + g_t, \qquad
\begin{bmatrix} g_t \\ h_{t+1} \end{bmatrix} = m(\nabla_t, h_t, \phi), \qquad
\nabla_t = \nabla_\theta f(\theta_t),
$$

with $f$ the optimizee's loss, $\theta_t$ its parameters at step $t$, $m$ the optimizer RNN with parameters $\phi$ and hidden state $h_t$, and $g_t$ the update it proposes.

In other words, the loss of the optimizer network is simply the optimizee's training loss accumulated while it is being trained by the optimizer. At each step the optimizer takes in the gradient of each coordinate of the optimizee as well as its own previous hidden state, and outputs a suggested update $g_t$ that we hope reduces the optimizee's loss as quickly as possible.
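Putting the pieces together, here is a minimal sketch of the meta-training loop, again assuming PyTorch; the toy quadratic task, unroll length, and hyperparameters are assumptions for illustration. It unrolls the optimizee for $T$ steps, sums the optimizee's losses with all weights $w_t = 1$, and backpropagates that sum into the optimizer's parameters $\phi$. Following the paper, the gradients fed to the optimizer are detached so that second derivatives are ignored.

```python
import torch
import torch.nn as nn

dim, hidden, T = 10, 20, 20

# Learned optimizer m(grad_t, h_t; phi): a coordinatewise LSTM cell plus a linear head.
cell = nn.LSTMCell(1, hidden)
head = nn.Linear(hidden, 1)
phi = list(cell.parameters()) + list(head.parameters())
meta_opt = torch.optim.Adam(phi, lr=1e-3)

for meta_step in range(100):
    # Sample a fresh optimizee task f(theta) = ||W theta - y||^2 and initial parameters.
    W, y = torch.randn(dim, dim), torch.randn(dim)
    theta = torch.randn(dim, requires_grad=True)
    state = (torch.zeros(dim, hidden), torch.zeros(dim, hidden))

    meta_loss = 0.0
    for t in range(T):
        f = ((W @ theta - y) ** 2).sum()          # optimizee loss f(theta_t)
        meta_loss = meta_loss + f                 # w_t = 1 for every step
        # Gradient w.r.t. theta, detached: second derivatives are dropped, as in the paper.
        grad = torch.autograd.grad(f, theta, retain_graph=True)[0].detach()
        h, c = cell(grad.unsqueeze(1), state)     # one LSTM step, shared over coordinates
        update = head(h).squeeze(1)               # proposed update g_t
        state = (h, c)
        theta = theta + update                    # theta_{t+1} = theta_t + g_t (kept in graph)

    meta_opt.zero_grad()
    meta_loss.backward()                          # gradients flow into phi through the updates
    meta_opt.step()
```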

NOTE:

Use meta-learning to learn algorithms (like in the anomaly detection algorithm).

Connection to other papers: related to the steepest gradient descent work previously presented by Xiaofan.

Reference

Andrychowicz, M., Denil, M., Gómez Colmenarejo, S., Hoffman, M. W., Pfau, D., Schaul, T., Shillingford, B., and de Freitas, N. "Learning to learn by gradient descent by gradient descent." NIPS 2016.
