DARTS: Differentiable Architecture Search

ICLR 2019 8/19/2020

Motivation

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.

The best existing architecture search algorithms are computationally demanding despite their remarkable performance.

Conventional approaches search over a discrete and non-differentiable space; obtaining a state-of-the-art architecture for CIFAR-10 and ImageNet costs:

  • Evolution: 3150 GPU days

  • Reinforcement learning: 2000 GPU days

Method in the paper:

  • based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.

  • a novel algorithm for differentiable network architecture search based on bilevel optimization, which is applicable to both convolutional and recurrent architectures.

  • remarkable efficiency improvement (reducing the cost of architecture discovery to a few GPU days)

GOAL: find the optimal operation on each edge

Cell: A cell is a directed acyclic graph consisting of an ordered sequence of N nodes. For example, [0,1,2] in Fig.1

Node: each node $x^{(i)}$ is a latent representation (e.g., a feature map in a CNN)

Edge: each directed edge $(i, j)$ is associated with some operation $o^{(i,j)}$ that transforms $x^{(i)}$

Note: the candidate operations on each edge (i.e., between two nodes) include:

  • convolution

  • max pooling

  • zero operation: a lack of connection between two nodes

Assumption: the cell has two input nodes and one single output node.

  • input nodes:

    • For convolutional cells: the input nodes are defined as the cell outputs in the previous two layers.

      • example: in cell [0,1,2], nodes 0 and 1 are the input nodes and node 2 is the output node

    • For recurrent cells: input nodes are the input at the current step and the state carried from the previous step.

  • output:

    • the output of the cell is obtained by applying a reduction operation (e.g., concatenation) to all the intermediate nodes.
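
Putting the definitions above together, a cell computes each intermediate node as the sum of the transformed outputs of all its predecessors and concatenates the intermediate nodes to form its output. The following is a minimal sketch of that wiring (not the paper's code); `CellSketch`, `make_edge_op`, and the identity edge operations are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of the cell wiring, assuming 2 input nodes, N intermediate nodes,
# and an output that concatenates the intermediate nodes along the channel dim.
# `make_edge_op` stands in for whichever operation lives on each edge.
class CellSketch(nn.Module):
    def __init__(self, num_intermediate, make_edge_op):
        super().__init__()
        self.num_intermediate = num_intermediate
        # one operation per directed edge (i, j) with i < j
        self.ops = nn.ModuleDict({
            f"{i}->{j}": make_edge_op()
            for j in range(2, 2 + num_intermediate)  # intermediate nodes
            for i in range(j)                        # all previous nodes (incl. the 2 inputs)
        })

    def forward(self, s0, s1):
        states = [s0, s1]  # the two input nodes (outputs of the previous two cells)
        for j in range(2, 2 + self.num_intermediate):
            # each intermediate node sums the transformed outputs of its predecessors
            states.append(sum(self.ops[f"{i}->{j}"](states[i]) for i in range(j)))
        # reduction: concatenate all intermediate nodes
        return torch.cat(states[2:], dim=1)

# usage with identity edge ops, just to show the graph structure
cell = CellSketch(num_intermediate=3, make_edge_op=nn.Identity)
out = cell(torch.randn(1, 16, 8, 8), torch.randn(1, 16, 8, 8))  # -> (1, 48, 8, 8)
```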

Continuous relaxation

To make the search space continuous, DARTS relaxes the categorical choice of a particular operation to a softmax over all candidate operations, so the task of architecture search reduces to learning a set of mixing probabilities $\alpha = \{\alpha^{(i,j)}\}$:

$$\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp\left(\alpha_{o}^{(i,j)}\right)}{\sum_{o' \in \mathcal{O}} \exp\left(\alpha_{o'}^{(i,j)}\right)}\, o(x)$$

where $\alpha^{(i,j)}$ is a vector of dimension $|\mathcal{O}|$ containing the mixing weights over the different operations between nodes $i$ and $j$.
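
As a concrete illustration, here is a minimal PyTorch-style sketch of the mixed operation on a single edge; `MixedOp` and the particular candidate list are illustrative choices, not the paper's code (e.g., `nn.Identity` stands in for the zero operation, which in the paper outputs zero).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of one edge: softmax-weighted sum of all candidate ops."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)

    def forward(self, x, alpha_edge):
        # alpha_edge: architecture parameters of this edge, shape (|O|,)
        weights = F.softmax(alpha_edge, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# example with three candidate operations on a 16-channel feature map
C = 16
candidates = [
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),  # convolution
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),       # max pooling
    nn.Identity(),  # placeholder; the paper's zero op returns an all-zero tensor
]
edge = MixedOp(candidates)
alpha_edge = torch.zeros(len(candidates), requires_grad=True)  # learnable mixing weights
y = edge(torch.randn(2, C, 8, 8), alpha_edge)
```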

A bilevel optimization problem arises because we want to optimize both the network weights $w$ and the architecture representation $\alpha$:

$$\begin{aligned} \min_{\alpha} \quad & \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right) \\ \text{s.t.} \quad & w^{*}(\alpha) = \operatorname{argmin}_{w} \mathcal{L}_{train}(w, \alpha) \end{aligned}$$
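
In practice DARTS solves this bilevel problem by alternating gradient steps on $\alpha$ (using validation data) and on $w$ (using training data). Below is a minimal sketch of one such alternating step; `w_opt` and `a_opt` are assumed to be two optimizers built over the disjoint weight and architecture parameter groups, and skipping the virtual inner step here corresponds to the paper's first-order variant ($\xi = 0$).

```python
import torch

def search_step(model, criterion, w_opt, a_opt, train_batch, val_batch):
    """One alternating update: alpha on the validation batch, then w on the training batch."""
    x_tr, y_tr = train_batch
    x_val, y_val = val_batch

    # 1) architecture step: descend L_val w.r.t. alpha (first-order: w held fixed)
    w_opt.zero_grad(); a_opt.zero_grad()
    criterion(model(x_val), y_val).backward()
    a_opt.step()   # only the architecture parameters are updated

    # 2) weight step: descend L_train w.r.t. w at the updated alpha
    w_opt.zero_grad(); a_opt.zero_grad()
    criterion(model(x_tr), y_tr).backward()
    w_opt.step()   # only the network weights are updated

# assumed setup (hypothetical helpers returning the two parameter groups):
#   w_opt = torch.optim.SGD(model.weight_parameters(), lr=0.025, momentum=0.9)
#   a_opt = torch.optim.Adam(model.alpha_parameters(), lr=3e-4)
```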

Approximation

Due to the expensive inner optimization, evaluating the architecture gradient exactly can be prohibitive. A simple approximation scheme was proposed:

$$\begin{aligned} & \nabla_{\alpha} \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right) \\ \approx\ & \nabla_{\alpha} \mathcal{L}_{val}\left(w - \xi \nabla_{w} \mathcal{L}_{train}(w, \alpha), \alpha\right) \\ =\ & \nabla_{\alpha} \mathcal{L}_{val}(w', \alpha) + \frac{d w'}{d \alpha} \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \\ =\ & \nabla_{\alpha} \mathcal{L}_{val}(w', \alpha) - \xi \nabla^{2}_{\alpha, w} \mathcal{L}_{train}(w, \alpha)\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \end{aligned} \quad \text{where } w' = w - \xi \nabla_{w} \mathcal{L}_{train}(w, \alpha)$$

Fortunately, the complexity can be substantially reduced using the finite difference approximation:

  • $\epsilon$ is a small scalar

  • $w^{\pm} = w \pm \epsilon \nabla_{w'} \mathcal{L}_{val}(w', \alpha)$

$$\begin{aligned} & \nabla^{2}_{\alpha, w} \mathcal{L}_{train}(w, \alpha)\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \\ \approx\ & \frac{\nabla_{\alpha} \mathcal{L}_{train}(w^{+}, \alpha) - \nabla_{\alpha} \mathcal{L}_{train}(w^{-}, \alpha)}{w^{+} - w^{-}}\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \\ =\ & \frac{\nabla_{\alpha} \mathcal{L}_{train}(w^{+}, \alpha) - \nabla_{\alpha} \mathcal{L}_{train}(w^{-}, \alpha)}{2\epsilon\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha)}\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \\ =\ & \frac{\nabla_{\alpha} \mathcal{L}_{train}(w^{+}, \alpha) - \nabla_{\alpha} \mathcal{L}_{train}(w^{-}, \alpha)}{2 \epsilon} \end{aligned}$$
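
Putting the unrolled-gradient approximation and the finite-difference trick together, the approximate architecture gradient can be sketched as follows. This is a sketch under assumptions, not the paper's implementation: `loss_train(w, alpha)` and `loss_val(w, alpha)` are assumed callables that rebuild the scalar losses from a list of weight tensors `w` and an architecture tensor `alpha` (all requiring gradients), and `eps` is kept fixed here, whereas the paper scales it by the gradient norm.

```python
import torch

def arch_gradient(loss_train, loss_val, w, alpha, xi=0.01, eps=1e-2):
    """Approximate grad_alpha L_val(w*(alpha), alpha) via a one-step unrolled w
    and a finite-difference Hessian-vector product."""
    # virtual step: w' = w - xi * grad_w L_train(w, alpha)
    g_w = torch.autograd.grad(loss_train(w, alpha), w)
    w_prime = [wi - xi * gi for wi, gi in zip(w, g_w)]

    # gradients of L_val at (w', alpha)
    g_alpha, *g_wprime = torch.autograd.grad(loss_val(w_prime, alpha), [alpha] + w_prime)

    # finite differences: w± = w ± eps * grad_{w'} L_val(w', alpha)
    w_plus  = [wi + eps * gi for wi, gi in zip(w, g_wprime)]
    w_minus = [wi - eps * gi for wi, gi in zip(w, g_wprime)]
    g_plus  = torch.autograd.grad(loss_train(w_plus,  alpha), alpha)[0]
    g_minus = torch.autograd.grad(loss_train(w_minus, alpha), alpha)[0]
    hvp = (g_plus - g_minus) / (2 * eps)   # approximates the Hessian-vector product

    # grad_alpha L_val(w', alpha) - xi * (second-order correction)
    return g_alpha - xi * hvp
```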

Questions:

"A cell is a directed acyclic graph consisting of an ordered sequence of N nodes"

How is the ordered sequence decided in practice?

The nodes are ordered randomly at first.

How are Eqs. (7) and (8) in the paper derived?

Please see the above Approximation section.

To form each node in the discrete architecture, we retain the top-k strongest operations (from distinct nodes) among all non-zero candidate operations collected from all the previous nodes.

Is $k$ a hyperparameter?

Yes, $k$ is a hyperparameter (the paper uses $k = 2$ for convolutional cells and $k = 1$ for recurrent cells).
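
For illustration, here is a rough sketch of that discretization step under stated assumptions; `alphas` is a hypothetical dict mapping an edge name "i->j" to its weight vector over the candidate operations, with index `zero_idx` assumed to be the zero operation (which is excluded when ranking edge strengths, following the paper).

```python
import torch

def discretize(alphas, num_intermediate, k=2, zero_idx=0):
    """For each intermediate node, keep the k strongest incoming edges (from distinct
    predecessors) and, on each kept edge, the strongest non-zero operation."""
    genotype = {}
    for j in range(2, 2 + num_intermediate):
        candidates = []
        for i in range(j):  # all predecessor nodes of j
            w = torch.softmax(alphas[f"{i}->{j}"].detach(), dim=-1)
            w[zero_idx] = -1.0                      # exclude the zero op from ranking
            best_op = int(torch.argmax(w))
            candidates.append((float(w[best_op]), i, best_op))
        # an edge's strength = the largest weight among its non-zero operations
        strongest = sorted(candidates, reverse=True)[:k]
        genotype[j] = [(i, best_op) for _, i, best_op in strongest]
    return genotype

# usage: edges 0->2 and 1->2 over 3 candidate ops (op 0 assumed to be "zero")
alphas = {"0->2": torch.randn(3), "1->2": torch.randn(3)}
print(discretize(alphas, num_intermediate=1, k=2))
```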

