# DARTS: Differentiable Architecture Search

## Motivation

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.

The best existing architecture search algorithms are computationally demanding despite their remarkable performance.

Cost of conventional approaches, which search over a discrete and non-differentiable space (to obtain a state-of-the-art architecture for CIFAR-10 and ImageNet):

* evolution: 3150 GPU days
* reinforcement learning: 2000 GPU days

The method in this paper:

* is based on a continuous relaxation of the architecture representation, allowing efficient search over the architecture space using gradient descent;
* introduces a novel algorithm for differentiable network architecture search based on bilevel optimization, applicable to both convolutional and recurrent architectures;
* achieves a remarkable efficiency improvement, reducing the cost of architecture discovery to a few GPU days.

## Differentiable Architecture Search

![GOAL: find the optimal operation on each edge](/files/-MFGh5w3OrydDNdj7jgE)

Cell: a cell is a directed acyclic graph consisting of an ordered sequence of $$N$$ nodes, e.g., \[0,1,2] in Fig. 1.

Node: each node $$x^{(i)}$$ is a latent representation (e.g., a feature map in a CNN).

Edge: each directed edge $$(i,j)$$ is associated with some operation $$o^{(i,j)}$$ that transforms $$x^{(i)}$$.

**Note: the candidate operations on each edge include:**

* convolution
* max pooling
* zero operation: a lack of connection between two nodes

**Assumption**: the cell has two input nodes and one single output node.

* input nodes:
  * For convolutional cells: the input nodes are defined as the cell outputs of the previous two layers.
    * example: in cell \[0,1,2], nodes 0 and 1 are the input nodes and node 2 is the output node.
  * For recurrent cells: the input nodes are the input at the current step and the state carried over from the previous step.
* output node:
  * the output of the cell is obtained by applying a reduction operation (e.g., concatenation) to all the intermediate nodes.
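To make the wiring concrete, here is a toy numpy sketch of a cell forward pass, with hypothetical sizes and identity operations rather than the paper's implementation: two input nodes, two intermediate nodes, and concatenation as the reduction.

```python
import numpy as np

def cell_forward(s0, s1, ops):
    """Toy forward pass through a DARTS-style cell with 2 input nodes
    and 2 intermediate nodes (illustrative sizes, not the paper's code).

    ops[(i, j)] is the operation applied on edge i -> j.
    Each intermediate node sums the transformed outputs of all its
    predecessors; the cell output concatenates the intermediate nodes.
    """
    states = [s0, s1]                      # nodes 0 and 1: input nodes
    for j in range(2, 4):                  # nodes 2 and 3: intermediate nodes
        states.append(sum(ops[(i, j)](states[i]) for i in range(j)))
    return np.concatenate(states[2:])      # reduction = concatenation

# identity ops on every edge, just to show the wiring
ops = {(i, j): (lambda x: x) for j in (2, 3) for i in range(j)}
out = cell_forward(np.ones(4), 2 * np.ones(4), ops)
print(out.shape)  # (8,): two 4-dim intermediate nodes concatenated
```

Node 2 here is the sum of both inputs, node 3 additionally receives node 2, and the cell output stacks only the intermediate nodes, matching the reduction described above.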

### Continuous relaxation

To make the search space continuous, DARTS relaxes the categorical choice of a particular operation to a softmax over all candidate operations, so the task of architecture search reduces to learning a set of continuous mixing variables $$\alpha = \{\alpha^{(i,j)}\}$$.

$$
\bar{o}^{(i, j)}(x)=\sum_{o \in \mathcal{O}} \frac{\exp \left(\alpha_{o}^{(i, j)}\right)}{\sum_{o^{\prime} \in \mathcal{O}} \exp \left(\alpha_{o^{\prime}}^{(i, j)}\right)} o(x)
$$

where $$\alpha^{(i,j)}$$ is a vector of dimension $$|\mathcal{O}|$$ containing the weights of the different operations between nodes $$i$$ and $$j$$.
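A minimal numpy sketch of the mixed operation on one edge; the candidate operations here are illustrative stand-ins, not the paper's actual candidate set:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def mixed_op(x, alpha, ops):
    """Continuous relaxation on one edge: a softmax-weighted sum of all
    candidate operations (toy numpy sketch, not the official PyTorch code)."""
    w = softmax(alpha)                       # mixing weights over |O| ops
    return sum(wi * op(x) for wi, op in zip(w, ops))

ops = [lambda x: x,                  # identity (stand-in for a real op)
       lambda x: np.zeros_like(x),   # "zero": no connection
       lambda x: 2.0 * x]            # stand-in for conv / pool etc.
alpha = np.array([0.0, 0.0, 0.0])    # uniform weights -> simple average
print(mixed_op(np.ones(3), alpha, ops))  # [1. 1. 1.] = (1 + 0 + 2) / 3
```

Because the output is a smooth function of $$\alpha$$, gradients can flow through the architecture choice itself.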

A bilevel optimization problem arises because we want to optimize both the network weights $$w$$ and the architecture representation $$\alpha$$:

$$
\begin{array}{ll}
\min _{\alpha} & \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right) \\
\text { s.t. } & w^{*}(\alpha)=\operatorname{argmin}_{w} \mathcal{L}_{train}(w, \alpha)
\end{array}
$$

### Approximation

Evaluating the architecture gradient exactly is prohibitive because of the expensive inner optimization, so the paper proposes a simple approximation: replace $$w^{*}(\alpha)$$ by the weights after a single training step.

$$
\begin{aligned}
& \nabla_{\alpha} \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right) \\
\approx\ & \nabla_{\alpha} \mathcal{L}_{val}\left(w-\xi \nabla_{w} \mathcal{L}_{train}(w, \alpha), \alpha\right) \\
=\ & \nabla_{\alpha} \mathcal{L}_{val}(w', \alpha) + \frac{d w'}{d \alpha} \nabla_{w'} \mathcal{L}_{val}(w', \alpha) \\
=\ & \nabla_{\alpha} \mathcal{L}_{val}(w', \alpha) - \xi\, \nabla_{\alpha, w}^{2} \mathcal{L}_{train}(w, \alpha)\, \nabla_{w'} \mathcal{L}_{val}(w', \alpha)
\end{aligned}
$$

where $$w'=w-\xi \nabla_{w}\mathcal{L}_{train}(w,\alpha)$$ denotes the weights after one training step with learning rate $$\xi$$, so by the chain rule $$\frac{d w'}{d \alpha} = -\xi \nabla_{\alpha,w}^{2}\mathcal{L}_{train}(w,\alpha)$$.
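As a sanity check, the unrolled one-step gradient above can be verified on a toy scalar problem; the losses below are hypothetical, chosen only so every term is computable by hand.

```python
import numpy as np

# Toy bilevel problem (scalar w, scalar alpha; illustrative only):
#   L_train(w, a) = (w - a)^2,   L_val(w, a) = w^2
def L_train(w, a): return (w - a) ** 2
def L_val(w, a): return w ** 2

def darts_alpha_grad(w, a, xi):
    """Second-order DARTS gradient w.r.t. alpha, computed analytically
    for the toy losses above (hypothetical example, not the paper's code)."""
    w1 = w - xi * 2 * (w - a)   # w' = w - xi * grad_w L_train
    direct = 0.0                # partial of L_val w.r.t. alpha at fixed w' (zero here)
    cross = -2.0                # d^2 L_train / (d alpha d w)
    gw1 = 2 * w1                # grad_{w'} L_val
    return direct - xi * cross * gw1

# Check against a numerical derivative of the unrolled objective
w, a, xi, eps = 0.7, 0.2, 0.1, 1e-5
unrolled = lambda a_: L_val(w - xi * 2 * (w - a_), a_)
num = (unrolled(a + eps) - unrolled(a - eps)) / (2 * eps)
print(abs(darts_alpha_grad(w, a, xi) - num) < 1e-6)  # True
```

The analytic expression and the numerical derivative of the unrolled objective agree, which is exactly what the approximation claims.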

Fortunately, the complexity of the second-order term can be substantially reduced using a finite difference approximation, where:

* $$\epsilon$$ is a small scalar
* $$w^{\pm}=w\pm \epsilon \nabla_{w'}\mathcal{L}_{val}(w',\alpha)$$

$$
\nabla_{\alpha, w}^{2} \mathcal{L}_{train}(w, \alpha)\, \nabla_{w^{\prime}} \mathcal{L}_{val}\left(w^{\prime}, \alpha\right)
\approx \frac{\nabla_{\alpha} \mathcal{L}_{train}\left(w^{+}, \alpha\right)-\nabla_{\alpha} \mathcal{L}_{train}\left(w^{-}, \alpha\right)}{2 \epsilon}
$$

This is a central finite difference of $$\nabla_{\alpha}\mathcal{L}_{train}$$ along the direction $$\nabla_{w'}\mathcal{L}_{val}(w',\alpha)$$: perturbing $$w$$ by $$\pm\epsilon$$ times that vector and differencing approximates the mixed second derivative applied to it, so only two extra gradient evaluations are needed instead of an exact Hessian-vector product.
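A small numpy sketch showing why the central difference recovers the mixed Hessian-vector product; the toy loss and all variable names below are illustrative.

```python
import numpy as np

def hvp_finite_diff(grad_alpha_train, w, alpha, v, eps=1e-4):
    """Approximate nabla^2_{alpha,w} L_train(w, alpha) @ v by a central
    finite difference in w along v (sketch; names are illustrative)."""
    w_plus, w_minus = w + eps * v, w - eps * v
    return (grad_alpha_train(w_plus, alpha)
            - grad_alpha_train(w_minus, alpha)) / (2 * eps)

# Toy check: L_train(w, a) = (w . a)^2 / 2, so
# grad_a L_train = (w . a) w  and  nabla^2_{a,w} applied to v
# equals (v . a) w + (w . a) v.
w = np.array([1.0, 2.0])
a = np.array([0.5, -1.0])
v = np.array([0.3, 0.4])          # stands in for grad_{w'} L_val
grad_alpha = lambda w_, a_: (w_ @ a_) * w_
exact = (v @ a) * w + (w @ a) * v
print(np.allclose(hvp_finite_diff(grad_alpha, w, a, v), exact, atol=1e-5))  # True
```

Since `grad_alpha` is quadratic in `w`, the central difference here matches the exact mixed Hessian-vector product up to floating-point error.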

## Questions:

{% tabs %}
{% tab title="Q1" %}
"A cell is a directed acyclic graph consisting of an ordered sequence of N nodes"

How to decide the ordered sequence in practice?

The nodes are ordered randomly at first.
{% endtab %}

{% tab title="Q2" %}
How to decide the intermediate nodes?

(I was not sure about the definition of an intermediate node.)

A cell has 2 input nodes and 1 output node; the remaining nodes are the intermediate nodes. Each intermediate node is computed inside the cell from its predecessor nodes, and the output node applies a reduction (concatenation) over all the intermediate nodes.
{% endtab %}

{% tab title="Q3" %}
Can we decompose ResNet into such cells, each of which has two input nodes, and one output node?

Yes.

This is the original ResNet block.

![](/files/-MFMC1YGi9Ok7ViWWiMM)

This is the cell figure.

![](/files/-MFMCR3vUJ5WYTUwo51t)
{% endtab %}

{% tab title="Q4" %}
"The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes."

Use an example to illustrate this.

![](/files/-MFM45kVYsrWk5gwu7HT)

Not very confident about this at first. What is the output of one cell? Each node in the cell produces an output, and the cell output is the concatenation of the outputs of the intermediate nodes.

After watching the video again: in the figure, c\_{k-2} and c\_{k-1} are the two input nodes, 0, 1, 2, 3 are the intermediate nodes, and c\_{k} is the output node, obtained by concatenating the outputs of nodes 0–3.
{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="Q5+Q6" %}
How are Equ. (7) and (8) derived?

Please see the above **Approximation section.**
{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="Q7" %}
To form each node in the discrete architecture, we retain the top-k strongest operations (from distinct nodes) among all non-zero candidate operations collected from all the previous nodes.

is $$k$$ a hyperparameter?

Yes. This is a hyperparameter.
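A sketch of this discretization step for a single node; the operation names, logits, and the `PRIMITIVES` list below are made up for illustration and are not the paper's configuration.

```python
import numpy as np

# Hypothetical candidate set; "zero" is excluded when ranking edges.
PRIMITIVES = ["zero", "skip", "conv3x3", "maxpool"]

def discretize_node(alphas, k=2):
    """alphas: dict edge -> logits over PRIMITIVES. Returns the k retained
    (edge, op) pairs, ranked by the max softmax weight among non-zero ops."""
    def strength(logits):
        w = np.exp(logits - logits.max()); w /= w.sum()
        return max(w[i] for i, p in enumerate(PRIMITIVES) if p != "zero")
    def best_op(logits):
        return max((i for i, p in enumerate(PRIMITIVES) if p != "zero"),
                   key=lambda i: logits[i])
    edges = sorted(alphas, key=lambda e: strength(alphas[e]), reverse=True)[:k]
    return [(e, PRIMITIVES[best_op(alphas[e])]) for e in edges]

alphas = {(0, 3): np.array([0.1, 2.0, 0.5, 0.2]),   # skip dominates
          (1, 3): np.array([3.0, 0.1, 0.2, 0.1]),   # zero dominates, so weak
          (2, 3): np.array([0.1, 0.2, 1.5, 0.3])}   # conv3x3 dominates
print(discretize_node(alphas, k=2))  # [((0, 3), 'skip'), ((2, 3), 'conv3x3')]
```

Edge (1, 3) is dropped even though its largest logit is big, because that logit belongs to the zero operation, which does not count toward edge strength.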
{% endtab %}

{% tab title="Q8" %}
what is zero operations?

a lack of connection between two nodes
{% endtab %}

{% tab title="Q9" %}
Please explain why batch normalization is effective for this?

will finish this after discussion with Xujiang and Tianhao.
{% endtab %}
{% endtabs %}

## References

* <https://openreview.net/forum?id=S1eYHoC5FX>
* <https://lilianweng.github.io/lil-log/2020/08/06/neural-architecture-search.html>
* <https://github.com/quark0/darts>
* <https://towardsdatascience.com/investigating-differentiable-neural-architecture-search-for-scientific-datasets-62899be8714e>
* <http://torch.ch/blog/2016/02/04/resnets.html>

