This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
The best existing architecture search algorithms are computationally demanding despite their remarkable performance.
Conventional approaches search over a discrete and non-differentiable space; the cost of obtaining a state-of-the-art architecture for CIFAR-10 and ImageNet:
Evolution: 3150 GPU days
Reinforcement learning: 2000 GPU days
Method in the paper:
Based on a continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.
A novel algorithm for differentiable network architecture search based on bilevel optimization, applicable to both convolutional and recurrent architectures.
Remarkable efficiency improvement (reducing the cost of architecture discovery to a few GPU days).
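For reference, the bilevel problem from the paper (w denotes the network weights, α the architecture encoding, L_train and L_val the training and validation losses):

    min_α   L_val(w*(α), α)
    s.t.    w*(α) = argmin_w  L_train(w, α)

The upper level optimizes the architecture on validation data while the lower level optimizes the weights on training data; DARTS approximates this with alternating gradient steps on α and w.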
Differentiable Architecture Search
Cell: a directed acyclic graph consisting of an ordered sequence of N nodes, e.g., nodes [0, 1, 2] in Fig. 1.
Node: each node x(i) is a latent representation (e.g. a feature map in CNN)
Edge: each directed edge (i,j) is associated with some operation o(i,j) that transforms x(i)
Note: candidate operations on each edge include:
convolution
max pooling
zero operation: a lack of connection between two nodes
Assumption: each cell has two input nodes and a single output node.
input nodes:
For convolutional cells: the input nodes are defined as the cell outputs in the previous two layers.
For recurrent cells: input nodes are the input at the current step and the state carried from the previous step.
output:
the output of the cell is obtained by applying a reduction operation (e.g., concatenation) to all the intermediate nodes.
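A minimal sketch of the cell computation in PyTorch, assuming the setup above (two input nodes, intermediate nodes that sum the transformed outputs of all their predecessors, output = concatenation of the intermediate nodes). The fixed 3x3 convolution is a placeholder, not the paper's operation set:

```python
import torch
import torch.nn as nn

class Cell(nn.Module):
    """Toy DARTS-style cell: 2 input nodes, `steps` intermediate nodes,
    output = channel-wise concat of the intermediate nodes."""
    def __init__(self, C, steps=4):
        super().__init__()
        self.steps = steps
        # One op per directed edge (i, j); a fixed 3x3 conv stands in
        # for whatever operation the edge would carry.
        self.ops = nn.ModuleDict()
        for j in range(steps):
            for i in range(2 + j):  # predecessors: 2 inputs + earlier intermediates
                self.ops[f"{i}_{j}"] = nn.Conv2d(C, C, 3, padding=1)

    def forward(self, s0, s1):
        states = [s0, s1]  # the two input nodes
        for j in range(self.steps):
            # Each intermediate node sums the transformed outputs
            # of ALL of its predecessors.
            states.append(sum(self.ops[f"{i}_{j}"](h)
                              for i, h in enumerate(states)))
        # Cell output: concatenation of the intermediate nodes only.
        return torch.cat(states[2:], dim=1)

cell = Cell(C=16)
out = cell(torch.randn(1, 16, 8, 8), torch.randn(1, 16, 8, 8))
print(out.shape)  # torch.Size([1, 64, 8, 8]): 4 intermediate nodes x 16 channels
```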
Continuous relaxation
To make the search space continuous, DARTS relaxes the categorical choice of a particular operation to a softmax over all candidate operations; the task of architecture search then reduces to learning a set of continuous variables α = {α(i,j)}, whose softmax gives the mixing probabilities.
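A sketch of this relaxation in code (a minimal illustration, not the paper's implementation; the tiny candidate set and the name MixedOp are my own): each edge keeps one learnable α entry per candidate operation, and the edge output is the softmax-weighted sum of every candidate applied to the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge (i, j): a softmax-weighted mixture of all candidate ops."""
    def __init__(self, C):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(C, C, 3, padding=1),         # convolution
            nn.MaxPool2d(3, stride=1, padding=1),  # max pooling
        ])
        # One mixing variable per candidate op, plus one for the zero op
        # (assumed to sit at the last index).
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates) + 1))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)  # continuous mixing probabilities
        # The zero op contributes nothing; its weight w[-1] only dilutes
        # the other candidates.
        return sum(wk * op(x) for wk, op in zip(w, self.candidates))

# After search, the discrete choice on each edge is simply argmax over alpha.
```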
"A cell is a directed acyclic graph consisting of an ordered sequence of N nodes"
How to decide the ordered sequence in practice?
The order is fixed by construction, not learned: nodes are simply indexed 0, 1, …, N−1, and each intermediate node is connected to all of its predecessors, so the "ordered sequence" is just this topological indexing.
How to decide the intermediate nodes?
An intermediate node is any node that is neither an input node nor the output node. In the paper's convolutional cells there are 2 input nodes, 4 intermediate nodes, and 1 output node; the search decides the incoming edges of the intermediate nodes, and each intermediate node is computed from all of its predecessors within the cell.
Can we decompose ResNet into such cells, each of which has two input nodes, and one output node?
Yes.
[Figure: the original ResNet block.]
[Figure: the same block redrawn as a cell.]
"The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes."
Use an example to illustrate this.
The output of a cell is the channel-wise concatenation of the outputs of its intermediate nodes only (the input nodes are not concatenated). For example, with 4 intermediate nodes that each produce a C-channel feature map, the cell output has 4C channels; see the cell sketch above.
After watching the video again, c_{k−2}, c_{k−1}, and c_k were also called nodes of the cell.
If c_{k−2}, c_{k−1}, and c_k are nodes, what are 0, 1, 2, 3? In the paper's cell figures, c_{k−2} and c_{k−1} are the two input nodes (the outputs of the previous two cells), 0–3 are the four intermediate nodes, and c_k is the output node, i.e., the concatenation of nodes 0–3.
How are Equ. (7) and (8) derived?
Please see the above Approximation section.
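In brief, as I read the Approximation section: Eq. (7) follows from approximating the inner optimum w*(α) with a single gradient step on the training loss and then applying the chain rule; Eq. (8) replaces the resulting Hessian-vector product with a central finite difference so only two extra gradient evaluations are needed:

    ∇_α L_val(w*(α), α)
      ≈ ∇_α L_val(w − ξ ∇_w L_train(w, α), α)                             (one-step unrolling of w*)
      = ∇_α L_val(w′, α) − ξ ∇²_{α,w} L_train(w, α) ∇_{w′} L_val(w′, α)   (chain rule; Eq. 7)

where w′ = w − ξ ∇_w L_train(w, α), and

    ∇²_{α,w} L_train(w, α) ∇_{w′} L_val(w′, α) ≈ [∇_α L_train(w⁺, α) − ∇_α L_train(w⁻, α)] / (2ε)   (Eq. 8)

with w± = w ± ε ∇_{w′} L_val(w′, α).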
"To form each node in the discrete architecture, we retain the top-k strongest operations (from distinct nodes) among all non-zero candidate operations collected from all the previous nodes."
Is k a hyperparameter?
Yes. The paper uses k = 2 for convolutional cells and k = 1 for recurrent cells, to match existing works; see the sketch below.
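A rough sketch of this discretization step, assuming alphas are stored per edge as in the MixedOp sketch above (the function name and data layout are my own, not the paper's code):

```python
import torch
import torch.nn.functional as F

def discretize(alphas, k=2, zero_idx=-1):
    """alphas: dict mapping edge (i, j) -> 1-D tensor of op logits,
    with the zero op assumed to sit at index `zero_idx`.
    Returns, per intermediate node j, its k strongest (edge, op) picks."""
    nodes = {}
    for (i, j), a in alphas.items():
        w = F.softmax(a, dim=0).clone()
        w[zero_idx] = -1.0                   # exclude the zero op
        strength, op = w.max(dim=0)          # strongest non-zero op on this edge
        nodes.setdefault(j, []).append((strength.item(), i, op.item()))
    # Keep the top-k incoming edges per node; each predecessor i contributes
    # one edge, so the retained ops automatically come from distinct nodes.
    return {j: sorted(cands, reverse=True)[:k] for j, cands in nodes.items()}
```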
What is the zero operation?
A lack of connection between two nodes.
Why is batch normalization effective here?
To be filled in after discussion with Xujiang and Tianhao.