Probabilistic Neural Architecture Search

Motivation

Most existing NAS methods cannot be applied directly to large-scale problems because of their prohibitive computational cost or high memory usage.

This paper proposes a probabilistic approach to NAS (PARSEC) that drastically reduces memory requirements while keeping the computational cost of the search on par with state-of-the-art methods, making it possible to search directly over more complex architectures and larger datasets.

  • A memory-efficient sampling procedure wherein we learn a probability distribution over high-performing neural network architectures (a rough sketch of the sampling idea follows this list).

  • Importantly, this framework enables us to transfer the distribution of architectures learnt on smaller problems to larger ones, further reducing the computational cost.
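
As a rough illustration of the sampling idea (not the paper's implementation), the sketch below draws discrete cell architectures from a per-edge categorical distribution; the names `OPS`, `NUM_EDGES`, `K`, and `pi_logits` are hypothetical.

```python
# Minimal sketch (illustrative, not the paper's code): sampling cell
# architectures from a learned categorical distribution over candidate ops.
import numpy as np

OPS = ["identity", "max_pool_3x3", "sep_conv_3x3", "sep_conv_5x5"]  # candidate ops (assumed)
NUM_EDGES = 8   # number of edges in the cell that need an operation (assumed)
K = 16          # architectures sampled per training step (assumed)

# One categorical distribution per edge; pi = softmax(pi_logits) are the
# architecture probabilities that get updated during the search.
pi_logits = np.zeros((NUM_EDGES, len(OPS)))

def sample_architectures(pi_logits, k):
    """Draw k discrete architectures; only k index vectors are stored,
    so memory grows with k rather than with the full search space."""
    pi = np.exp(pi_logits - pi_logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    return [
        np.array([np.random.choice(len(OPS), p=pi[e]) for e in range(NUM_EDGES)])
        for _ in range(k)
    ]

archs = sample_architectures(pi_logits, K)
print([[OPS[i] for i in a] for a in archs[:2]])  # two sampled cells
```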

Importance-weighted Monte Carlo empirical Bayes

  • $p(\alpha \mid \pi)$: a prior on the choices of inputs and operations that define the cell, where the hyper-parameters $\pi$ are the probabilities corresponding to the different choices.

  • $y$: target

  • $\mathbf{X}$: input

  • $v$: network weights

Architecture search is then cast as empirical Bayes over the architecture probabilities $\pi$ and the network weights $v$, based on a Monte Carlo estimator of the marginal likelihood.
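
A sketch of that estimator in the notation above ($K$ and $\alpha_k$ are assumed names for the number of sampled architectures and the $k$-th sample): the intractable integral over architectures is replaced by an average over architectures drawn from the prior,

$$
p(y \mid \mathbf{X}, v, \pi) = \int p(y \mid \mathbf{X}, v, \alpha)\, p(\alpha \mid \pi)\, d\alpha \;\approx\; \frac{1}{K} \sum_{k=1}^{K} p(y \mid \mathbf{X}, v, \alpha_k), \qquad \alpha_k \sim p(\alpha \mid \pi).
$$

Consistent with the section title, the gradient with respect to $\pi$ (and likewise $v$) can then be importance-weighted by how well each sampled architecture explains the data, e.g.

$$
\nabla_{\pi} \log p(y \mid \mathbf{X}, v, \pi) \approx \sum_{k=1}^{K} \omega_k \, \nabla_{\pi} \log p(\alpha_k \mid \pi), \qquad \omega_k = \frac{p(y \mid \mathbf{X}, v, \alpha_k)}{\sum_{j} p(y \mid \mathbf{X}, v, \alpha_j)},
$$

so that only the $K$ sampled architectures need to be instantiated at any time, which is where the memory savings come from.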

