Conditional Neural Processes (CNPs)
ICML 18
Last updated: 10-23-2020
CNPs combine benefits of NNs and GPs:
the flexibility of stochastic processes such as GPs
structured as NNs and trained via gradient descent directly from data
We have a function $f : X \rightarrow Y$ with input $x \in X$ and output $y = f(x) \in Y$.
$f$ is drawn from $P$, a distribution over functions.
Define two sets:
Observations: $O = \{(x_i, y_i)\}_{i=0}^{n-1} \subset X \times Y$
Targets: $T = \{x_i\}_{i=n}^{n+m-1} \subset X$
Our goal: given some observations, we want to make predictions at unseen target inputs at test time, just like in supervised learning.
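A minimal sketch (not from the paper) of forming one training task: sample a function $f \sim P$, evaluate it, and split the points into observations $O$ and targets $T$. The sine prior here is a hypothetical stand-in for $P$.

```python
import torch

def sample_task(n_obs=10, n_target=15):
    # Hypothetical prior P over functions: random-amplitude, random-phase sines.
    amplitude = torch.empty(1).uniform_(0.5, 2.0)
    phase = torch.empty(1).uniform_(0.0, 3.14159)
    x = torch.empty(n_obs + n_target, 1).uniform_(-2.0, 2.0)
    y = amplitude * torch.sin(x + phase)
    # Observations O = {(x_i, y_i)}; targets T = {x_i} (y kept only to score predictions).
    x_obs, y_obs = x[:n_obs], y[:n_obs]
    x_target, y_target = x[n_obs:], y[n_obs:]
    return x_obs, y_obs, x_target, y_target
```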
The architecture of our model captures this task:
$r_i = h_\theta(x_i, y_i)$ are the representations of the pairs $(x_i, y_i)$
$r = r_1 \oplus r_2 \oplus \dots \oplus r_n$ is the overall representation, obtained by combining all the $r_i$ with a commutative operation such as summation or the mean
$h_\theta$ and $g_\theta$ are NNs
$\phi_i = g_\theta(x_i, r)$ parametrizes the output distribution (either a Gaussian or a categorical distribution); a code sketch of this forward pass follows
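A minimal PyTorch sketch of the forward pass above; the layer sizes, the mean aggregation, and the softplus transform on the scale are illustrative choices, not prescribed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128):
        super().__init__()
        # h: encodes each observation pair (x_i, y_i) into a representation r_i
        self.h = nn.Sequential(
            nn.Linear(x_dim + y_dim, 128), nn.ReLU(), nn.Linear(128, r_dim)
        )
        # g: maps (target x, aggregated r) to the parameters of a Gaussian
        self.g = nn.Sequential(
            nn.Linear(x_dim + r_dim, 128), nn.ReLU(), nn.Linear(128, 2 * y_dim)
        )

    def forward(self, x_obs, y_obs, x_target):
        # r_i = h(x_i, y_i); aggregate with a permutation-invariant mean to get r
        r_i = self.h(torch.cat([x_obs, y_obs], dim=-1))   # (n, r_dim)
        r = r_i.mean(dim=0, keepdim=True)                 # (1, r_dim)
        r = r.expand(x_target.shape[0], -1)               # (m, r_dim)
        # phi_i = g(x_target_i, r) parametrizes the predictive Gaussian
        phi = self.g(torch.cat([x_target, r], dim=-1))
        mu, log_sigma = phi.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * F.softplus(log_sigma)         # keep the scale positive
        return mu, sigma
```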
CNPs are conditional distributions over functions, trained to model the empirical conditional distributions of functions $f \sim P$ (a training-loss sketch follows this list).
CNPs are permutation invariant in $O$ and $T$.
CNPs are scalable, achieving a running time complexity of $\mathcal{O}(n + m)$ for making $m$ predictions with $n$ observations.
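A minimal sketch of the training objective, assuming the `CNP` and `sample_task` sketches above: maximize the log-likelihood of the target outputs under the predicted Gaussian, i.e. do gradient descent on the negative log-likelihood.

```python
import torch

def cnp_loss(model, x_obs, y_obs, x_target, y_target):
    mu, sigma = model(x_obs, y_obs, x_target)
    dist = torch.distributions.Normal(mu, sigma)
    return -dist.log_prob(y_target).mean()

# One training step (hypothetical setup):
model = CNP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x_obs, y_obs, x_target, y_target = sample_task()
loss = cnp_loss(model, x_obs, y_obs, x_target, y_target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```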