Conditional Neural Processes (CNPs)
ICML 2018
CNPs combine the benefits of NNs and GPs:
the flexibility of stochastic processes such as GPs
structured as NNs and trained via gradient descent directly from data
Define two sets: the observations $O$ and the targets $T$ (formalized below).
Our goal is: given some observations, we want to be able to make predictions at unseen target inputs at test time, just like in supervised learning.
The architecture of our model captures this task:
CNPs are permutation invariant in $O$ and $T$.
We have a function $f : X \to Y$ with input $x \in X$ and output $y \in Y$.
$f$ is drawn from $P$, a distribution over functions.
Observations: $O = \{(x_i, y_i)\}_{i=0}^{n-1} \subset X \times Y$
Targets: $T = \{x_i\}_{i=n}^{n+m-1} \subset X$
$r_i = h_\theta(x_i, y_i)$ are the representations of the pairs $(x_i, y_i) \in O$
$r = r_1 \oplus r_2 \oplus \dots \oplus r_n$ is the overall representation obtained by summing all the $r_i$
$h_\theta$ and $g_\theta$ are NNs
$\phi_i = g_\theta(x_i, r)$ parametrizes the output distribution (either a Gaussian or a categorical distribution)
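These components fit together as an encoder, an aggregator, and a decoder. Below is a minimal PyTorch sketch of that structure, assuming 1-D regression with a Gaussian output; the layer sizes, depths, and mean aggregation are illustrative choices, not the paper's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNP(nn.Module):
    """Minimal CNP sketch: h encodes each (x_i, y_i) into r_i, the r_i are
    aggregated into a single r, and g maps (x_target, r) to phi = (mu, sigma)."""

    def __init__(self, x_dim=1, y_dim=1, r_dim=128, hidden=128):
        super().__init__()
        # Encoder h_theta: (x_i, y_i) -> r_i
        self.h = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, r_dim),
        )
        # Decoder g_theta: (x_target, r) -> phi_i (mean and scale of a Gaussian)
        self.g = nn.Sequential(
            nn.Linear(x_dim + r_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * y_dim),
        )

    def forward(self, x_obs, y_obs, x_tgt):
        # x_obs: (B, n, x_dim), y_obs: (B, n, y_dim), x_tgt: (B, m, x_dim)
        r_i = self.h(torch.cat([x_obs, y_obs], dim=-1))   # (B, n, r_dim)
        r = r_i.mean(dim=1, keepdim=True)                  # aggregate: permutation invariant in O
        r = r.expand(-1, x_tgt.size(1), -1)                # share the same r across all m targets
        phi = self.g(torch.cat([x_tgt, r], dim=-1))        # (B, m, 2*y_dim)
        mu, raw_sigma = phi.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * F.softplus(raw_sigma)          # keep the predicted std positive
        return mu, sigma
```

Because $r$ comes from a symmetric aggregation over the $r_i$ (a mean here, a sum in the text above), reordering the observations leaves the predictions unchanged, which is the permutation invariance in $O$ noted earlier.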
CNPs are conditional distributions over functions trained to model the empirical conditional distributions of functions $f \sim P$.
scalable, achieving a running time complexity of $\mathcal{O}(n + m)$ for making $m$ predictions with $n$ observations.
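To make the training procedure concrete, here is a hedged sketch of one gradient step: a batch of points sampled from functions $f \sim P$ is split into observations and targets, and the negative log likelihood of the targets under the predicted Gaussians is minimized. The function name and the choice to include the observed points among the targets are assumptions for illustration:

```python
import torch


def cnp_step(model, optimizer, x, y, n_obs):
    """One training step on a batch of sampled functions (illustrative).

    x: (B, n + m, x_dim), y: (B, n + m, y_dim), points drawn from f ~ P.
    The first n_obs points form O; every point is used as a target,
    so the observed points are predicted as well.
    """
    x_obs, y_obs = x[:, :n_obs], y[:, :n_obs]
    mu, sigma = model(x_obs, y_obs, x)          # one decoder pass per target point
    nll = -torch.distributions.Normal(mu, sigma).log_prob(y).mean()
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```

Each step costs one encoder pass per observation and one decoder pass per target, which is where the $\mathcal{O}(n + m)$ prediction cost comes from.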