Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
NeurIPS 2020 · 11/9/2020
Motivation
Few-shot learning (FSL) methods are highly vulnerable to adversarial examples.
The goal of this paper is to produce neural networks that both perform well on few-shot classification tasks and are simultaneously robust to adversarial examples.
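To make "adversarial examples" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft them. The toy logistic model, weights, and epsilon below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Perturb input x to increase the loss of a logistic model (w, b)."""
    z = w @ x + b                      # model logit
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)  # step in the gradient-sign direction

# Hypothetical toy model and input for illustration only
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])              # clean input, correctly classified (logit > 0)
y = 1.0                               # true label

x_adv = fgsm(x, w, b, y, eps=0.9)
print(w @ x + b)      # clean logit: positive, so class 1
print(w @ x_adv + b)  # adversarial logit: pushed negative, flipping the prediction
```

A tiny, correctly classified input can be flipped by a bounded per-pixel perturbation, which is exactly the failure mode the paper aims to defend against in the few-shot setting.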