Learning to Self-Train for Semi-Supervised Few-Shot Classification

NeurIPS 2019 · 11/3/2020

Motivation

This paper is easy to follow; the authors' slides (https://drive.google.com/file/d/151ZyvJK77nPJ36LA2gdk3S--8caXS43-/view) already explain it clearly.

Notations:

  • $\Phi_{ss}$ : feature extractor of the base learner, one of the meta-parameters

  • $\theta$ : final-layer classifier of the base learner

  • $\theta'$ : initialization parameters of $\theta$, one of the meta-parameters

  • $\Phi_{swn}$ : weights of the soft-weighting network (SWN), one of the meta-parameters

Inner loop:

  1. Pseudo-labeling

  2. Cherry-picking: (hard selection, soft weighting)

  3. Self-training: re-training on the support set plus the selected pseudo-labeled samples (S+R), then fine-tuning on the support set alone (S)
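The first two inner-loop steps can be sketched in code. This is a toy illustration, not the paper's implementation: `pseudo_label_and_pick` is a hypothetical helper, and the optional `weight_fn` stands in for the soft-weighting network (here it defaults to using softmax confidence as the weight).

```python
import numpy as np

def pseudo_label_and_pick(logits, num_per_class, weight_fn=None):
    """Step 1: pseudo-label unlabeled samples by argmax of softmax.
    Step 2 (cherry-picking): hard-select the `num_per_class` most
    confident samples per class, then assign each a soft weight.

    logits: (n_unlabeled, n_classes) classifier scores.
    weight_fn: stand-in for the SWN; defaults to softmax confidence.
    """
    # numerically stable softmax over classes
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    labels = probs.argmax(axis=1)   # step 1: pseudo-labeling
    conf = probs.max(axis=1)

    picked, weights = [], []
    for c in range(logits.shape[1]):
        # hard selection: top-confidence samples pseudo-labeled as class c
        idx = np.where(labels == c)[0]
        idx = idx[np.argsort(-conf[idx])][:num_per_class]
        picked.extend(idx.tolist())
        for i in idx:
            # soft weighting of each kept sample
            w = weight_fn(probs[i]) if weight_fn else conf[i]
            weights.append(float(w))
    return picked, labels[picked], np.array(weights)
```

The returned indices, pseudo-labels, and weights would then feed the self-training step (weighted re-training on S+R).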

Outer loop:

  1. update $\Phi_{swn}$ after re-training, using the validation loss on the query set computed with $\theta_m$

  2. update $[\Phi_{ss}, \theta']$ after fine-tuning, using the validation loss on the query set computed with $\theta_T$
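The two outer-loop updates can be sketched as follows. This is a toy stand-in, not the paper's method: the paper backpropagates through the inner loop, whereas here gradients are taken by finite differences for illustration, and all names (`meta_update`, `phi_swn`, `phi_ss`, `theta_init`) are hypothetical. The key structure is that the query-set loss at the intermediate step $m$ updates the SWN, while the loss at the final step $T$ updates the feature extractor and the classifier initialization.

```python
import numpy as np

def meta_update(meta_params, val_loss_mid, val_loss_final, lr=0.01, eps=1e-4):
    """One outer-loop step (illustrative sketch).

    val_loss_mid: query-set loss with the base learner at step m
                  (after re-training); used to update phi_swn.
    val_loss_final: query-set loss at the final step T
                    (after fine-tuning); used to update phi_ss and theta_init.
    """
    def grad(f, x):
        # central finite-difference gradient (illustration only)
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d.flat[i] = eps
            g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    new = dict(meta_params)
    # update 1: SWN weights from the mid-training validation loss
    new["phi_swn"] = meta_params["phi_swn"] - lr * grad(val_loss_mid, meta_params["phi_swn"])
    # update 2: feature extractor and classifier init from the final validation loss
    for k in ("phi_ss", "theta_init"):
        new[k] = meta_params[k] - lr * grad(val_loss_final, meta_params[k])
    return new
```

Using different checkpoints ($\theta_m$ vs. $\theta_T$) for the two updates is the point: the SWN is judged by how well re-training went, the other meta-parameters by the final fine-tuned model.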
