Learning to Self-Train for Semi-Supervised Few-Shot Classification (NeurIPS 2019), notes from 11/3/2020
Motivation
This paper is not hard to understand; the slides (https://drive.google.com/file/d/151ZyvJK77nPJ36LA2gdk3S--8caXS43-/view) explain it very clearly.
Notations:
Φ_ss : feature extractor of the base learner (a meta-parameter)
θ : final-layer classifier of the base learner
θ' : initialization of θ (a meta-parameter)
Φ_swn : weights of the soft-weighting network (SWN, a meta-parameter)
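To make the notation concrete, here is a minimal sketch (assumed shapes and a one-layer "extractor", not taken from the paper's code) separating the meta-parameters, which persist across episodes, from the per-episode classifier θ:

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, num_classes = 64, 5  # assumed toy dimensions

# Meta-parameters: learned in the outer loop, shared across episodes.
meta_params = {
    "phi_ss": rng.normal(size=(feat_dim, feat_dim)) * 0.01,  # feature extractor (one linear layer here)
    "theta_prime": np.zeros((feat_dim, num_classes)),        # theta' : initialization for theta
    "phi_swn": rng.normal(size=(feat_dim, 1)) * 0.01,        # soft-weighting network weights
}

# Per-episode parameter: the final-layer classifier theta is copied
# from theta' at the start of each episode and adapted in the inner loop.
theta = meta_params["theta_prime"].copy()
```

Copying (rather than aliasing) θ' into θ matters: the inner loop mutates θ per task, while θ' is only changed by the outer-loop meta-update.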
Inner loop:
Pseudo-labeling: predict labels for the unlabeled set with the current base learner
Cherry-picking: hard selection (keep only the most confident pseudo-labeled samples per class), then soft weighting of the kept samples by the SWN
Self-training: re-training on the support set plus the picked pseudo-labeled samples (S+R), then fine-tuning on the support set only (S)
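The pseudo-labeling and cherry-picking steps above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: hard selection keeps the top-k most confident pseudo-labels per class, and the SWN is replaced here by a toy prototype-similarity weighting (the function names, `k`, and the similarity choice are all assumptions):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_label_and_pick(logits_unlabeled, num_classes, k):
    """Hard selection: keep the top-k most confident unlabeled samples per class."""
    probs = softmax(logits_unlabeled)
    pseudo = probs.argmax(axis=1)   # pseudo-labels from the current base learner
    conf = probs.max(axis=1)        # confidence of each pseudo-label
    picked = []
    for c in range(num_classes):
        idx = np.where(pseudo == c)[0]
        idx = idx[np.argsort(-conf[idx])][:k]  # most confident first
        picked.extend(idx.tolist())
    return np.array(sorted(picked)), pseudo

def soft_weights(features_picked, prototypes):
    """Toy stand-in for the soft-weighting network Phi_swn: weight each
    picked sample by cosine similarity to its nearest class prototype,
    squashed to (0, 1) with a sigmoid."""
    sims = features_picked @ prototypes.T
    sims /= (np.linalg.norm(features_picked, axis=1, keepdims=True)
             * np.linalg.norm(prototypes, axis=1, keepdims=True).T + 1e-8)
    return 1.0 / (1.0 + np.exp(-sims.max(axis=1)))
```

The resulting per-sample weights then scale each picked sample's loss term during the re-training (S+R) step, so less trustworthy pseudo-labels contribute less.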
Outer loop:
update Φ_swn after re-training, using the validation loss on the query set computed at θ_m
update [Φ_ss, θ'] after fine-tuning, using the validation loss on the query set computed at θ_T
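The timing of the two meta-updates can be sketched with a toy scalar task. This is only a schematic: scalar regression stands in for the base learner, the meta-gradient is a first-order approximation (in the spirit of FOMAML) rather than backpropagation through the inner loop, and the names `m`, `T`, and the learning rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# theta' : meta-learned initialization of the final-layer classifier theta
theta_init = 0.0
lr_inner, lr_meta = 0.1, 0.05
m, T = 3, 6  # re-training ends at inner step m, fine-tuning at step T

def loss_grad(theta, x, y):
    """Squared-error loss and its gradient for a 1-D linear model."""
    pred = theta * x
    return np.mean((pred - y) ** 2), np.mean(2 * (pred - y) * x)

for episode in range(20):
    w_true = rng.normal()                       # true parameter of this task
    x_s, x_q = rng.normal(size=10), rng.normal(size=10)
    y_s, y_q = w_true * x_s, w_true * x_q       # support / query sets

    theta = theta_init
    grad_at_m = grad_at_T = 0.0
    for t in range(1, T + 1):                   # inner loop: re-train, then fine-tune
        _, g = loss_grad(theta, x_s, y_s)
        theta -= lr_inner * g
        if t == m:                              # theta_m : after re-training
            _, grad_at_m = loss_grad(theta, x_q, y_q)
        if t == T:                              # theta_T : after fine-tuning
            _, grad_at_T = loss_grad(theta, x_q, y_q)

    # Outer loop: the query loss at theta_m would drive the Phi_swn update,
    # and the query loss at theta_T drives the [Phi_ss, theta'] update.
    # Here only theta' exists, so only the second update is shown.
    theta_init -= lr_meta * grad_at_T
```

The point of the two evaluation times is that Φ_swn is judged by how well re-training (which is where its weights act) prepared θ_m, while the extractor and initialization are judged by the final θ_T.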
Reference
https://papers.nips.cc/paper/9216-learning-to-self-train-for-semi-supervised-few-shot-classification.pdf
https://github.com/xinzheli1217/learning-to-self-train
https://drive.google.com/file/d/151ZyvJK77nPJ36LA2gdk3S--8caXS43-/view
https://qianrusun.com/