This repository provides the implementation of complementary-label learning from the ICML 2019 paper [1], the ECCV 2018 paper [2], and the NeurIPS 2017 paper [3].
- Python 3.6
- numpy 1.14
- PyTorch 1.1
- torchvision 0.2
The following demo shows results on the MNIST dataset. After running the code, a text file with the results is saved in the same directory. The file has three columns: epoch number, training accuracy, and test accuracy.
python demo.py -h
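For example, a run might look like `python demo.py --method ga --model mlp` (the flag names here are assumed; check the `-h` output above for the exact interface). Once a run finishes, the results file can be inspected programmatically. A minimal sketch, assuming a whitespace-delimited file named `results.txt` (a hypothetical name; substitute whatever file demo.py actually writes):

```python
import numpy as np

# Load the three-column results file: epoch, train accuracy, test accuracy.
# "results.txt" is a placeholder; use the file name demo.py actually writes.
epoch, train_acc, test_acc = np.loadtxt("results.txt", unpack=True)

best = int(np.argmax(test_acc))
print(f"best test accuracy {test_acc[best]:.4f} at epoch {int(epoch[best])}")
```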
In `demo.py`, specify the `method` argument to choose one of the five available methods:

- `ga`: Gradient ascent version (Algorithm 1) in [1].
- `nn`: Non-negative risk estimator with the max operator in [1].
- `free`: Assumption-free risk estimator based on Theorem 1 in [1].
- `forward`: Forward correction method in [2].
- `pc`: Pairwise comparison with sigmoid loss in [3].
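To make the `free` option concrete, here is a minimal PyTorch sketch of the assumption-free (unbiased) risk estimator, assuming uniformly distributed complementary labels and cross-entropy as the base loss. The function name and interface are illustrative, not the repository's actual API; the `nn` and `ga` options differ in how they handle the negative values this estimator can take on a mini-batch.

```python
import torch
import torch.nn.functional as F

def assumption_free_risk(logits, comp_labels, num_classes):
    # Illustrative sketch of the unbiased risk estimator of Theorem 1 in [1],
    # assuming complementary labels uniform over the K - 1 wrong classes and
    # cross-entropy as the base loss.
    #   logits:      (n, K) raw model outputs
    #   comp_labels: (n,)   complementary labels
    losses = -F.log_softmax(logits, dim=1)  # per-class loss, shape (n, K)
    loss_comp = losses.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    # Rewriting the ordinary risk in terms of complementary labels gives the
    # sum of losses over all classes minus (K - 1) times the loss on the
    # complementary label. This quantity can go negative on a mini-batch,
    # which the `nn` (max operator) and `ga` (gradient ascent) variants address.
    risk = losses.sum(dim=1) - (num_classes - 1) * loss_comp
    return risk.mean()
```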
Specify the `model` argument:

- `linear`: Linear model.
- `mlp`: Multi-layer perceptron with one hidden layer (500 units).
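For reference, the two architectures are easy to express in PyTorch. A minimal sketch, assuming MNIST-sized inputs (784 features, 10 classes) and a ReLU activation for the hidden layer; the exact definitions in demo.py may differ:

```python
import torch.nn as nn

num_features, num_classes = 28 * 28, 10  # MNIST-sized inputs assumed

# linear: a single fully connected layer
linear_model = nn.Linear(num_features, num_classes)

# mlp: one hidden layer with 500 units (ReLU activation assumed)
mlp_model = nn.Sequential(
    nn.Linear(num_features, 500),
    nn.ReLU(),
    nn.Linear(500, num_classes),
)
```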
- [1] T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama. Complementary-label learning for arbitrary losses and models. In ICML 2019. [paper]
- [2] X. Yu, T. Liu, M. Gong, and D. Tao. Learning with biased complementary labels. In ECCV 2018. [paper]
- [3] T. Ishida, G. Niu, W. Hu, and M. Sugiyama. Learning from complementary labels. In NeurIPS 2017. [paper]
If you have any further questions, please feel free to send an e-mail to: ishida at ms.k.u-tokyo.ac.jp.