Rethinking Few-shot Class-incremental Learning: Learning from Yourself (ECCV2024)

Official PyTorch implementation of our ECCV2024 paper “Rethinking Few-shot Class-incremental Learning: Learning from Yourself”. [Paper]

Introduction

TL;DR

We propose a novel metric for a more balanced evaluation of Few-Shot Class-Incremental Learning (FSCIL) methods. We also provide analyses of Vision Transformers (ViT) on FSCIL and design a feature rectification module that learns from intermediate features.

Environments

  • Python: 3.8.17
  • PyTorch: 2.0.1
  • timm: 0.5.4
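The versions above can be pinned with pip. A minimal setup sketch; the package names are inferred from the list above, and the repo may ship its own requirements file:

```shell
# Assumed setup commands (not taken from the repo): pin the tested versions.
pip install torch==2.0.1 timm==0.5.4
python -c "import torch, timm; print(torch.__version__, timm.__version__)"
```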

Data Preparation

We follow prior works and conduct experiments on three standard datasets: CIFAR100, miniImageNet, and CUB200.

Download Datasets

  • The CIFAR100 dataset will be downloaded automatically to the directory specified by the `-dataroot` argument.

  • The miniImageNet and CUB200 datasets cannot be downloaded automatically; we follow CEC, and here is the download link copied from their repo.

After downloading, please put all datasets into the ./data directory.
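Assuming the conventions of CEC-style repos, the resulting layout would look roughly like the tree below. The folder names are assumptions (torchvision's default CIFAR download, the standard CUB release); check the dataloader code for the exact names it expects:

```
data/
├── cifar-100-python/    # auto-downloaded CIFAR100
├── miniimagenet/        # images + split files from the CEC link
└── CUB_200_2011/        # CUB200 images and metadata
```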

Training

  • Step 1: download checkpoints trained on the base task (task 1) from these URLs (Baidu (password = 0000), Google, or OneDrive) and put them into the checkpoints path. Note that if you want to train on task 1 yourself, please refer to the Base Task Training section.
  • Step 2: run bash runs/cifar100.sh for CIFAR100 training, bash runs/miniImageNet.sh for miniImageNet training, and bash runs/cub200.sh for CUB200 training, e.g.:
     bash runs/cifar100.sh exp_name gpu_id


Evaluation

We propose a novel evaluation metric called generalized average accuracy (gAcc), which provides a more balanced assessment of FSCIL methods. The code for gAcc is the generalised_avg_acc() function in models/metric.py, which takes the range of the parameter $\alpha$ and the accuracy of each task as input. By default, we report gAcc and aAcc of our method after training on each task; feel free to use this metric for any other method!
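As a rough illustration of how such a metric can work (this is not the repo's `generalised_avg_acc()`, and the exact formula in `models/metric.py` may differ): weight each task's accuracy by its class count, scale the base task's weight by $\alpha$, and average the resulting scores over a grid of $\alpha$ values.

```python
def gacc_sketch(alphas, task_accs, classes_per_task):
    """Hypothetical sketch of a generalized average accuracy.

    NOT the repo's generalised_avg_acc(); the function name and formula
    here are assumptions for illustration only.
    alphas:           grid of alpha values in [0, 1]
    task_accs:        accuracy on each task (base task first)
    classes_per_task: number of classes introduced by each task
    """
    scores = []
    for alpha in alphas:
        # alpha down-weights the base task; alpha = 1 recovers the
        # plain class-count-weighted average accuracy
        weights = [alpha * classes_per_task[0]] + list(classes_per_task[1:])
        weighted = sum(w * a for w, a in zip(weights, task_accs))
        scores.append(weighted / sum(weights))
    # aggregate over the alpha grid
    return sum(scores) / len(scores)
```

With `alpha = 1` this reduces to the usual class-count-weighted accuracy, while smaller `alpha` emphasizes performance on the novel (few-shot) tasks.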

Base Task Training (optional)

We also provide code for training our ViT backbone on the base task (task 1, with 60 classes). To train, please cd baseline_inc and follow the README.md in ./baseline_inc.

Acknowledgement

  • This repository is heavily based on CEC and deit.

  • If you use this paper/code in your research, please consider citing us:

@inproceedings{tang2024rethinking,
  title={Rethinking Few-shot Class-incremental Learning: Learning from Yourself},
  author={Tang, Yu-Ming and Peng, Yi-Xing and Meng, Jingke and Zheng, Wei-Shi},
  booktitle={European Conference on Computer Vision},
  year={2024}
}
