Authors: Haoran Chen, Zuxuan Wu, Xintong Han, Menglin Jia, Yu-Gang Jiang
To address the stability-plasticity dilemma of continual learning, we propose a prompt-tuning-based method termed PromptFusion that decouples stability and plasticity. Specifically, PromptFusion consists of a carefully designed Stabilizer module that mitigates catastrophic forgetting and a Booster module that concurrently learns new knowledge. Furthermore, to reduce the computational overhead introduced by the additional architecture, we propose PromptFusion-Lite, which improves on PromptFusion by dynamically deciding, for each input image, whether both modules need to be activated.
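The decoupling is easiest to see in code. Below is a minimal, self-contained PyTorch sketch of the idea: a frozen backbone is queried through two learnable prompt streams whose logits are fused, and a light per-image gate mimics the PromptFusion-Lite decision. Everything here is an illustrative assumption, not the authors' implementation: the names (`StubPromptedBackbone`, `forward_lite`), the shapes, the weighted-sum fusion, and the threshold gate are all placeholders; see the source code in this repo for the real design.

```python
import torch
import torch.nn as nn


class StubPromptedBackbone(nn.Module):
    """Placeholder for a frozen pre-trained encoder that accepts prompt tokens.

    Exists only so the sketch runs end to end; the actual method builds on a
    pre-trained ViT.
    """

    def __init__(self, embed_dim: int = 768, num_classes: int = 100):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        # Toy behavior: mean-pool the input tokens and mix in the pooled prompts.
        feat = x.mean(dim=1) + prompt.mean(dim=0)
        return self.head(feat)


class PromptFusionSketch(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int = 768, prompt_len: int = 10):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the prompts, fusion weight, and gate train
        # Stabilizer: prompt stream meant to preserve old-task knowledge.
        self.stabilizer_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Booster: prompt stream meant to absorb new-task knowledge.
        self.booster_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Learnable weight balancing the two streams (fusion rule is an assumption).
        self.alpha = nn.Parameter(torch.tensor(0.5))
        # Tiny per-image gate for the Lite variant (gate architecture is an assumption).
        self.gate = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # PromptFusion: run both prompt streams and fuse their logits.
        logits_s = self.backbone(x, self.stabilizer_prompt)
        logits_b = self.backbone(x, self.booster_prompt)
        return self.alpha * logits_s + (1 - self.alpha) * logits_b

    def forward_lite(self, x: torch.Tensor) -> torch.Tensor:
        # PromptFusion-Lite idea: decide per image whether the Booster pass is
        # worth its extra cost. For clarity both passes are computed here; a
        # real implementation would skip the Booster for gated-off images.
        use_both = torch.sigmoid(self.gate(x.mean(dim=1))) > 0.5  # (B, 1) bool
        logits_s = self.backbone(x, self.stabilizer_prompt)
        logits_b = self.backbone(x, self.booster_prompt)
        fused = self.alpha * logits_s + (1 - self.alpha) * logits_b
        return torch.where(use_both, fused, logits_s)


if __name__ == "__main__":
    model = PromptFusionSketch(StubPromptedBackbone())
    x = torch.randn(4, 196, 768)  # batch of 4 ViT-style token sequences
    print(model(x).shape, model.forward_lite(x).shape)  # both (4, 100)
```

The point of the sketch is the separation of concerns: stability and plasticity live in distinct prompt sets trained against different objectives, and the Lite gate lets easy inputs skip the second forward pass.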
- Clone the repository:
  git clone https://github.com/HaoranChen/PromptFusion.git
  cd PromptFusion
- Edit the JSON files for global settings and hyperparameters (an illustrative config sketch follows these steps).
- Run:
  python main.py --config=./config/[MODEL NAME].json
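Each file under ./config bundles the global settings and hyperparameters for one run. The snippet below is only a hypothetical sketch of what such a file might contain; every key name and value is an assumption, so consult the actual files in ./config for the real schema.

```json
{
  "model_name": "promptfusion",
  "dataset": "cifar100",
  "init_cls": 10,
  "increment": 10,
  "epochs": 20,
  "lr": 0.01
}
```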
Part of this repository is built upon LAMDA-PILOT; thanks to its authors for the well-organized codebase.
Feel free to contact us if you have any questions or suggestions. Email: [email protected]
If you use the code in this repo or find our work helpful, please consider citing it:
@inproceedings{promptfusion,
  title={PromptFusion: Decoupling Stability and Plasticity for Continual Learning},
  author={Chen, Haoran and Wu, Zuxuan and Han, Xintong and Jia, Menglin and Jiang, Yu-Gang},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}