
PromptFusion: Decoupling Stability and Plasticity for Continual Learning

📑 Paper ✒️ BibTeX

Authors: Haoran Chen, Zuxuan Wu, Xintong Han, Menglin Jia, Yu-Gang Jiang

🔍 Overview

To address the stability-plasticity dilemma of continual learning, we propose a prompt-tuning-based method termed PromptFusion that decouples stability and plasticity. Specifically, PromptFusion consists of a carefully designed Stabilizer module that counters catastrophic forgetting and a Booster module that learns new knowledge concurrently. Furthermore, to address the computational overhead introduced by the additional architecture, we propose PromptFusion-Lite, which improves PromptFusion by dynamically determining, for each input image, whether to activate both modules.
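The decoupled design can be illustrated with a minimal sketch. The function names (`stabilizer_logits`, `booster_logits`, `gate`) and the weighted-fusion scheme below are assumptions for exposition only; see the repository code for the actual implementation.

```python
# Illustrative sketch of PromptFusion's decoupling idea. All names and the
# fusion rule are assumptions, not the repository's actual API.

def stabilizer_logits(x):
    # Stands in for the Stabilizer: a prompt-tuned branch that preserves
    # knowledge of old tasks (stability).
    return [0.2 * v for v in x]

def booster_logits(x):
    # Stands in for the Booster: a prompt-tuned branch that adapts quickly
    # to the current task (plasticity).
    return [0.8 * v for v in x]

def promptfusion(x, alpha=0.5):
    # Combine both branches' predictions; alpha trades off stability
    # against plasticity.
    s, b = stabilizer_logits(x), booster_logits(x)
    return [alpha * si + (1 - alpha) * bi for si, bi in zip(s, b)]

def promptfusion_lite(x, gate, alpha=0.5):
    # PromptFusion-Lite: a per-input gate decides whether running both
    # modules is worth the extra compute; otherwise one branch suffices.
    if gate(x):
        return promptfusion(x, alpha)
    return booster_logits(x)

# Toy usage: the gate activates both modules only for "hard" inputs.
hard = promptfusion_lite([1.0, 2.0], gate=lambda x: max(x) > 1.5)
easy = promptfusion_lite([0.5, 1.0], gate=lambda x: max(x) > 1.5)
```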

🔧 Usage

Install

git clone https://github.com/HaoranChen/PromptFusion.git
cd PromptFusion

Run experiment

  1. Edit the JSON files in `config/` for global settings and hyperparameters.

  2. Run:

    python main.py --config=./config/[MODEL NAME].json

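A config file typically collects the global settings and hyperparameters mentioned in step 1. The keys below are illustrative assumptions only; the actual schema is defined by the JSON files shipped in `config/`:

```json
{
  "model_name": "promptfusion",
  "dataset": "cifar100",
  "init_cls": 10,
  "increment": 10,
  "seed": [1993],
  "device": ["0"]
}
```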
👏 Acknowledgement

Part of this repository is built upon LAMDA-PILOT, thanks for the well-organized codebase.

Contact

Feel free to contact us if you have any questions or suggestions. Email: [email protected]

✒️ Citation

If you use the code in this repo or find our work helpful, please consider citing it:

@inproceedings{promptfusion,
  title={PromptFusion: Decoupling stability and plasticity for continual learning},
  author={Chen, Haoran and Wu, Zuxuan and Han, Xintong and Jia, Menglin and Jiang, Yu-Gang},
  booktitle={ECCV},
  year={2024}
}
