
DeltaZip

DeltaZip is a system for compressing and serving full-parameter fine-tuned LLMs.

Abstract

Fine-tuning large language models (LLMs) for downstream tasks can greatly improve model quality; however, serving many different fine-tuned LLMs concurrently in multi-tenant environments is challenging. Dedicating GPU memory to each model is prohibitively expensive, and naively swapping large model weights in and out of GPU memory is slow. Our key insight is that fine-tuned models can be quickly swapped in and out of GPU memory by extracting and compressing the delta between each model and its pre-trained base model.

We propose DeltaZip, an LLM serving system that efficiently serves multiple full-parameter fine-tuned models concurrently by aggressively compressing model deltas by a factor of 6x to 8x while maintaining high model quality. DeltaZip increases serving throughput by 1.5x to 3x and improves SLO attainment compared to a vanilla HuggingFace serving system.
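The sketch below illustrates the delta-extraction-and-compression idea on simulated weights using plain PyTorch. It is not the DeltaZip implementation or API; the tensor shape, 50% magnitude pruning, and 4-bit symmetric quantization are illustrative assumptions chosen only to demonstrate the concept.

```python
# Illustrative sketch of delta extraction and compression (NOT the DeltaZip API).
import torch

def extract_delta(finetuned: torch.Tensor, base: torch.Tensor) -> torch.Tensor:
    """Delta between a fine-tuned weight tensor and its pre-trained base."""
    return finetuned - base

def compress_delta(delta: torch.Tensor, sparsity: float = 0.5, bits: int = 4):
    """Toy compression: magnitude pruning plus symmetric per-tensor quantization."""
    # Keep only the (1 - sparsity) fraction of entries with the largest magnitude.
    k = int(delta.numel() * (1.0 - sparsity))
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    pruned = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
    # Quantize the surviving entries to `bits`-bit signed integers.
    qmax = 2 ** (bits - 1) - 1
    scale = pruned.abs().max() / qmax
    q = torch.clamp((pruned / scale).round(), -qmax, qmax).to(torch.int8)
    return q, scale

def decompress_delta(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Dequantize the compressed delta back to floating point."""
    return q.float() * scale

# Random tensors standing in for one weight matrix of a base model and a
# fine-tuned model; fine-tuning typically perturbs the weights only slightly.
base = torch.randn(256, 256)
finetuned = base + 0.01 * torch.randn(256, 256)

delta = extract_delta(finetuned, base)
q, scale = compress_delta(delta)
reconstructed = base + decompress_delta(q, scale)

print("relative reconstruction error:",
      ((reconstructed - finetuned).norm() / finetuned.norm()).item())
```

Because fine-tuning perturbs the base weights only mildly, the delta is far more compressible than the full weights, which is what makes swapping compressed deltas in and out of GPU memory cheap.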

Quick Start

Stay tuned! We're updating the quick start :).

Acknowledgements

Heavily inspired by

Citation

If you find this code useful, please cite our papers:

@article{yao2023deltazip,
  title={DeltaZip: Multi-Tenant Language Model Serving via Delta Compression},
  author={Yao, Xiaozhe and Klimovic, Ana},
  journal={arXiv preprint arXiv:2312.05215},
  year={2023}
}
@inproceedings{isik2023gptzip,
  title={{GPT}-Zip: Deep Compression of Finetuned Large Language Models},
  author={Berivan Isik and Hermann Kumbong and Wanyi Ning and Xiaozhe Yao and Sanmi Koyejo and Ce Zhang},
  booktitle={Workshop on Efficient Systems for Foundation Models @ ICML2023},
  year={2023},
  url={https://openreview.net/forum?id=hO0c2tG2xL}
}

Related Projects

  • FMEngine: Utilities for training large language models.