From 47c0a34406793532d86224bca8a9672287276697 Mon Sep 17 00:00:00 2001
From: Fanli Lin
Date: Tue, 19 Nov 2024 01:58:50 +0800
Subject: [PATCH] [docs] add XPU besides CUDA, MPS etc. (#34777)

add XPU
---
 docs/source/en/quantization/quanto.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/quantization/quanto.md b/docs/source/en/quantization/quanto.md
index 18135b2ec2fc..f5bba54a6e6b 100644
--- a/docs/source/en/quantization/quanto.md
+++ b/docs/source/en/quantization/quanto.md
@@ -28,7 +28,7 @@ Try Quanto + transformers with this [notebook](https://colab.research.google.com
 - weights quantization (`float8`,`int8`,`int4`,`int2`)
 - activation quantization (`float8`,`int8`)
 - modality agnostic (e.g CV,LLM)
-- device agnostic (e.g CUDA,MPS,CPU)
+- device agnostic (e.g CUDA,XPU,MPS,CPU)
 - compatibility with `torch.compile`
 - easy to add custom kernel for specific device
 - supports quantization aware training