ENH [AutoQuantizer]: enhance trainer + not supported quant methods #28991
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Looks great - thanks for this rework!
Just a small nit to add a version in the deprecation message
src/transformers/modeling_utils.py
Outdated
```python
logger.warning(
    "`_is_quantized_training_enabled` is going to be deprecated in a future version. Please use `model.hf_quantizer.is_trainable` instead",
    FutureWarning,
)
```
Nice and safe :) Tbh, with private methods we can probably get away with no deprecation warning as it's not public.
Yeah, in my first commits I just removed that private attribute, but after giving it some thought I realised it might be better to go with this option, just to be on the safe side.
src/transformers/modeling_utils.py
Outdated
```python
@property
def _is_quantized_training_enabled(self):
    logger.warning(
        "`_is_quantized_training_enabled` is going to be deprecated in a future version. Please use `model.hf_quantizer.is_trainable` instead",
        FutureWarning,
    )
```
Message should have a specific version listed here for deprecation
Ok perfect, I will set it to two minor releases from now.
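For illustration, here is a minimal sketch of what the versioned deprecation might end up looking like. This is an assumption, not the final committed code: the class name, the exact version wording, and the use of `warnings.warn` (which, unlike `logger.warning`, accepts a warning category) are all placeholders.

```python
import warnings


class PreTrainedModelSketch:
    """Hypothetical stand-in for the relevant slice of PreTrainedModel."""

    hf_quantizer = None  # populated by from_pretrained when a quantization config is present

    @property
    def _is_quantized_training_enabled(self):
        # Placeholder version wording: "two minor releases from now", per the reply above.
        warnings.warn(
            "`_is_quantized_training_enabled` is deprecated and will be removed in a "
            "future version (two minor releases from now). Please use "
            "`model.hf_quantizer.is_trainable` instead",
            FutureWarning,
        )
        return self.hf_quantizer is not None and self.hf_quantizer.is_trainable
```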
…uggingface#28991)

* enhance trainer + not support quant methods
* remove all old logic
* add version
What does this PR do?
Currently, if a quantization method does not support PEFT fine-tuning, an old error referring to bitsandbytes is raised, regardless of the quantization method actually in use. For example, if one uses AWQ + Trainer (not supported yet, but coming soon with #28987 / huggingface/peft#1399), they'll get that old bitsandbytes error, which is very confusing.
Moreover, we should rely on `hf_quantizer.is_trainable` instead of `_is_quantized_training_enabled`.
We should instead be more precise and throw an error that states why this is not supported and how to request a fix.
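As a rough, hypothetical sketch of that idea (the helper name and error wording are illustrative, not the exact code merged in this PR; `hf_quantizer.is_trainable` and `quantization_config.quant_method` are the attributes discussed above):

```python
def _check_quantized_model_is_trainable(model):
    # Hypothetical helper: refuse to train quantized models whose quantization
    # method does not support fine-tuning, with a precise, actionable error.
    hf_quantizer = getattr(model, "hf_quantizer", None)
    if hf_quantizer is None:
        return  # not a quantized model, nothing to check
    if not hf_quantizer.is_trainable:
        quant_method = hf_quantizer.quantization_config.quant_method
        raise ValueError(
            f"The model is quantized with {quant_method}, which does not support "
            "training yet. Please open an issue on the transformers repository if "
            "you would like this quantization method to be supported."
        )
```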
cc @amyeroberts