
strange behavior when quantizing a model. #1109

Open
IdrissARM opened this issue Jan 23, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@IdrissARM

Hi all,

I was trying to quantize my model but something strange popped up.

I am using TensorFlow v2.14 and tfmot v0.7.5

I have a subclassed tf.keras.Model. It contains some custom layers as well as standard layers such as concatenate, activation, etc.

I only want some specific layers to be quantized. For instance, I do not want the concatenate layer (and two other layers) to be quantized, so I did not annotate them; they are not instances of QuantizeAnnotate.
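
For reference, this is roughly the pattern I am using (a minimal sketch, assuming `model` is already built; the layer types checked here just stand in for my actual ones):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def annotate(layer):
  # Annotate only the layers I want quantized; concatenate and two
  # other layers are deliberately left unannotated, so they should
  # never become instances of QuantizeAnnotate.
  if isinstance(layer, (tf.keras.layers.Conv2D, tf.keras.layers.Dense)):
    return tfmot.quantization.keras.quantize_annotate_layer(layer)
  return layer

annotated_model = tf.keras.models.clone_model(
    model, clone_function=annotate)

with tfmot.quantization.keras.quantize_scope():
  quantized_model = tfmot.quantization.keras.quantize_apply(annotated_model)
```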

But strangely, in the resulting model concatenate is an instance of QuantizeWrapperV2, even though I can also see in my clone function that it is not an instance of QuantizeAnnotate.

So I do not understand why, here:

we add a layer to requires_output_quantize when it is not an instance of QuantizeAnnotate, rather than checking isinstance as the name suggests. My layers that have not been annotated should simply be returned from here: but during debugging I see that concatenate and other layers that were not annotated were added to requires_output_quantize. I believe this is wrong.
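
To spell out what I am seeing in the debugger, the logic appears to behave like this (my paraphrase, not an exact copy of quantize.py; inbound_layers_of is a hypothetical helper):

```python
def _unwrap(layer):
  if not isinstance(layer, quantize_annotate_mod.QuantizeAnnotate):
    return layer
  # For an annotated layer, the *unannotated* layers feeding into it
  # get their names recorded, even though they were never annotated.
  for inbound in inbound_layers_of(layer):  # hypothetical helper
    if not isinstance(inbound, quantize_annotate_mod.QuantizeAnnotate):
      requires_output_quantize.add(inbound.name)
  ...
```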

Now, if we go to https://github.com/tensorflow/model-optimization/blob/e38d886935c9e2004f72522bf11573d43f46b383/tensorflow_model_optimization/python/core/quantization/keras/quantize.py#L418C28-L418C52 we can see that if the layer is neither in requires_output_quantize nor in layer_quantize_map, we simply return the layer unchanged. But requires_output_quantize holds layers that do NOT need to be quantized, from what I am seeing and from what this:

is suggesting.

So, according to my analysis, I would expect layer.name not in requires_output_quantize to be removed from this if statement:
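
Concretely, this is the change I would expect (as I read the linked line; abbreviated):

```python
# Current check in _quantize, as I read it:
if ((layer.name not in layer_quantize_map) and
    (layer.name not in requires_output_quantize)):
  return layer

# What I would expect instead, if requires_output_quantize really
# holds layers that should NOT be quantized:
if layer.name not in layer_quantize_map:
  return layer
```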

In conclusion, I suspect the handling of requires_output_quantize is buggy, unless there is an explanation for this behavior that I am missing.

I would really appreciate it if someone could take the time to explain this. Maybe I am wrong.

I look forward to your feedback.

Thanks,
Idriss

@IdrissARM IdrissARM added the bug Something isn't working label Jan 23, 2024
@abattery
Contributor

@Xhark can you take a look at this?

@IdrissARM
Author

Any updates @Xhark, @abattery?
