
Cannot use Quantize layer and use abstract class and methods #1122

Open
dhruven-god opened this issue Feb 26, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@dhruven-god

I am trying to quantize the whole model, but whenever I try to load the model inside a quantize scope it gives me an error like this:

import sys, os
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
import tensorflow_model_optimization as tfmot
from tensorflow.keras.utils import CustomObjectScope

customObjects = {'DefaultQuantizeConfig': tfmot.quantization.keras.QuantizeConfig}       
with tfmot.quantization.keras.quantize_scope(customObjects):
    loaded_model = load_model('UpdtQuant.h5')
(screenshot of the error message attached)

Also, when I try to define the scope myself it gives me an unknown value error: the Quantize layer is not defined.

Can someone help me with this issue?
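For context on the abstract-class part of the error: `tfmot.quantization.keras.QuantizeConfig` is an abstract base class, so mapping it directly into the custom-objects dict cannot work, because Keras would have to instantiate it during deserialization and Python refuses to instantiate a class with unimplemented abstract methods. A minimal stand-in (plain `abc`, no TensorFlow; `QuantizeConfigBase` and `MyQuantizeConfig` are hypothetical names used only for illustration) shows the failure mode:

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for an abstract API like QuantizeConfig.
class QuantizeConfigBase(ABC):
    @abstractmethod
    def get_weights_and_quantizers(self, layer): ...

# Concrete subclass: every abstract method is implemented, so it CAN
# be instantiated (this is what a custom quantize config must look like).
class MyQuantizeConfig(QuantizeConfigBase):
    def get_weights_and_quantizers(self, layer):
        return []

# Instantiating the abstract base raises TypeError -- the same reason
# passing the abstract class into a custom-objects mapping fails.
try:
    QuantizeConfigBase()
except TypeError as e:
    print("abstract class rejected:", e)

# The concrete subclass works fine.
print(MyQuantizeConfig().get_weights_and_quantizers(None))  # → []
```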

@dhruven-god dhruven-god added the bug Something isn't working label Feb 26, 2024
@tucan9389 tucan9389 self-assigned this Mar 31, 2024
@tucan9389
Member

@dhruven-god

Thanks for reporting.

Could you provide a fully reproducible script? (A Colab is recommended; a complete Python script with the tf and tfmot versions specified is also fine.)
If you can share your UpdtQuant.h5 model (or another model that can reproduce this error), it would be helpful.
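In the meantime, a sketch of the usual loading pattern (not verified against your `UpdtQuant.h5`): calling `quantize_scope()` with no arguments already registers TFMOT's own quantization classes (`QuantizeLayer`, `QuantizeWrapper`, etc.), which is what the "Quantize layer is not defined" error points at. Only genuinely custom objects need to be passed in, and they must be concrete classes, not the abstract `QuantizeConfig` base.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# With no arguments, quantize_scope registers all built-in TFMOT
# quantization objects needed to deserialize a quantized model.
with tfmot.quantization.keras.quantize_scope():
    loaded_model = tf.keras.models.load_model('UpdtQuant.h5')
```

If your model also uses a custom quantize config, pass that concrete subclass in a dict argument to `quantize_scope` instead of the abstract `QuantizeConfig` class itself.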

@tucan9389 tucan9389 removed their assignment Jul 17, 2024