Remove experimental flag for per-channel FC quantization (tensorflow#2482)

Per-channel quantization in fully connected layers is still not supported by TFLM, but the converter now has proper support, so we can remove the flag.

BUG=cl/610755484
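For reference, a minimal sketch of the int16-activation/int8-weight conversion path this test exercises, now without the flag (assuming a Keras model and a representative dataset generator; convert_int16x8 is an illustrative name, not the test's actual helper):

import tensorflow as tf

def convert_int16x8(keras_model, representative_dataset_gen):
  # Post-training quantization with int16 activations and int8 weights,
  # mirroring the converter setup in requantize_flatbuffer_test.py.
  converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
  converter.optimizations = [tf.lite.Optimize.DEFAULT]
  converter.target_spec.supported_ops = [
      tf.lite.OpsSet.
      EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
  ]
  converter.representative_dataset = representative_dataset_gen
  # No _experimental_disable_per_channel_quantization_for_dense_layers
  # override: the converter now handles per-channel fully connected
  # quantization on its own.
  return converter.convert()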
rascani authored Feb 28, 2024
1 parent 20fd5b2 commit 9adee08
Showing 1 changed file with 0 additions and 4 deletions.
4 changes: 0 additions & 4 deletions tensorflow/lite/micro/tools/requantize_flatbuffer_test.py
@@ -60,10 +60,6 @@ def convert_tfl_converter(keras_model,
       EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
   ]
   converter.representative_dataset = representative_dataset_gen
-  # TODO(b/324385802): Disable per channel quantization in FC layers (currently
-  # default behaviour) since it's not yet supported in TFLM.
-  converter._experimental_disable_per_channel_quantization_for_dense_layers = (  # pylint: disable=protected-access
-      True)
   return converter.convert()


