SavedModel conversion to integer (quantized) TFLite reports error because of fused ops #36

Closed
jvuillaumier opened this issue Mar 18, 2022 · 3 comments

@jvuillaumier

Hello,

Firstly, thank you for publishing and maintaining this lib.

Package versions used in my setup are tfjs-graph-converter 1.5.0 and tensorflow 2.8.0.

I have been experimenting with the lib using the MediaPipe Blazeface TFJS model from TensorFlow Hub:
https://tfhub.dev/tensorflow/tfjs-model/blazeface/1/default/1?tfjs-format=compressed
The TFJS model archive blazeface_1_default_1.tar.gz is extracted to:

$ ls ./blazeface_js/
group1-shard1of1.bin  model.json

Conversion to a SavedModel is then performed using the lib's CLI:

$ tfjs_graph_converter ./blazeface_js ./blazeface_savedmodel --output_format tf_saved_model
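
If I remember the package API correctly, the same conversion can also be driven from Python; treat this as a rough sketch, as the exact optional parameters may differ:

import tfjs_graph_converter.api as tfjs

# Convert the TFJS graph model directory into a TensorFlow SavedModel
tfjs.graph_model_to_saved_model("./blazeface_js", "./blazeface_savedmodel")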

The resulting SavedModel can be converted to a float32 TFLite model with:

import numpy as np
import tensorflow as tf

def representative_dummy_dataset():
    # Dummy calibration data matching the model's 1x128x128x3 float input
    for _ in range(100):
        yield [np.zeros((1, 128, 128, 3), dtype=np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("./blazeface_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dummy_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_quant_model = converter.convert()

Conversion to float32 TFLite succeeds with this notice:
TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s): Flex ops: Flex_FusedConv2D

Conversion to the integer (quantized) version is done by replacing the converter's supported_ops as below:

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

This time the conversion fails because the _FusedConv2D operator does not exist in the TFLite INT8 operation set:

loc(fused["_FusedConv2D:", "StatefulPartitionedCall/model/activation/Relu"]): error: 'tf._FusedConv2D' op is neither a custom op nor a flex op
loc(fused["_FusedConv2D:", "StatefulPartitionedCall/model/batch_normalization_v1_1/FusedBatchNormV3"]): error: 'tf._FusedConv2D' op is neither a custom op nor a flex op

With this model, two fused-op variants cause issues for quantization and need to be split before conversion:
Conv2D + BiasAdd
Conv2D + BiasAdd + Relu
There is no fused MatMul op in this model, but I presume it would have needed to be split too.

The lib's implementation already supports splitting fused ops in several cases, for instance:
_FusedConv2D(BiasAdd, Prelu) -> Conv2D + BiasAdd + Prelu

Therefore, with a local hack of the is_fused_conv2d() and _split_fused_op() functions, I unconditionally split the faulty ops:
_FusedConv2D(BiasAdd, Relu) -> Conv2D + BiasAdd + Relu
_FusedConv2D(BiasAdd) -> Conv2D + BiasAdd
This enabled conversion of the SavedModel to a functional quantized TFLite model.
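
For illustration, the split only has to reproduce the fused computation with the individual builtin ops; conceptually, in eager TF terms (the shapes below are made up for the example):

import tensorflow as tf

# A fused Conv2D + BiasAdd + Relu is numerically the chain of the individual ops
x = tf.random.normal([1, 128, 128, 3])   # example NHWC input
w = tf.random.normal([5, 5, 3, 24])      # example convolution kernel
b = tf.zeros([24])                       # example bias

conv = tf.nn.conv2d(x, w, strides=1, padding="SAME")
out = tf.nn.relu(tf.nn.bias_add(conv, b))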

Would you consider adding a 'TFLite' compatibility mode to the lib that would consistently split all fused operators present in the SavedModel?
Such a mode could be very useful for two TFLite conversion use cases:

  1. conversion to a float TFLite model with only TFLite builtin ops (no need for the TF Flex delegate); see the sketch after this list
    corresponds to: converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
  2. conversion to a quantized TFLite model
    corresponds to: converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
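
For use case 1, the converter setup would simply be the following (just a sketch, no quantization involved):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./blazeface_savedmodel")
# Restrict the model to TFLite builtin ops so no Flex delegate is needed at runtime
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_float_model = converter.convert()

I would expect this to fail for the same reason without the split, since _FusedConv2D is not a TFLite builtin op.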

Thank you for your support

@patlevin patlevin self-assigned this Mar 18, 2022
@patlevin patlevin added the enhancement New feature or request label Mar 18, 2022
@patlevin
Owner

Thank you for the detailed description👍

I will add the feature over the weekend.

@patlevin patlevin added the model support Add support for certain models that don't work with the current version label Mar 23, 2022
@patlevin
Owner

Fixed in version 1.6.0

@jvuillaumier
Author

Great addition - it works fine for me.
Thank you very much @patlevin for your support.
