
Unsupported Ops in the model before optimization TensorScatterUpdate #4222

Closed
waittim opened this issue Nov 12, 2020 · 13 comments

Comments

@waittim

waittim commented Nov 12, 2020

System information

  • TensorFlow.js version (you are using): 2.7.0

Describe the feature and the current behavior/state.
TensorScatterUpdate is not supported.

2020-11-11 18:40:06.492416: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-11 18:40:06.505295: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f9a9af053e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-11 18:40:06.505339: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-11-11 18:40:09.047157: I tensorflow/core/grappler/devices.cc:78] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2020-11-11 18:40:09.047227: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-11-11 18:40:10.009017: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: graph_to_optimize
2020-11-11 18:40:10.009041: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818]   function_optimizer: Graph size after: 11186 nodes (10781), 18356 edges (17949), time = 611.069ms.
2020-11-11 18:40:10.009047: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818]   function_optimizer: function_optimizer did nothing. time = 27.527ms.
Traceback (most recent call last):
  File "/Users/waittim/anaconda3/envs/tfjs_convert/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(pip_main())
  File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 757, in pip_main
    main([' '.join(sys.argv[1:])])
  File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 761, in main
    convert(argv[0].split(' '))
  File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/converter.py", line 699, in convert
    experiments=args.experiments)
  File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 629, in convert_tf_saved_model
    initializer_graph=frozen_initializer_graph)
  File "/Users/waittim/anaconda3/envs/tfjs_convert/lib/python3.6/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 146, in optimize_graph
    ', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
TensorScatterUpdate

Any other info.
The SavedModel is converted from an ONNX model (opset_version=11). You can find the model I used here. The output node names are 'StatefulPartitionedCall,StatefulPartitionedCall_1,StatefulPartitionedCall_2'. It's a YOLO model.
Is there any other possible solution besides waiting for support? Thank you!

waittim added the type:feature (New feature or request) label Nov 12, 2020
@rthadur
Contributor

rthadur commented Nov 12, 2020

cc @annxingyuan

@PeterL1n

Hope this is implemented soon as well!

@pyu10055
Collaborator

@PeterL1n Since this Op is very similar to ScatterNd op, we should be able to add the support fairly soon.

@waittim
Author

waittim commented Nov 17, 2020

Thank you so much! I'm waiting for it!

@Jacobsolawetz

Any updates on this thread?

@mirmohammad

Is there a way to know which PyTorch operation is converted into TensorScatterUpdate, so it can be avoided?

@not-william

Hope this is implemented!

@Cortexelus

Hope it gets implemented!

@Cortexelus

Cortexelus commented May 22, 2022

@PeterL1n Since this Op is very similar to ScatterNd op, we should be able to add the support fairly soon.

TensorScatterUpdate from the TensorFlow docs:
"This operation is very similar to tf.scatter_nd, except that the updates are scattered onto an existing tensor (as opposed to a zero-tensor). If the memory for the existing tensor cannot be re-used, a copy is made and updated."

tf.raw_ops.TensorScatterUpdate(
    tensor, indices, updates, name=None
)
tf.scatter_nd(
    indices, updates, shape, name=None
)

A way to re-write TensorScatterUpdate in terms of ScatterNd could be something like:

def TensorScatterUpdate(tensor, indices, updates):
    # zero the indices we want to update, then add in the updates
    return (tensor * ScatterNd(indices, ZerosLike(updates), tensor.shape)
            + ScatterNd(indices, updates, tensor.shape))

But before TFJS officially supports that, you could export your model with tensorflowjs_converter using the --skip_op_check flag, then implement and register TensorScatterUpdate as a custom op.

Doing this freehand and untested, but registering your custom op in JavaScript might look like this:

const customTensorScatterUpdate = function(node){
   const tensor = node.inputs[0];
   const indices = node.inputs[1];
   const updates = node.inputs[2]
   const zeros = tf.zerosLike(updates)
   return tensor * tf.scatterND(indices, zeros, tensor.shape) + tf.scatterND(indices, updates, tensor.shape)
}
tf.registerOp('TensorScatterUpdate', customTensorScatterUpdate);

I suspect this implementation would be slower, as it calls scatterND twice. I imagine a faster implementation would just edit the tensor's memory in place without making copies.
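
A side note on the sketch above: scatterND with zerosLike(updates) produces an all-zero tensor, so the multiply clears the entire base tensor rather than only the positions being updated. A mask-based variant (a rough sketch, assuming float inputs and no duplicate indices; the name is illustrative and it has not been tested against the model in this issue) could look like:

const maskedTensorScatterUpdate = function(node) {
    const tensor = node.inputs[0];
    const indices = node.inputs[1];
    const updates = node.inputs[2];
    // keepMask is 1 everywhere except 0 at the positions being updated
    const keepMask = tf.sub(tf.onesLike(tensor),
                            tf.scatterND(indices, tf.onesLike(updates), tensor.shape));
    // keep the untouched entries of the base tensor, then add the scattered updates
    return tf.add(tf.mul(tensor, keepMask),
                  tf.scatterND(indices, updates, tensor.shape));
};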

@FabioRomagnolo

FabioRomagnolo commented Jun 10, 2022

(Quoting @Cortexelus's comment above.)

For those having problems with that code, here's the fix! I actually tested it and it works.

const customTensorScatterUpdate = function(node) {
    const tensor = node.inputs[0];
    const indices = node.inputs[1];
    const updates = node.inputs[2];
    const zeros = tf.zerosLike(updates);
    const a = tf.mul(tensor, tf.scatterND(indices, zeros, tensor.shape));
    const b = tf.scatterND(indices, updates, tensor.shape);
    return a.add(b);
};
tf.registerOp('TensorScatterUpdate', customTensorScatterUpdate);
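
If you go this route, a minimal usage sketch is to register the op before loading the converted graph model (the customTensorScatterUpdate executor is the one defined above; the model path and input shape below are placeholders):

import * as tf from '@tensorflow/tfjs';

// Register the custom executor before loading a graph model converted with --skip_op_check.
tf.registerOp('TensorScatterUpdate', customTensorScatterUpdate);

async function run() {
    // 'web_model/model.json' is a placeholder path for the converted model.
    const model = await tf.loadGraphModel('web_model/model.json');
    const output = model.predict(tf.zeros([1, 416, 416, 3]));  // placeholder input shape
}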

@gaikwadrahul8
Contributor

gaikwadrahul8 commented Apr 26, 2023

Hi, @waittim

Apologies for the delayed response. I see that PR #7189 has been merged, so it seems this issue has been taken care of by that PR, and we have also updated the official documentation for tf.tensorScatterUpdate. Could you please confirm whether this issue is resolved for you? Please feel free to close the issue if it is. Thank you!
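
For reference, a minimal sketch of the now-supported op (assuming a TFJS version that includes PR #7189):

// Replace the entries of `tensor` at rows 1 and 3 with the given updates.
const tensor = tf.tensor1d([1, 2, 3, 4]);
const indices = tf.tensor2d([[1], [3]], [2, 1], 'int32');
const updates = tf.tensor1d([10, 40]);
tf.tensorScatterUpdate(tensor, indices, updates).print();  // expected: [1, 10, 3, 40]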

@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

@google-ml-butler

Closing as stale. Please @mention us if this needs more attention.

gaikwadrahul8 self-assigned this May 13, 2023