[Bug] A bug with ONNX aten::index_put #13759
Please provide a reproducible script. Otherwise it is not clear what the issue is, so I'll close it.
This is my demo, extracted from my network. @masahi
@liaojianjin Sorry to bother you. I have a problem with index_put: it can't support dynamic shapes. Do you have any ideas on how to solve this?
@honghuichao Thanks for your report.
In the function `_check_index(cls, indices, values)`, `infer_shape` is called to get the size of the indices, but that size cannot be inferred when the shape is dynamic. My environment is based on pytorch == 1.8.0. I also think that if `torch.onnx.export` is run on pytorch 1.13.1, `aten::index_put` no longer shows up in the ONNX graph; a ScatterND node may appear instead.
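For context, a minimal sketch (my own, not from the thread) of what a guard against this could look like, assuming the frontend's usual `infer_shape` helper from `tvm.relay.frontend.common`:

```python
import tvm
from tvm.relay.frontend.common import infer_shape

def all_dims_static(expr):
    # infer_shape reports tvm.tir.Any for dimensions that are only known at
    # runtime; if any dimension is Any, the size cannot be used as a static
    # Python int, so the converter should fall back to a relay-level path.
    return all(not isinstance(dim, tvm.tir.Any) for dim in infer_shape(expr))
```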
@honghuichao You can check the shape of the index tensors beforehand.
Thank you for your suggestion. If you don't have time to work on this problem, I'd like to try to solve it myself.
You can find how the index_put node is exported here: @_onnx_symbolic("aten::index_put"). If your index is a slice, it will be converted to a tensor in ReshapeToAdvancedIndexingFormat. Please provide a demo ONNX model so I can help you with this problem; I can't downgrade pytorch to 1.8 at the moment.
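For reference, here is a minimal sketch (mine, not from the thread) of exporting an index_put-style assignment from PyTorch; with a recent PyTorch and opset >= 11 it is lowered to an ONNX ScatterND node rather than a custom aten::index_put node:

```python
import torch

class IndexPut(torch.nn.Module):
    def forward(self, x, idx, values):
        out = x.clone()
        out[idx] = values  # traced as aten::index_put_
        return out

x = torch.zeros(8, 4)
idx = torch.tensor([1, 3])
values = torch.ones(2, 4)
torch.onnx.export(
    IndexPut(), (x, idx, values), "index_put.onnx",
    opset_version=11,
    input_names=["x", "idx", "values"],
    dynamic_axes={"x": {0: "n"}, "idx": {0: "k"}, "values": {0: "k"}},
)
```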
@liaojianjin Thank you very much for your help! I constructed the following network structure using the onnx API:

```python
from onnx import helper, TensorProto

dummy_batch_src_corr_points = helper.make_tensor_value_info(
    "dummy_batch_src_corr_points", TensorProto.FLOAT, ["bscp_shape0", "bscp_shape1"]
)
dummy_output = helper.make_tensor_value_info(
    "output", TensorProto.FLOAT, ["o_shape0", "o_shape1"]
)
c4 = helper.make_node(
    "Constant", [], ["4"],
    value=helper.make_tensor("4", TensorProto.INT32, (1,), (0,)),
)
# ... (the rest of the graph construction was truncated in the original post)
```
@honghuichao You can try this commit first; I will create a PR for TVM.
@liaojianjin Thank you. But when I tested the whole network, I found that "cuda: an illegal memory access was encountered".
@honghuichao Can you find out why it happened, or share the input and output of index_put? I only flatten the index tensor for now.
@liaojianjin But another case fails with the new code:

```python
import onnx
from onnx import helper, TensorProto

dummy_batch_src_corr_points = helper.make_tensor_value_info(
    "dummy_batch_src_corr_points", TensorProto.FLOAT, ["bscp_shape0", "bscp_shape1"]
)
dummy_indices0 = helper.make_tensor_value_info(
    "dummy_indices0", TensorProto.FLOAT, ["i0_shape0"]
)
dummy_src_corr_points = helper.make_tensor_value_info(
    "dummy_src_corr_points", TensorProto.FLOAT, ["v_shape0"]
)
dummy_output = helper.make_tensor_value_info(
    "output", TensorProto.FLOAT, ["o_shape0", "o_shape1"]
)
c4 = helper.make_node(
    "Constant", [], ["4"],
    value=helper.make_tensor("4", TensorProto.INT32, (2,), (-1, 1)),
)
# ... (the intermediate graph construction was truncated in the original post)
onnx.checker.check_model(model_def)
```
@honghuichao We may need to repeat the value of index1 to match the size of index0, but the size of index0 is unknown.
@liaojianjin I think we can use `relay.shape_of` to get index0's shape, but I don't know how to repeat the value of index1.
@honghuichao You can try:
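A minimal sketch of one possible approach (my own, not necessarily what was suggested in this reply), combining `relay.shape_of` with relay's dynamic `broadcast_to`:

```python
import tvm
from tvm import relay

# index0 has a dynamic length; index1 holds a single index value.
index0 = relay.var("index0", shape=(relay.Any(),), dtype="int64")
index1 = relay.const([5], dtype="int64")

# shape_of yields a 1-D tensor holding [len(index0)] at runtime; passing a
# relay expression as the target shape dispatches to the dynamic broadcast_to.
n = relay.shape_of(index0, dtype="int64")
index1_repeated = relay.broadcast_to(index1, n)
```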
The reason for this problem is that the second input, indices, cannot be a tensor with a dynamic shape:
```
  self._construct_nodes(graph)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 6567, in _construct_nodes
    op = self._convert_operator(op_name, inputs, attr, self.opset)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 6686, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 4194, in _impl_v1
    return cls._op_dispatch(operator, inputs, attr, params)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 4042, in _op_dispatch
    return op_map[operator](inputs, attr, params)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 4140, in _index_put
    indices, values = cls._check_index(inputs[1 : len(inputs) - 2], inputs[len(inputs) - 2])
  File "/home/workspace/tvm/python/frontend/onnx.py", line 4134, in _check_index
    return unfolding_indices(indices, values)
  File "/home/workspace/tvm/python/frontend/onnx.py", line 4127, in unfolding_indices
    _op.repeat(_op.tile(flatten_indices[i], (tile_size[i],)), repeat_size[i], 0)
  File "/home/workspace/tvm/python/tvm/relay/op/transform.py", line 665, in repeat
    return _make.repeat(data, repeats, axis)
  File "/home/workspace/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (5) /home/workspace/tvm/build/libtvm.so(TVMFuncCall+0x63) [0x7f32be9d7123]
  [bt] (4) /home/workspace/tvm/build/libtvm.so(tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::RelayExpr, int, int)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::RelayExpr, int, int)>(tvm::RelayExpr (*)(tvm::RelayExpr, int, int), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)+0x1fe) [0x7f32be2fa1be]
  [bt] (3) /home/workspace/tvm/build/libtvm.so(tvm::runtime::TVMMovableArgValueWithContext_::operator int() const+0x28) [0x7f32bd0962b8]
  [bt] (2) /home/workspace/tvm/build/libtvm.so(tvm::runtime::TVMPODValue_::operator int() const+0x194) [0x7f32bcfc1944]
  [bt] (1) /home/workspace/tvm/build/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x45) [0x7f32bcc5815b]
  [bt] (0) /home/workspace/tvm/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x22) [0x7f32be9f7892]
  File "/home/workspace/tvm/include/tvm/runtime/packed_func.h", line 777
TVMError: In function relay.op._make.repeat(0: RelayExpr, 1: int, 2: int) -> RelayExpr: error while converting argument 1: [17:09:32] /home/workspace/tvm/include/tvm/runtime/packed_func.h:562:
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
Check failed: type_code_ == kDLInt (8 vs. 0) : expected int but got Object
```
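For reference, the signature in the error message, `relay.op._make.repeat(0: RelayExpr, 1: int, 2: int)`, shows that the `repeats` argument of `relay.repeat` must be a static Python integer. A minimal sketch (mine) of what triggers the failure:

```python
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")

ok = relay.repeat(x, repeats=2, axis=0)  # fine: repeats is a static int

n = relay.take(relay.shape_of(x), relay.const(0))
# relay.repeat(x, repeats=n, axis=0)
# -> TVMError: expected int but got Object, because a dynamically computed
#    size (a relay expression) cannot be passed where a static int is required
```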