Hello,
I have been trying to export an ONNX classification model as FP16, but I keep running into issues because certain layers are not converting to FP16.
Here are the steps I have taken so far:
1. I followed issue #245 and changed the files (dcnv3.py and dcnv3_func.py) to force dtype=torch.float16 rather than torch.float, as sketched below:
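(An illustrative sketch of the kind of edit; the helper below is hypothetical and simplified, not the exact InternImage code.)

```python
import torch

# Hypothetical helper showing the dtype change: tensors that were built
# with dtype=torch.float are now built with dtype=torch.float16.
def _get_reference_points(H, W, device):
    ref_y, ref_x = torch.meshgrid(
        torch.linspace(0.5, H - 0.5, H, dtype=torch.float16, device=device),   # was torch.float
        torch.linspace(0.5, W - 0.5, W, dtype=torch.float16, device=device))   # was torch.float
    return torch.stack((ref_x / W, ref_y / H), dim=-1)
```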
2. I have also forced the model to FP16 with .half() in the export.py file in the classification folder. The torch2onnx change looks roughly like this:
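(Sketch only; the signature and dummy-input shape are assumptions, not the exact code in classification/export.py.)

```python
import torch

def torch2onnx(model, onnx_file, img_size=224):
    # Force all weights to FP16 before tracing.
    model = model.cuda().half().eval()
    # FP16 dummy input to match the half-precision weights.
    dummy_input = torch.randn(1, 3, img_size, img_size, device='cuda').half()
    torch.onnx.export(model,
                      dummy_input,
                      onnx_file,
                      input_names=['input'],
                      output_names=['output'],
                      opset_version=11)
```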
While the export itself works fine, the later steps for testing the ONNX model fail due to the layer type issues described below.
3. Changed the core_op in the corresponding yaml file, classification/configs/internimage_b_1k_224.yaml, to core_op: 'DCNv3_pytorch', as shown below:
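(The surrounding key layout is an assumption based on the standard InternImage classification configs; only the core_op value is the actual change.)

```yaml
MODEL:
  TYPE: intern_image
  INTERN_IMAGE:
    core_op: 'DCNv3_pytorch'  # was 'DCNv3' (the compiled CUDA op)
```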
4. Converted the ONNX file to an inference session via onnxruntime:
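The call in question, reconstructed from the traceback below (only the model filename is kept; the directory is omitted):

```python
import onnxruntime as ort

onnx_file = 'intern_image_b_1k_224_half.onnx'  # exported FP16 model
inference_sess = ort.InferenceSession(onnx_file,
                                      providers=['CUDAExecutionProvider'],
                                      core_op='DCNv3_pytorch',  # extra kwarg as in the original call
                                      sess_options=ort.SessionOptions())
```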
This is the error I am running into when the line above is run:

Traceback (most recent call last):
  File "onnx_intern_image_test.py", line 119, in <module>
    inference_sess = ort.InferenceSession(onnx_file, providers=['CUDAExecutionProvider'], core_op='DCNv3_pytorch', sess_options=ort.SessionOptions())
  File "~*******/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "~*****/miniconda3/envs/internimage/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ~*******InternImage/classification/intern_image_b_1k_224_half.onnx failed:Type Error: Type parameter (T) of Optype (Div) bound to different types (tensor(float) and tensor(float16) in node (Div_209).

As you can see, the node Div_209 is bound to two different types (tensor(float) and tensor(float16)).
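For reference, the failing node can be inspected directly with the onnx Python API (a small diagnostic sketch, assuming the onnx package is installed):

```python
import onnx

# Load the exported model and print the inputs/outputs of the failing node.
model = onnx.load('intern_image_b_1k_224_half.onnx')
for node in model.graph.node:
    if node.name == 'Div_209':
        print(node.op_type, list(node.input), list(node.output))
```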
Please let me know if there are any fixes or any known way to convert the classification model to FP16 for ONNX.
Thanks in advance!