Conversion error: Shape must be rank 1 but is rank 2 #91
I'll start investigating. By the way, I have only looked at the structure of the model, so let me ask: is this YOLOv5-Lite? |
@PINTO0309 Yes, but it has been modified. |
Thanks for your help. It would be better if you could include the architecture of the model if possible, so that other engineers can easily find this issue when they search for it. Either way, I'll get to work on it. Please wait a moment. |
@PINTO0309 Okay. |
Squeeze layers are inserted after several layers, a number of single-value Const layers are replaced directly, and the other point is that the last 5D Reshape/Transpose constants ([1,16,16,3,9] with order [0,3,1,2,4], and [1,32,32,3,9] with order [0,3,1,2,4]) are rewritten. The replace.json I used is below:
{
"format_version": 2,
"layers": [
{
"layer_id": "316",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "320",
"type": "Const",
"replace_mode": "direct",
"values": [
3
]
},
{
"layer_id": "321",
"type": "Const",
"replace_mode": "direct",
"values": [
0
]
},
{
"layer_id": "322",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "377",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "383",
"type": "Const",
"replace_mode": "direct",
"values": [
1
]
},
{
"layer_id": "384",
"type": "Const",
"replace_mode": "direct",
"values": [
0
]
},
{
"layer_id": "385",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "389",
"type": "Const",
"replace_mode": "direct",
"values": [
2
]
},
{
"layer_id": "390",
"type": "Const",
"replace_mode": "direct",
"values": [
0
]
},
{
"layer_id": "391",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "394",
"type": "Const",
"replace_mode": "direct",
"values": [
1,
16,
16,
3,
9
]
},
{
"layer_id": "396",
"type": "Const",
"replace_mode": "direct",
"values": [
0,
3,
1,
2,
4
]
},
{
"layer_id": "432",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "438",
"type": "Const",
"replace_mode": "direct",
"values": [
1
]
},
{
"layer_id": "439",
"type": "Const",
"replace_mode": "direct",
"values": [
0
]
},
{
"layer_id": "440",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "444",
"type": "Const",
"replace_mode": "direct",
"values": [
2
]
},
{
"layer_id": "445",
"type": "Const",
"replace_mode": "direct",
"values": [
0
]
},
{
"layer_id": "446",
"type": "Squeeze",
"replace_mode": "insert_after",
"values": [
0
]
},
{
"layer_id": "449",
"type": "Const",
"replace_mode": "direct",
"values": [
1,
32,
32,
3,
9
]
},
{
"layer_id": "451",
"type": "Const",
"replace_mode": "direct",
"values": [
0,
3,
1,
2,
4
]
}
]
}
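For intuition, here is a minimal numpy sketch of what the Const values at layers 394/396 (and analogously 449/451) encode. It assumes the tensor feeding the Reshape is NHWC with shape [1, 16, 16, 27] (27 = 3 anchors x 9 channels); that input shape is my assumption, not something taken from the model.

import numpy as np

# Hypothetical NHWC head tensor; the [1, 16, 16, 27] shape is an assumption.
x = np.zeros((1, 16, 16, 27), dtype=np.float32)

# Layer 394: Reshape target [1, 16, 16, 3, 9]
y = x.reshape(1, 16, 16, 3, 9)

# Layer 396: Transpose order [0, 3, 1, 2, 4]
z = np.transpose(y, (0, 3, 1, 2, 4))

print(y.shape)  # (1, 16, 16, 3, 9)
print(z.shape)  # (1, 3, 16, 16, 9)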
I have not checked whether the model works correctly at all. Please check it yourself, and if there are any problems with the structure, you can look into them yourself. |
@PINTO0309 Wonderful. Thanks for your support. I noticed that the shape="" attributes of layer IDs 375, 376, 430, 431, 455, and 480 were not replaced. Also, how do I work out whether the values should be 0, 1, 2, and so on, with reference to the JSON below? And can you recommend software for viewing the .bin file? {
"layer_id": "320",
"type": "Const",
"replace_mode": "direct",
"values": [
3
]
},
{
"layer_id": "321",
"type": "Const",
"replace_mode": "direct",
"values": [
0 |
# Launch the openvino2tensorflow container
docker run --gpus all -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/openvino2tensorflow:latest

# Simplify the ONNX model in place
python3 -m onnxsim lite.onnx lite.onnx

# Convert the ONNX model to OpenVINO IR (FP32)
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
  --input_model lite.onnx \
  --data_type FP32
{
"format_version": 2,
"layers": [
{
"layer_id": "358",
"type": "Const",
"replace_mode": "direct",
"values": [
0,
3,
1,
2,
4
]
},
{
"layer_id": "393",
"type": "Const",
"replace_mode": "direct",
"values": [
0,
3,
1,
2,
4
]
}
]
}

# Convert the OpenVINO IR to saved_model / pb / float32 tflite, applying the replacement config
openvino2tensorflow \
  --model_path lite.xml \
  --output_saved_model \
  --output_pb \
  --output_no_quant_float32_tflite \
  --weight_replacement_config replace.json
import onnxruntime
import tensorflow as tf
import time
import numpy as np
from pprint import pprint
H=512
W=512
MODEL='model_float32'
############################################################
onnx_session = onnxruntime.InferenceSession('lite.onnx')
input_name = onnx_session.get_inputs()[0].name
output_name = onnx_session.get_outputs()[0].name
roop = 1
e = 0.0
result = None
inp = np.ones((1,3,H,W), dtype=np.float32)
for _ in range(roop):
    s = time.time()
    result = onnx_session.run(
        [output_name],
        {input_name: inp}
    )
    e += (time.time() - s)
print('ONNX output @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@')
print(f'elapsed time: {e/roop*1000}ms')
print(f'shape: {result[0].shape}')
pprint(result)
############################################################
interpreter = tf.lite.Interpreter(model_path=f'{MODEL}.tflite', num_threads=4)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
roop = 1
e = 0.0
result = None
inp = np.ones((1,H,W,3), dtype=np.float32)
for _ in range(roop):
    s = time.time()
    interpreter.set_tensor(input_details[0]['index'], inp)
    interpreter.invoke()
    result = interpreter.get_tensor(output_details[1]['index'])
    e += (time.time() - s)
print('tflite output @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@')
print(f'elapsed time: {e/roop*1000}ms')
print(f'shape: {result.shape}')
pprint(result)

python3 onnx_tflite_test.py
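As an optional sanity check, not part of the original script, the two results printed above can be compared numerically. This is only a sketch: onnx_result and tflite_result are placeholder names for result[0] from the onnxruntime block and result from the tflite block, and the comparison is only meaningful if both arrays have the same shape and layout.

import numpy as np

def outputs_match(onnx_result, tflite_result, atol=1e-4):
    # Shapes must match before an element-wise comparison makes sense.
    if onnx_result.shape != tflite_result.shape:
        return False
    # True when all elements agree within the given tolerance.
    return np.allclose(onnx_result, tflite_result, atol=atol)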
|
@PINTO0309 The process was successful and the outputs were equal, but the detection result obtained with detect.py is still wrong, similar to my earlier image post. |
Please attach the code you are using for inference testing; otherwise this exchange will be inefficient. |
https://github.com/zldrobit/yolov5/blob/tf-android/detect.py I am using the above for testing. I am also trying to run it on Android. |
Can you provide me with one still image for testing? I have no good way of knowing what your model is inferring. And is the model you used Float32? Is it INT8? Float16? Please describe the information in detail. |
the model was trained on four classes |
Do you have any ONNX test code with which you were able to run inference successfully? |
The 3D output1 is okay, but the 5D output2 and output3 don't align with the original yolov5.tflite. For example, my output2 is [1,16,16,3,9], while the corresponding yolov5 output is [1,256,3,9]. The separate [16,16] and [32,32] grid dimensions in my tflite are multiplied together in the yolov5.tflite, giving [256] and [1024]. |
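For what it's worth, the two layouts hold the same number of elements; flattening the grid dimensions of the 5D tflite outputs reproduces the yolov5-style shapes. A small numpy illustration with dummy tensors (not the real model outputs):

import numpy as np

# Dummy tensors with the tflite output shapes quoted above.
out2 = np.zeros((1, 16, 16, 3, 9), dtype=np.float32)
out3 = np.zeros((1, 32, 32, 3, 9), dtype=np.float32)

# 16*16 = 256 and 32*32 = 1024, so merging the grid dims gives the yolov5-style shapes.
print(out2.reshape(1, 256, 3, 9).shape)   # (1, 256, 3, 9)
print(out3.reshape(1, 1024, 3, 9).shape)  # (1, 1024, 3, 9)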
I'll say it again. The .onnx that I extracted by unzipping the zip file you attached in your first comment is already 5D. You keep pointing to the .tflite file, but it's not the .tflite that's the problem. The problem is with your model. Check the ONNX file first. |
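One quick way to do that check is to list the output shapes reported by the ONNX model itself, for example with onnxruntime (already used in the test script above); the filename lite.onnx is assumed from earlier in this thread:

import onnxruntime

# Load the exported model and print every output name and shape in the graph.
sess = onnxruntime.InferenceSession('lite.onnx')
for out in sess.get_outputs():
    print(out.name, out.shape)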
Alright... is there a way to solve this problem? The ONNX I tested works fine compared to the tflite... |
First, I have not seen the structure of the YOLOv5-Lite model, so I do not know whether any given final structure is correct.
However, nowhere in the comments on this issue do you say how you modified the model. It is obvious that the model is already 5D when you export from PyTorch to ONNX, and all I can say at this point is that you made some mistake when you modified the PyTorch model. If 4D is correct, please provide the 4D-formatted ONNX first, because I don't know the correct structure after the Transpose. Note that a discussion of how to correctly export YOLOv5-Lite models from PyTorch to ONNX is beyond the scope of this repository. |
Yes, correct, the conversion was successful. I am also confused about the detection results from testing. |
I assure you it's not a problem with my tools. The ONNX that you generated from PyTorch has three outputs. Does the official model also have three outputs? If the official model has one output instead of three, then some mistake was made when you first generated the ONNX.
I'm afraid that's probably not correct. First, please consult the experts in the YOLOv5-Lite repository. Then, when you are able to generate ONNX with the correct structure, please come back to this repository. |
Above is the model generated from a customized dataset using the original YOLOv5-Lite. I actually used two detection scales instead of the three used by the original YOLOv5. |
lite4D.zip |
I apologize for the inconvenience, but could you open a separate issue so that the various discussions don't get mixed up? |
Issue Type
Support, Others
OS
Windows
OS architecture
x86_64
Programming Language
Python
Framework
PyTorch
Download URL for ONNX / OpenVINO IR
model.zip
Convert Script
Description
@PINTO0309 Thanks for your good work.
I got the error shown in the output log while trying to convert from .xml to saved_model, pb, and tflite. I also tried using replace.json for layer_id 325, but all efforts were fruitless. How do I solve this problem?
Relevant Log Output
Source code for simple inference testing code
model.zip