
Yolox inference #114

Closed
tucachmo2202 opened this issue Jul 29, 2021 · 11 comments

@tucachmo2202

Hi, thanks for your great work!
I downloaded the TensorFlow Lite YOLOX model you converted. I also wrote inference code for the TensorFlow Lite model following this file, only changing how the model is loaded and the input shape to match the TensorFlow Lite format. It runs but returns no objects...
Could you help me fix this, or do you plan to provide inference code for YOLOX?
Thank you very much!
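
For anyone hitting the same "returns no objects" symptom, here is a minimal sketch of driving the downloaded .tflite model with the TensorFlow Lite interpreter. The file name, input size, and the plain /255 preprocessing are assumptions taken from this thread, not the repository's official demo code.

import cv2
import numpy as np
import tensorflow as tf

# Assumed file name and input size; adjust to the model actually downloaded.
MODEL_PATH = "model_float32.tflite"
INPUT_SIZE = 320

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Simple preprocessing: resize and scale to [0, 1]. The conversion script later
# in this thread uses 'data / 255' as the normalization formula.
img = cv2.imread("sample.jpg")
img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE))
blob = np.expand_dims(img.astype(np.float32) / 255.0, axis=0)  # NHWC [1, 320, 320, 3]

interpreter.set_tensor(input_details[0]["index"], blob)
interpreter.invoke()
preds = interpreter.get_tensor(output_details[0]["index"])
print(preds.shape)  # e.g. (1, 2100, 85) for a 320x320 model

Note that this output is still the raw YOLOX head tensor; it has to be decoded (grid offsets, strides, score threshold, NMS) before any boxes appear, which may be why nothing is detected.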

@ghost commented Jul 29, 2021

Can you provide the converted .tflite file?

@tucachmo2202 (Author)

@DonkeySmall You can go to this link and then run sh download_nano.sh to download the model. I also uploaded the float32 model to Google Drive.

@ghost commented Jul 29, 2021

The original model's output is 1x8400x85

[screenshot: Netron view of the original model's output node]

but your tflite model's output is 1x2100x85

[screenshots: Netron views of the tflite model's output node]

Maybe this is the problem?
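
For reference, the row count in the second dimension is just the sum of grid cells over YOLOX's three detection strides (8, 16, 32), so it is a direct function of the input resolution. A quick sketch of that arithmetic (the stride set is the standard YOLOX one, assumed unchanged by the conversion):

def num_predictions(input_size, strides=(8, 16, 32)):
    # One prediction row per grid cell, summed over the three feature maps.
    return sum((input_size // s) ** 2 for s in strides)

print(num_predictions(640))  # 8400
print(num_predictions(416))  # 3549
print(num_predictions(320))  # 2100

So 8400 corresponds to a 640x640 input and 2100 to a 320x320 input; a 416x416 model would give 3549 rows.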

@tucachmo2202 (Author)

@DonkeySmall,
The original model uses a 416x416 input, so its output is larger than the tflite model's, which uses a 320x320 input. However, I don't think that is the problem. I guess there is some difference in the preprocessing that I have to change, not just reshaping the input image.
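
If the raw 1xNx85 tensor is being consumed directly, the step most likely missing is the YOLOX decode rather than the resize itself: the first two channels are grid-relative center offsets and the next two are log-encoded sizes, so each row must be mapped back through its grid cell and stride before thresholding and NMS. A minimal sketch of that decode, mirroring the demo_postprocess logic in the official YOLOX demos and assuming the converted model keeps the same row ordering (stride 8, then 16, then 32):

import numpy as np

def decode_yolox(outputs, input_size=320, strides=(8, 16, 32)):
    # outputs: raw model output of shape [1, N, 85]
    grids, expanded_strides = [], []
    for stride in strides:
        size = input_size // stride
        xv, yv = np.meshgrid(np.arange(size), np.arange(size))
        grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
        grids.append(grid)
        expanded_strides.append(np.full((1, grid.shape[1], 1), stride))
    grids = np.concatenate(grids, axis=1)
    expanded_strides = np.concatenate(expanded_strides, axis=1)

    outputs = outputs.copy()
    outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides   # center x, y in pixels
    outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides   # width, height in pixels
    return outputs  # column 4 is objectness, columns 5+ are class scores

Skipping this decode (or applying a score threshold to the undecoded values) will typically produce no detections at all.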

@PINTO0309 (Owner)

PINTO0309/openvino2tensorflow#48

@tucachmo2202 (Author)

@PINTO0309,
I hope you can fix it soon!
Best regards!

@PINTO0309 (Owner) commented Jul 31, 2021

Fixes: PINTO0309/openvino2tensorflow#48

https://twitter.com/Nextremer_nb_o/status/1421461443501649920?s=20

@tucachmo2202 (Author)

Hi @PINTO0309,
Glad to hear you fixed this bug. However, when I convert the model from yolox_nano_416x416.onnx with your script, I get an error:

ValueError: Dimension 1 in both shapes must be equal, but are 676 and 169. Shapes are [1,676] and [1,169]. for '{{node tf.concat_17/concat}} = ConcatV2[N=3, T=DT_FLOAT, Tidx=DT_INT32](Placeholder, Placeholder_1, Placeholder_2, tf.concat_17/concat/axis)' with input shapes: [1,2704,85], [1,676,85], [1,169,85], [] and with computed input tensors: input[3] = <2>.

I installed OpenVINO via Docker. Could you please help me? Thank you very much!
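
For reference, the three shapes in that error are exactly the per-stride grids of a 416x416 input (52² = 2704, 26² = 676, 13² = 169), and they should be concatenated along the prediction axis (axis 1); the error shows the Concat being resolved with axis 2 (the computed input[3] = <2>), which only works when the grid dimensions happen to match. A quick sketch of the arithmetic and the expected concatenation, assuming the standard YOLOX head layout:

import numpy as np

# Per-stride grid sizes for a 416x416 YOLOX input.
branches = [(416 // s) ** 2 for s in (8, 16, 32)]
print(branches)                              # [2704, 676, 169]

# The three head outputs should be joined along the prediction axis (axis=1),
# not the channel axis (axis=2) that appears in the error message.
outs = [np.zeros((1, n, 85), dtype=np.float32) for n in branches]
print(np.concatenate(outs, axis=1).shape)    # (1, 3549, 85)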

@PINTO0309 (Owner) commented Aug 1, 2021

For example,

$ rm yolox_nano_320x320.xml
$ cp yolox_nano_320x320_tf.xml yolox_nano_320x320.xml

Then re-run the conversion.

@tucachmo2202 (Author) commented Aug 1, 2021

I don't really understand. I can't find any file named yolox_nano_320x320_tf.xml (or yolox_nano_416x416_tf.xml in my case). My full script is:

# Start the conversion container
xhost +local: && \
docker run -it --rm \
-v `pwd`:/home/user/workdir \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
pinto0309/openvino2tensorflow:latest

MODEL=yolox_nano
H=416
W=416

# ONNX -> OpenVINO IR (FP32 and FP16)
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP32 \
--output_dir openvino/${MODEL}/${H}x${W}/FP32
$INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo.py \
--input_model ${MODEL}_${H}x${W}.onnx \
--data_type FP16 \
--output_dir openvino/${MODEL}/${H}x${W}/FP16

# FP16 IR -> MYRIAD blob
mkdir -p openvino/${MODEL}/${H}x${W}/myriad
${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/myriad_compile \
-m openvino/${MODEL}/${H}x${W}/FP16/${MODEL}_${H}x${W}.xml \
-ip U8 \
-VPU_NUMBER_OF_SHAVES 4 \
-VPU_NUMBER_OF_CMX_SLICES 4 \
-o openvino/${MODEL}/${H}x${W}/myriad/${MODEL}_${H}x${W}.blob

# FP32 IR -> saved_model / TFLite / TFJS / CoreML
openvino2tensorflow \
--model_path openvino/${MODEL}/${H}x${W}/FP32/${MODEL}_${H}x${W}.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--output_weight_quant_tflite \
--output_float16_quant_tflite \
--output_integer_quant_tflite \
--string_formulas_for_normalization 'data / 255' \
--output_integer_quant_type 'uint8' \
--output_tfjs \
--output_coreml \
--weight_replacement_config weight_replacement_config_${MODEL}.json
mv saved_model saved_model_${MODEL}_${H}x${W}

Could you help me fix my script?
Thank you!
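
One thing that may be worth checking (an assumption on my part, not a confirmed fix): weight_replacement_config_${MODEL}.json targets layer IDs and constant values generated for one specific IR, so a config written against a 320x320 IR will not necessarily line up with the 416x416 IR produced by the script above. A rough sketch for listing the Const layers that feed Reshape/Transpose/Concat nodes in the generated IR, so their IDs can be compared against whatever the config references (the XML path is taken from the script above and is an assumption):

import xml.etree.ElementTree as ET

# Parse the OpenVINO IR produced by the Model Optimizer step above.
tree = ET.parse("openvino/yolox_nano/416x416/FP32/yolox_nano_416x416.xml")
root = tree.getroot()

layers = {l.get("id"): l for l in root.find("layers")}

# Const inputs to shape-sensitive layers are the usual targets of a weight
# replacement config when the converted output shapes come out wrong.
for edge in root.find("edges"):
    src = layers[edge.get("from-layer")]
    dst = layers[edge.get("to-layer")]
    if src.get("type") == "Const" and dst.get("type") in ("Reshape", "Transpose", "Concat"):
        print(f"Const {src.get('id')} -> {dst.get('type')} {dst.get('id')} ({dst.get('name')})")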

@PINTO0309 (Owner)

#126

Repository owner locked as resolved and limited conversation to collaborators Aug 5, 2021