I have trained a rotated bounding box model on my custom dataset and wish to use the model outside the Docker container in my own application. The easiest way I have found is to export the model to ONNX and run inference with the onnxruntime library, which I have managed to do. The problem I am facing is that I cannot make sense of the model output and cannot find any documentation explaining it. The model produces a list of ten arrays with the following shapes:
0: 1, 27, 160, 160
1: 1, 27, 80, 80
2: 1, 27, 40, 40
3: 1, 27, 20, 20
4: 1, 27, 10, 10
5: 1, 162, 160, 160
6: 1, 162, 80, 80
7: 1, 162, 40, 40
8: 1, 162, 20, 20
9: 1, 162, 10, 10
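For what it's worth, these five spatial sizes look to me like an FPN pyramid. Assuming a 1280x1280 network input (my guess from the 160x160 top level; adjust if yours differs), each level's stride works out to a power of two:

```python
# Hypothetical sanity check: if the input is 1280x1280, the five grid
# sizes correspond to feature-pyramid strides 8 through 128.
input_size = 1280  # assumption, not confirmed from the repo
grid_sizes = [160, 80, 40, 20, 10]
strides = [input_size // g for g in grid_sizes]
print(strides)  # [8, 16, 32, 64, 128]
```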
Looking at the export method of the Model class, the model outputs are named as follows:
output_names = ['score_1', 'score_2', 'score_3', 'score_4', 'score_5', 'box_1', 'box_2', 'box_3', 'box_4', 'box_5']
I therefore assume that list element 0 corresponds to 'score_1', element 1 to 'score_2', and so on. On that assumption, each 'score_X' has 27 channels and each 'box_X' has 162 channels over the same spatial grid as its corresponding score. I assume the 27 score channels correspond to the anchors. Looking at the infer.py file, I further assume that the 6x difference between the box and score channel counts is due to the x1, y1, x2, y2, sin, and cos components of the box for each anchor.
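To make my assumed layout concrete, here is a rough sketch with dummy data for the largest level. The 27-anchor count and the (anchor, component) channel ordering are both unconfirmed guesses on my part; the ordering could just as well be (component, anchor), which is one of the things I'd like confirmed:

```python
import numpy as np

# Assumption (unconfirmed): 27 = anchors per cell, and each box map
# packs 6 regression values per anchor, so 162 = 27 * 6.
num_anchors = 27
h, w = 160, 160

score_map = np.random.rand(1, num_anchors, h, w).astype(np.float32)
box_map = np.random.rand(1, num_anchors * 6, h, w).astype(np.float32)

# Reshape boxes to (anchors, 6, H, W) so each anchor's 6 components
# (x1, y1, x2, y2, sin, cos, in my reading of infer.py) line up with
# the matching score channel.
boxes = box_map.reshape(num_anchors, 6, h, w)
scores = score_map.reshape(num_anchors, h, w)
```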
Can you confirm whether these assumptions are correct? Also, has NMS already been applied to these outputs, or will I have to write an NMS function myself? If so, how would I go about it?
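In case it helps clarify what I mean, this is the kind of greedy NMS I could start from. It only handles axis-aligned boxes; I understand rotated boxes would need a polygon IoU instead (e.g. via cv2.rotatedRectangleIntersection or shapely), so this is just a sketch of the suppression loop:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over axis-aligned boxes of shape (N, 4) as (x1, y1, x2, y2).

    Note: ignores rotation; a rotated variant would swap the rectangle
    intersection below for a polygon intersection.
    """
    order = scores.argsort()[::-1]  # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle with the current top-scoring box.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much.
        order = rest[iou <= iou_thresh]
    return keep
```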
Or is there a better way to run inference outside the container than my ONNX approach?