TensorRT segmentation probabilities scores #887
Comments
@user41pp Hi, exporting to ONNX with the score map (without argmax) is OK and easy to implement, but setting thresholds for each class at runtime is a little confusing. Maybe you could provide sample code or a PR to show your thoughts.
Hi, sorry for the late reply. In the simplest form, I am asking for exactly that: adding the option (or making it the default) to remove the argmax layer and instead directly output the softmax values. Here is a screenshot of the last layers of a UNET model (configs and commands listed further below) with that change: This would solve our problem already, and if that's easy, then this is the way to go. The "dynamic threshold at runtime" part does not have to be done by mmdeploy.

However, outputting the softmax layer would grow the output tensor by a factor of the number of classes the model has (e.g. the current output shape is 1024 x 512; if we output the softmax values, the output shape would be 1024 x 512 x num_classes). Because of the potentially huge number of classes, maybe it makes sense not to output all the scores but only the top-k scores and corresponding class indices. For the most common case, k=1, we would output the top softmax score along with its class index. The returned tensor would then be a tuple containing (argmax, softmax[argmax]).

MMdeploy config: MMseg config: MMseg model checkpoint: Deploy command:
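The top-k idea above can be sketched in plain numpy. This is only an illustration of the requested post-processing (the function name and shapes are assumptions, not mmdeploy API); in an actual export it would be expressed with ONNX Softmax/TopK nodes instead of the final argmax:

```python
import numpy as np

def topk_postprocess(seg_logits: np.ndarray, k: int = 1):
    """Hypothetical post-processing: replace the final argmax with a
    top-k over softmax scores.

    seg_logits: raw model output, shape (num_classes, H, W).
    Returns (indices, scores), each of shape (k, H, W): the top-k
    class ids per pixel and their softmax probabilities.
    """
    # numerically stable softmax over the class axis
    z = seg_logits - seg_logits.max(axis=0, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=0, keepdims=True)
    # top-k class ids per pixel, highest score first
    idx = np.argsort(probs, axis=0)[::-1][:k]          # (k, H, W)
    scores = np.take_along_axis(probs, idx, axis=0)    # (k, H, W)
    return idx, scores
```

For k=1 this returns exactly the (argmax, softmax[argmax]) pair described above, so the output only doubles in size instead of growing by num_classes.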
We use TensorRT as shown here: https://github.com/open-mmlab/mmdeploy/blob/master/demo/csrc/image_segmentation.cpp Specifically, we use an export config similar to the TensorRT export config linked above (segmentation_tensorrt-fp16_dynamic-512x1024-2048x2048.py; the end2end.onnx file is an intermediate artifact used while creating the TensorRT engine).
@user41pp Hi, do you have time to send us a PR for this feature? It seems not so complicated. It includes exporting ONNX without argmax.
Hi, is this problem solved?
@yuanyuangoo Hi, we have given a solution above; not sure whether it solved the problem.
@RunningLeon `[2022-11-21 18:20:59.885] [mmdeploy] [info] [inference.cpp:50] ["img"] <- ["img"]` This is a check you implement here:
@kacifer999 hi, maybe this #1379 is what you want. |
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now. |
Describe the feature
MMSegmentation TensorRT deployment: output float probabilities/scores instead of int class labels.
In segmentor.h we have:
`int* mask; ///< segmentation mask of the input image, in which mask[i * width + j] indicates the label id of pixel at (i, j)`
Request: change `mask` to a 3D array of shape (width, height, num_classes) holding float probabilities for all classes.
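With such a float score output, a caller could apply per-class thresholds at runtime. A minimal sketch of what that consumer-side logic might look like (all names here are hypothetical, and the (H, W, num_classes) score layout is an assumption, not the mmdeploy API):

```python
import numpy as np

def threshold_mask(scores: np.ndarray, thresholds: np.ndarray,
                   ignore_label: int = 255) -> np.ndarray:
    """scores: (H, W, num_classes) softmax probabilities per pixel.
    thresholds: per-class minimum probability, shape (num_classes,).

    Returns an (H, W) int label map, like the current `mask`, except
    that pixels whose best score falls below that class's threshold
    are set to ignore_label.
    """
    labels = scores.argmax(axis=-1)                                   # (H, W)
    best = np.take_along_axis(scores, labels[..., None], axis=-1)[..., 0]
    mask = labels.astype(np.int64)
    mask[best < thresholds[labels]] = ignore_label
    return mask
```

This keeps the thresholding entirely outside mmdeploy, as discussed in the comments: the deployed model only needs to emit scores.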
This would require modifications in the export pipeline as well.
Motivation
Related resources
https://github.com/open-mmlab/mmdeploy/blob/master/csrc/mmdeploy/apis/c/mmdeploy/segmentor.h