[Feature] NMS update #957

Merged May 31, 2021 (31 commits)

Changes from 1 commit

Commits
8b1f267
Add score_threshold and max_num to NMS
SemyonBevzuk Apr 15, 2021
925fae1
Fix codestyle
SemyonBevzuk Apr 15, 2021
3cee61c
Merge branch 'master' into nms_update
SemyonBevzuk Apr 16, 2021
fd8b082
Fix codestyle
SemyonBevzuk Apr 16, 2021
a6eae23
Fix inds in nms
SemyonBevzuk Apr 21, 2021
ae469a7
Update nms docstring
SemyonBevzuk Apr 21, 2021
23a465b
Move score_threshold and max_num arguments
SemyonBevzuk Apr 21, 2021
61c7286
Fix args order in docstring
SemyonBevzuk Apr 21, 2021
de02886
Merge branch 'master' into nms_update
RunningLeon Apr 22, 2021
8b4e7de
fix lint of c++ file
RunningLeon Apr 22, 2021
a9e45d6
Merge remote-tracking branch 'upstream/master' into nms_update
SemyonBevzuk Apr 22, 2021
5c083b8
Remove torch.onnx.is_in_onnx_export() and add max_num to batched_nms …
SemyonBevzuk Apr 23, 2021
79e6941
Rewrote max_num handling in NMSop.symbolic
SemyonBevzuk Apr 26, 2021
f8b1a7b
Added processing max_output_boxes_per_class when exporting to TensorRT
SemyonBevzuk Apr 27, 2021
9dba3c5
Added score_threshold and max_num for NMS in test_onnx.py and test_te…
SemyonBevzuk Apr 27, 2021
1355f5c
Remove _is_value(max_num)
SemyonBevzuk Apr 30, 2021
bc7fbb7
Merge remote-tracking branch 'upstream/master' into nms_update
SemyonBevzuk Apr 30, 2021
d2702b8
fix ci errors with torch==1.3.1
RunningLeon May 6, 2021
dc93be1
Update test_batched_nms in test_nms.py
SemyonBevzuk May 7, 2021
95f1bbb
Added tests for preprocess_onnx
SemyonBevzuk May 14, 2021
6d415c9
Fix
SemyonBevzuk May 14, 2021
9447eb6
Moved 'test_tensorrt_preprocess.py' and 'preprocess', updated 'remove…
SemyonBevzuk May 14, 2021
e735646
Update mmcv/tensorrt/__init__.py
SemyonBevzuk May 17, 2021
9d9c169
Fix segfault torch==1.3.1 (remove onnx.checker.check_model)
SemyonBevzuk May 18, 2021
8b03816
Returned 'onnx.checker.check_model' with torch version check
SemyonBevzuk May 19, 2021
5cd8a54
Changed torch version from 1.3.1 to 1.4.0
SemyonBevzuk May 19, 2021
5f74a4c
update version check
RunningLeon May 21, 2021
7ba082e
remove check for onnx
RunningLeon May 21, 2021
c9ac555
merge master and fix conflicts
RunningLeon May 24, 2021
f9fc45e
Merge branch 'nms_update' of github.com:SemyonBevzuk/mmcv into nms_up…
RunningLeon May 24, 2021
85576ea
merge master and resolve conflicts
RunningLeon May 25, 2021
mmcv/ops/nms.py (25 changes: 14 additions & 11 deletions)
@@ -14,21 +14,24 @@
 class NMSop(torch.autograd.Function):
 
     @staticmethod
-    def forward(ctx, bboxes, scores, iou_threshold, score_threshold, max_num,
-                offset):
-        valid_mask = scores > score_threshold
-        bboxes, scores = bboxes[valid_mask], scores[valid_mask]
-        valid_inds = torch.nonzero(valid_mask, as_tuple=False).squeeze(dim=1)
+    def forward(ctx, bboxes, scores, iou_threshold, offset, score_threshold,
+                max_num):
+        if torch.onnx.is_in_onnx_export():
+            valid_mask = scores > score_threshold
+            bboxes, scores = bboxes[valid_mask], scores[valid_mask]
+            valid_inds = torch.nonzero(
+                valid_mask, as_tuple=False).squeeze(dim=1)
 
         inds = ext_module.nms(
             bboxes, scores, iou_threshold=float(iou_threshold), offset=offset)
 
-        inds = valid_inds[inds[:max_num]]
+        if torch.onnx.is_in_onnx_export():
+            inds = valid_inds[inds[:max_num]]
         return inds
 
     @staticmethod
-    def symbolic(g, bboxes, scores, iou_threshold, score_threshold, max_num,
-                 offset):
+    def symbolic(g, bboxes, scores, iou_threshold, offset, score_threshold,
+                 max_num):
         from ..onnx import is_custom_op_loaded
         has_custom_op = is_custom_op_loaded()
         # TensorRT nms plugin is aligned with original nms in ONNXRuntime
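The hunk above moves the `score_threshold` filtering and `max_num` truncation behind `torch.onnx.is_in_onnx_export()`, keeping the index bookkeeping through `valid_inds` so the returned indices still point into the unfiltered input. A minimal NumPy sketch of that bookkeeping (`nms_sketch` is a hypothetical helper, not the mmcv C++/CUDA kernel, and it applies the filtering unconditionally rather than only during ONNX export):

```python
import numpy as np

def nms_sketch(boxes, scores, iou_threshold, offset=0,
               score_threshold=0, max_num=-1):
    """Greedy NMS sketch mirroring the updated argument handling."""
    # Pre-filter by score and remember the surviving original indices,
    # like the valid_inds bookkeeping in NMSop.forward's export branch.
    valid_inds = np.nonzero(scores > score_threshold)[0]
    boxes, scores = boxes[valid_inds], scores[valid_inds]

    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the kept box against all remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1 + offset) * \
            np.maximum(0.0, yy2 - yy1 + offset)
        area_i = (boxes[i, 2] - boxes[i, 0] + offset) * \
            (boxes[i, 3] - boxes[i, 1] + offset)
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0] + offset) * \
            (boxes[order[1:], 3] - boxes[order[1:], 1] + offset)
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # drop overlapping boxes
    keep = np.array(keep, dtype=np.int64)
    if max_num >= 0:
        keep = keep[:max_num]  # truncate to the requested maximum
    return valid_inds[keep]  # map back to indices in the original input
```

Because `valid_inds[keep]` undoes the score filtering, callers see indices into the boxes they passed in, exactly what the `valid_inds` lines in the export branch preserve.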
@@ -105,7 +108,7 @@ def symbolic(g, boxes, scores, iou_threshold, sigma, min_score, method,
 
 
 @deprecated_api_warning({'iou_thr': 'iou_threshold'})
-def nms(boxes, scores, iou_threshold, score_threshold=0, max_num=-1, offset=0):
+def nms(boxes, scores, iou_threshold, offset=0, score_threshold=0, max_num=-1):
     """Dispatch to either CPU or GPU NMS implementations.
 
     The input can be either torch tensor or numpy array. GPU NMS will be used
@@ -160,8 +163,8 @@ def nms(boxes, scores, iou_threshold, score_threshold=0, max_num=-1, offset=0):
     else:
         if max_num < 0:
             max_num = boxes.shape[0]
-        inds = NMSop.apply(boxes, scores, iou_threshold, score_threshold,
-                           max_num, offset)
+        inds = NMSop.apply(boxes, scores, iou_threshold, offset,
+                           score_threshold, max_num)
     dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1)
     if is_numpy:
         dets = dets.cpu().numpy()
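The commit shown moves the two new keyword arguments after `offset`, so pre-existing positional calls such as `nms(boxes, scores, iou_threshold, offset)` keep meaning what they did before the feature landed. A sketch with stub signatures showing why the order matters (`nms_before`/`nms_after` are illustrative names, not mmcv functions; they only record how arguments bind):

```python
# Stub mirroring the signature before the reorder: the new arguments
# sit between iou_threshold and offset.
def nms_before(boxes, scores, iou_threshold, score_threshold=0, max_num=-1,
               offset=0):
    return dict(score_threshold=score_threshold, max_num=max_num,
                offset=offset)

# Stub mirroring the reordered signature: offset keeps its old position.
def nms_after(boxes, scores, iou_threshold, offset=0, score_threshold=0,
              max_num=-1):
    return dict(score_threshold=score_threshold, max_num=max_num,
                offset=offset)

# A caller that historically passed offset as the 4th positional argument:
print(nms_before(None, None, 0.5, 1))
# {'score_threshold': 1, 'max_num': -1, 'offset': 0}  <- offset silently
#                                          reinterpreted as score_threshold
print(nms_after(None, None, 0.5, 1))
# {'score_threshold': 0, 'max_num': -1, 'offset': 1}  <- still offset
```

Passing the new arguments by keyword (`nms(boxes, scores, 0.5, score_threshold=0.05, max_num=100)`) sidesteps the ordering question entirely.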