
Add rv1126 yolov3 support to sdk #1280

Merged: 26 commits merged into open-mmlab:master on Nov 22, 2022

Conversation

AllentDan
Member

After #1203 and #1238

@AllentDan AllentDan added the WIP label Nov 1, 2022
@AllentDan AllentDan changed the title [WIIP] rv1126 yolo sdk [WIIP] rv1126 yolov3 sdk Nov 1, 2022
@AllentDan AllentDan changed the title [WIIP] rv1126 yolov3 sdk [WIP] rv1126 yolov3 sdk Nov 1, 2022
@AllentDan AllentDan changed the title [WIP] rv1126 yolov3 sdk [WIP] Add rv1126 yolov3 && yolov5 support to sdk Nov 2, 2022
@AllentDan AllentDan changed the title [WIP] Add rv1126 yolov3 && yolov5 support to sdk Add rv1126 yolov3 && yolov5 support to sdk Nov 7, 2022
@AllentDan AllentDan requested a review from lvhan028 November 7, 2022 05:20
@AllentDan AllentDan removed the WIP label Nov 7, 2022
@lvhan028 lvhan028 requested a review from lzhangzz November 8, 2022 09:34
@AllentDan AllentDan changed the title Add rv1126 yolov3 && yolov5 support to sdk Add rv1126 yolov3 support to sdk Nov 10, 2022
@AllentDan
Member Author

@lvhan028 @lzhangzz Any comments?

@lvhan028
Collaborator

@lvhan028 @lzhangzz Any comments?

It's in testing.

@lvhan028
Collaborator

I have to make several modifications when converting the yolov3 torch model to an rknn-int8 model.

  • Disable normalization in the yolov3 model's config by changing std=[255.0, 255.0, 255.0] to std=[1.0, 1.0, 1.0]
  • Enable do_quantization=True in rknn.py and set mean_value and std_value to [[0, 0, 0]] and [[255.0, 255.0, 255.0]], respectively
  • Uncomment the partition_config of yolov3 & yolox in detection_rknn_static-320x320.py

I don't think users will be willing to put up with this.

Let's try to eliminate the manual part.

  • Can we move the normalization config from the torch model config to the deploy config according to do_quantization?
  • How about automatically choosing the proper partition config according to target_platform and the torch model config?
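
A deploy config along the lines of the second proposal might look roughly like the sketch below; the key layout (common_config, quantization_config, mean_values, std_values) mirrors the style of mmdeploy's rknn backend config but should be read as an illustrative assumption, not the final API.

    # Illustrative sketch only: normalization is expressed as rknn parameters
    # in the deploy config, instead of forcing std=[1.0, 1.0, 1.0] into the
    # torch model config by hand.
    backend_config = dict(
        type='rknn',
        common_config=dict(
            target_platform='rv1126',
            # mean/std that would otherwise live in the model's Normalize step
            mean_values=[[0, 0, 0]],
            std_values=[[255.0, 255.0, 255.0]]),
        quantization_config=dict(do_quantization=True, dataset=None))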

@AllentDan
Member Author

AllentDan commented Nov 15, 2022

I have to make several modifications when converting the yolov3 torch model to an rknn-int8 model.

  • Disable normalization in the yolov3 model's config by changing std=[255.0, 255.0, 255.0] to std=[1.0, 1.0, 1.0]
  • Enable do_quantization=True in rknn.py and set mean_value and std_value to [[0, 0, 0]] and [[255.0, 255.0, 255.0]], respectively
  • Uncomment the partition_config of yolov3 & yolox in detection_rknn_static-320x320.py

I don't think users will be willing to put up with this.

Let's try to eliminate the manual part.

  • Can we move the normalization config from the torch model config to the deploy config according to do_quantization?
  • How about automatically choosing the proper partition config according to target_platform and the torch model config?

The code now automatically moves the normalization step into the rknn model when quantization is enabled.

I tried to make the code handle the partition part automatically as well, but failed: the resulting expression was not elegant, and it would break the convention established earlier.
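
Conceptually, the automatic handling amounts to something like the sketch below: when do_quantization is enabled, the Normalize transform's mean/std are copied into the rknn backend config and the transform itself is neutralized, so the SDK pipeline no longer normalizes. The function name and config locations here are illustrative, not the actual implementation.

    # Illustrative sketch, operating on plain dict configs.
    def move_normalize_to_rknn(model_cfg: dict, deploy_cfg: dict) -> None:
        backend_cfg = deploy_cfg['backend_config']
        quant_cfg = backend_cfg.get('quantization_config', {})
        if not quant_cfg.get('do_quantization', False):
            return  # fp16/fp32 path: keep normalization in the pipeline
        for transform in model_cfg.get('test_pipeline', []):  # assumed location
            if transform.get('type') == 'Normalize':
                # Hand mean/std over to the rknn conversion ...
                backend_cfg.setdefault('common_config', {}).update(
                    mean_values=[list(transform['mean'])],
                    std_values=[list(transform['std'])])
                # ... and make the pipeline's Normalize a no-op.
                transform['mean'] = [0.0, 0.0, 0.0]
                transform['std'] = [1.0, 1.0, 1.0]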

@AllentDan AllentDan requested a review from grimoire November 15, 2022 07:53
Review comments on csrc/mmdeploy/codebase/mmdet/yolo_head.h and csrc/mmdeploy/codebase/mmdet/yolo_head.cpp (outdated, resolved)
@lvhan028
Collaborator

How about deriving another task from class ObjectDetection for rknn?

@AllentDan
Member Author

How about deriving another task from class ObjectDetection for rknn?

Why? Actually, I also used other backends with partition configs, which is convenient for debugging. This code is more about the partition settings than just the RKNN backend.

@lvhan028
Collaborator

How about deriving another task from class ObjectDetection for rknn?

Why? Actually, I also used other backends with partition configs, which is convenient for debugging. This code is more about the partition settings than just the RKNN backend.

I found that the API of BaseTask.create_input has been changed to the following:

    @abstractmethod
    def create_input(self,
                     imgs: Union[str, np.ndarray, Sequence],
                     input_shape: Optional[Sequence[int]] = None,
                     backend: Optional[Backend] = None,
                     **kwargs) -> Tuple[Dict, torch.Tensor]:

Making backend an argument of create_input seems a bit odd.
So I proposed deriving another task that reimplements create_input instead of changing the base class API.
But I realized this proposal would require a derived task in every codebase that deploys to rknn, which is bad.
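
For the record, that rejected alternative would have looked roughly like the sketch below (the import path and config layout are assumptions); a similar subclass would have to be written for every codebase.

    # Illustrative sketch of the rejected per-backend task.
    from mmdeploy.codebase.mmdet.deploy.object_detection import ObjectDetection


    class RKNNObjectDetection(ObjectDetection):
        """Hypothetical subclass; every codebase deploying to rknn would need
        its own copy of something like this."""

        def create_input(self, imgs, input_shape=None, **kwargs):
            # Drop Normalize so the quantized rknn model receives raw pixels,
            # then reuse the parent implementation for everything else.
            pipeline = self.model_cfg.data.test.pipeline  # assumed layout
            self.model_cfg.data.test.pipeline = [
                t for t in pipeline if t.get('type') != 'Normalize'
            ]
            return super().create_input(imgs, input_shape, **kwargs)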

@AllentDan
Member Author

@grimoire The processing has now been moved into the base task, so no new arguments are needed for create_input or process_model_cfg.
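
In other words, the base task can derive what it needs from the deploy config it already holds rather than growing a new argument; a minimal sketch, assuming mmdeploy's get_backend utility and Backend enum:

    from mmdeploy.utils import Backend, get_backend


    def is_rknn_deploy(deploy_cfg) -> bool:
        """Illustrative check that the base task can run internally, keeping
        the signatures of create_input and process_model_cfg unchanged."""
        return get_backend(deploy_cfg) == Backend.RKNN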

@lvhan028 lvhan028 merged commit 4dd4d48 into open-mmlab:master Nov 22, 2022
triple-Mu pushed a commit to triple-Mu/mmdeploy that referenced this pull request Dec 5, 2022
* add yolov3 head to SDK

* add yolov5 head to SDK

* fix export-info and lint, add reverse check

* fix lint

* fix export info for yolo heads

* add output_names to partition_config

* fix typo

* config

* normalize config

* fix

* refactor config

* fix lint and doc

* c++ form

* resolve comments

* fix CI

* fix CI

* fix CI

* float strides anchors

* refine pipeline of rknn-int8

* config

* rename func

* refactor

* rknn wrapper dict and fix typo

* rknn wrapper output update,  mmcls use end2end type

* fix typo