Add script to export TorchScript model #211

Merged · 3 commits · Apr 13, 2021
4 changes: 2 additions & 2 deletions README.md
@@ -251,10 +251,10 @@ NanoDet provides C++ and Android demos based on the ncnn library.

To convert a NanoDet PyTorch model to ncnn, use this path: PyTorch -> ONNX -> ncnn.

- To export the ONNX model, run `tools/export.py`.
+ To export the ONNX model, run `tools/export_onnx.py`.

```shell script
- python tools/export.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
+ python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```

Then use [onnx-simplifier](https://github.com/daquexian/onnx-simplifier) to simplify the ONNX structure.
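For reference, a minimal sketch of that simplification step using onnx-simplifier's Python API; the file names here are illustrative placeholders, not paths from this repo:

```python
import onnx
from onnxsim import simplify  # pip install onnx-simplifier

# Placeholder input/output paths for illustration.
model = onnx.load("nanodet.onnx")
model_simplified, ok = simplify(model)
assert ok, "simplified ONNX model failed validation"
onnx.save(model_simplified, "nanodet-sim.onnx")
```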
32 changes: 32 additions & 0 deletions demo_libtorch/README.md
@@ -0,0 +1,32 @@
# NanoDet TorchScript / LibTorch Demo

This folder provides NanoDet inference code for LibTorch.

## Install dependencies

This project needs OpenCV and CMake to work.

Install CMake and OpenCV using a package manager of your choice. For example, the following command installs both on Ubuntu:

```bash
sudo apt install cmake libopencv-dev
```

Also, you'll need to download LibTorch. Refer to [this page](https://pytorch.org/cppdocs/installing.html) for more info.
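If the Python `torch` package is already installed, its bundled C++ distribution can often stand in for a separate LibTorch download. A small sketch, assuming a PyTorch build recent enough to expose `torch.utils.cmake_prefix_path`:

```python
import torch

# The CMake config shipped inside the installed PyTorch package;
# its value can be passed to CMake as -DCMAKE_PREFIX_PATH.
print(torch.utils.cmake_prefix_path)
```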

## Convert model

Export the TorchScript model using `tools/export_torchscript.py`:

```shell
python ./tools/export_torchscript.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH} --input_shape ${INPUT_SHAPE}
```

Here `${INPUT_SHAPE}` is a comma-separated pair such as `320,320`; if it is omitted, the script falls back to the input size from the config file.
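Before moving on to the C++ build, a quick sanity check that the traced model loads and runs can save time. A sketch, assuming the script's default output name `nanodet.torchscript.pth` and a 320x320 export shape:

```python
import torch

# Assumed path: the default --out_path of tools/export_torchscript.py.
model = torch.jit.load("nanodet.torchscript.pth")
model.eval()

dummy = torch.zeros(1, 3, 320, 320)  # must match the shape used at export time
with torch.no_grad():
    preds = model(dummy)
print("Forward pass OK")
```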
## Build

### Linux
```shell
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch ..
make
```
4 changes: 2 additions & 2 deletions demo_mnn/README.md
@@ -23,7 +23,7 @@ Please follow the [official document](https://www.yuque.com/mnn/en/build_linux)
1. Export ONNX model

```shell
- python ./tools/export.py
+ python ./tools/export_onnx.py
```

2. Use *onnx-simplifier* to simplify it
@@ -39,7 +39,7 @@ Please follow the [official document](https://www.yuque.com/mnn/en/build_linux)

It should be noted that the input size does not have to be 320; it can be any integer multiple of the strides,
- since NanoDet is anchor-free. We can adapt the shape of `dummy_input` in *./tools/export.py* to get ONNX and MNN models
+ since NanoDet is anchor-free. We can adapt the shape of `dummy_input` in *./tools/export_onnx.py* to get ONNX and MNN models
with different input sizes.
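As a sketch of that constraint (assuming the largest head stride is 32, as in the default NanoDet configs; verify against your own config):

```python
import torch

stride = 32               # assumed largest downsampling stride of the model
width, height = 416, 256  # any multiples of the stride are valid input sizes
assert width % stride == 0 and height % stride == 0

# This is the tensor whose shape you would adapt in the export script.
dummy_input = torch.zeros(1, 3, height, width)
```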

Here are converted models: [Baidu Disk](https://pan.baidu.com/s/1DE4_yo0xez6Wd95xv7NnDQ) (extraction code: *5mfa*),
2 changes: 1 addition & 1 deletion demo_openvino/README.md
@@ -61,7 +61,7 @@ source /opt/intel/openvino_2021/bin/setupvars.sh
1. Export ONNX model

```shell
- python ./tools/export.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
+ python ./tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```

2. Use *onnx-simplifier* to simplify it
File renamed without changes: tools/export.py → tools/export_onnx.py
70 changes: 70 additions & 0 deletions tools/export_torchscript.py
@@ -0,0 +1,70 @@
```python
import os
import argparse

import torch

from nanodet.model.arch import build_model
from nanodet.util import Logger, cfg, load_config, load_model_weight


def main(config, model_path: str, output_path: str, input_shape=(320, 320)):
    logger = Logger(local_rank=-1, save_dir=config.save_dir, use_tensorboard=False)

    # Create model and load weights
    model = build_model(config.model)
    checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)
    load_model_weight(model, checkpoint, logger)

    # Convert backbone weights for RepVGG models
    if config.model.arch.backbone.name == 'RepVGG':
        deploy_config = config.model
        deploy_config.arch.backbone.update({'deploy': True})
        deploy_model = build_model(deploy_config)
        from nanodet.model.backbone.repvgg import repvgg_det_model_convert
        model = repvgg_det_model_convert(model, deploy_model)

    # TorchScript: trace the model with a dummy input
    with torch.no_grad():
        dummy_input = torch.zeros(1, 3, input_shape[0], input_shape[1])  # Batch size = 1
        model.eval().cpu()
        model_traced = torch.jit.trace(model, example_inputs=dummy_input).eval()
        model_traced.save(output_path)
        print('Finished export to TorchScript')


def parse_args():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        description='Convert .pth model weights to TorchScript.')
    parser.add_argument('--cfg_path',
                        type=str,
                        help='Path to .yml config file.')
    parser.add_argument('--model_path',
                        type=str,
                        default=None,
                        help='Path to .ckpt model.')
    parser.add_argument('--out_path',
                        type=str,
                        default='nanodet.torchscript.pth',
                        help='TorchScript model output path.')
    parser.add_argument('--input_shape',
                        type=str,
                        default=None,
                        help='Model input shape as a comma-separated pair, e.g. "320,320".')
    return parser.parse_args()


if __name__ == '__main__':
    args = parse_args()
    cfg_path = args.cfg_path
    model_path = args.model_path
    out_path = args.out_path
    input_shape = args.input_shape
    load_config(cfg, cfg_path)
    if input_shape is None:
        # Fall back to the input size from the config file
        input_shape = cfg.data.train.input_size
    else:
        input_shape = tuple(map(int, input_shape.split(',')))
        assert len(input_shape) == 2
    if model_path is None:
        # Default to the best checkpoint under the config's save_dir
        model_path = os.path.join(cfg.save_dir, "model_best/model_best.ckpt")
    main(cfg, model_path, out_path, input_shape)
    print("Model saved to:", out_path)
```