unify onnx examples prepare model scripts (#1187)
Signed-off-by: Sun, Xuehao <[email protected]>
XuehaoSun authored Sep 13, 2023
1 parent 06cc382 commit 5ecb134
Showing 143 changed files with 4,156 additions and 1,540 deletions.
@@ -1,35 +1,26 @@
# Step-by-Step

This example loads a face recognition model from the [ONNX Model Zoo](https://github.com/onnx/models) and confirms its accuracy and speed on the [Refined MS-Celeb-1M](https://s3.amazonaws.com/onnx-model-zoo/arcface/dataset/faces_ms1m_112x112.zip) dataset.

# Prerequisite

## 1. Environment

```shell
pip install neural-compressor
pip install -r requirements.txt
```

> Note: Validated ONNX Runtime [Version](/docs/source/installation_guide.md#validated-software-environment).
## 2. Prepare Model
Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx
```

Convert opset version to 11 for more quantization capability.

```python
import onnx
from onnx import version_converter

model = onnx.load('arcfaceresnet100-8.onnx')
model = version_converter.convert_version(model, 11)
onnx.save_model(model, 'arcfaceresnet100-11.onnx')
```

Alternatively, run the bundled `prepare_model.py`, which downloads the model and converts the opset version in one step:

```shell
python prepare_model.py --input_model='arcfaceresnet100-8.onnx' --output_model='arcfaceresnet100-11.onnx'
```
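
Either way, the resulting file can be sanity-checked by loading it and printing its opset imports. This is a small illustrative snippet, not part of the example scripts:

```python
import onnx

# Inspect the opset(s) declared by the converted model.
model = onnx.load('arcfaceresnet100-11.onnx')
print([(opset.domain, opset.version) for opset in model.opset_import])
```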

## 3. Prepare Dataset

Download dataset [Refined MS-Celeb-1M](https://s3.amazonaws.com/onnx-model-zoo/arcface/dataset/faces_ms1m_112x112.zip).
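
If `wget` and `unzip` are available, the archive can be fetched and unpacked with something like the following (a minimal sketch; adjust the target directory to wherever your evaluation scripts expect the data):

```shell
wget https://s3.amazonaws.com/onnx-model-zoo/arcface/dataset/faces_ms1m_112x112.zip
unzip faces_ms1m_112x112.zip -d faces_ms1m_112x112
```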

# Run
@@ -0,0 +1,71 @@
import argparse
import os
import sys
from urllib import request

import onnx
from onnx import version_converter

MODEL_URL = "https://github.com/onnx/models/raw/main/vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx"
MAX_TIMES_RETRY_DOWNLOAD = 5


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_model", type=str, required=False, default='arcfaceresnet100-8.onnx')
    parser.add_argument("--output_model", type=str, required=True)
    return parser.parse_args()


def progressbar(cur, total=100):
    percent = '{:.2%}'.format(cur / total)
    sys.stdout.write("\r[%-100s] %s" % ('#' * int(cur), percent))
    sys.stdout.flush()


def schedule(blocknum, blocksize, totalsize):
    if totalsize == 0:
        percent = 0
    else:
        percent = min(1.0, blocknum * blocksize / totalsize) * 100
    progressbar(percent)


def download_model(url, model_name, retry_times=5):
    if os.path.isfile(model_name):
        print(f"{model_name} exists, skip download")
        return True

    print("download model...")
    retries = 0
    while retries < retry_times:
        try:
            request.urlretrieve(url, model_name, schedule)
            break
        except KeyboardInterrupt:
            return False
        except:
            retries += 1
            print(f"Download failed{', Retry downloading...' if retries < retry_times else '!'}")
    return retries < retry_times


def export_model(input_model, output_model):
    # Convert opset version to 14 for more quantization capability.
    print("\nexport model...")
    model = onnx.load(input_model)
    model = version_converter.convert_version(model, 14)
    onnx.save_model(model, output_model)
    assert os.path.exists(output_model), f"Export failed! {output_model} doesn't exist!"


def prepare_model(input_model, output_model):
    # Download model from [ONNX Model Zoo](https://github.com/onnx/models).
    is_download_successful = download_model(MODEL_URL, input_model, MAX_TIMES_RETRY_DOWNLOAD)
    if is_download_successful:
        export_model(input_model, output_model)


if __name__ == "__main__":
    args = parse_arguments()
    prepare_model(args.input_model, args.output_model)
@@ -1,35 +1,26 @@
# Step-by-Step

This example loads a model converted from the [ONNX Model Zoo](https://github.com/onnx/models) and confirms its accuracy and speed on the [Emotion FER dataset](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data).

# Prerequisite

## 1. Environment

```shell
pip install neural-compressor
pip install -r requirements.txt
```

> Note: Validated ONNX Runtime [Version](/docs/source/installation_guide.md#validated-software-environment).
## 2. Prepare Model
Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
```

Convert opset version to 12 for more quantization capability.

```python
import onnx
from onnx import version_converter

model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
```

Alternatively, run the bundled `prepare_model.py`, which downloads the model and converts the opset version in one step:

```shell
python prepare_model.py --input_model='emotion-ferplus-8.onnx' --output_model='emotion-ferplus-12.onnx'
```
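
To quickly verify that the exported model loads, one can create an ONNX Runtime session and print its input signature. This is an illustrative check, assuming `onnxruntime` from the example requirements is installed:

```python
import onnxruntime as ort

# Build a CPU session and list the model's input names and shapes.
session = ort.InferenceSession('emotion-ferplus-12.onnx', providers=['CPUExecutionProvider'])
print([(inp.name, inp.shape) for inp in session.get_inputs()])
```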

## 3. Prepare Dataset

Download dataset [Emotion FER dataset](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data).
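
The dataset is hosted on Kaggle, so it normally has to be downloaded with a Kaggle account. With the Kaggle CLI installed and credentials configured, something like the following should fetch and unpack it (a sketch only; file names may differ):

```shell
kaggle competitions download -c challenges-in-representation-learning-facial-expression-recognition-challenge
unzip challenges-in-representation-learning-facial-expression-recognition-challenge.zip -d fer2013
```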

# Run
@@ -0,0 +1,71 @@
import argparse
import os
import sys
from urllib import request

import onnx
from onnx import version_converter

MODEL_URL = "https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx"
MAX_TIMES_RETRY_DOWNLOAD = 5


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_model", type=str, required=False, default='emotion-ferplus-8.onnx')
    parser.add_argument("--output_model", type=str, required=True)
    return parser.parse_args()


def progressbar(cur, total=100):
    percent = '{:.2%}'.format(cur / total)
    sys.stdout.write("\r[%-100s] %s" % ('#' * int(cur), percent))
    sys.stdout.flush()


def schedule(blocknum, blocksize, totalsize):
    if totalsize == 0:
        percent = 0
    else:
        percent = min(1.0, blocknum * blocksize / totalsize) * 100
    progressbar(percent)


def download_model(url, model_name, retry_times=5):
    if os.path.isfile(model_name):
        print(f"{model_name} exists, skip download")
        return True

    print("download model...")
    retries = 0
    while retries < retry_times:
        try:
            request.urlretrieve(url, model_name, schedule)
            break
        except KeyboardInterrupt:
            return False
        except:
            retries += 1
            print(f"Download failed{', Retry downloading...' if retries < retry_times else '!'}")
    return retries < retry_times


def export_model(input_model, output_model):
    # Convert opset version to 14 for more quantization capability.
    print("\nexport model...")
    model = onnx.load(input_model)
    model = version_converter.convert_version(model, 14)
    onnx.save_model(model, output_model)
    assert os.path.exists(output_model), f"Export failed! {output_model} doesn't exist!"


def prepare_model(input_model, output_model):
    # Download model from [ONNX Model Zoo](https://github.com/onnx/models).
    is_download_successful = download_model(MODEL_URL, input_model, MAX_TIMES_RETRY_DOWNLOAD)
    if is_download_successful:
        export_model(input_model, output_model)


if __name__ == "__main__":
    args = parse_arguments()
    prepare_model(args.input_model, args.output_model)
@@ -1,35 +1,26 @@
# Step-by-Step

This example loads a model converted from the [ONNX Model Zoo](https://github.com/onnx/models) and confirms its accuracy and speed on the [WIDER FACE dataset (Validation Images)](http://shuoyang1213.me/WIDERFACE/).

# Prerequisite

## 1. Environment

```shell
pip install neural-compressor
pip install -r requirements.txt
```

> Note: Validated ONNX Runtime [Version](/docs/source/installation_guide.md#validated-software-environment).
## 2. Prepare Model
Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-640.onnx
```

Convert opset version to 12 for more quantization capability.

```python
import onnx
from onnx import version_converter

model = onnx.load('version-RFB-640.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-640-12.onnx')
```

Alternatively, run the bundled `prepare_model.py`, which downloads the model and converts the opset version in one step:

```shell
python prepare_model.py --input_model='version-RFB-640.onnx' --output_model='version-RFB-640-12.onnx'
```

## 3. Prepare Dataset

Download dataset [WIDER FACE dataset (Validation Images)](http://shuoyang1213.me/WIDERFACE/).
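
The validation images are distributed from the project page rather than a direct link, so download them manually; afterwards, unpacking typically looks like this (assuming the archive is named `WIDER_val.zip`):

```shell
unzip WIDER_val.zip -d WIDER_val
```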

# Run
@@ -0,0 +1,71 @@
import argparse
import os
import sys
from urllib import request

import onnx
from onnx import version_converter

MODEL_URL = "https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx"
MAX_TIMES_RETRY_DOWNLOAD = 5


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_model", type=str, required=False, default='version-RFB-640.onnx')
    parser.add_argument("--output_model", type=str, required=True)
    return parser.parse_args()


def progressbar(cur, total=100):
    percent = '{:.2%}'.format(cur / total)
    sys.stdout.write("\r[%-100s] %s" % ('#' * int(cur), percent))
    sys.stdout.flush()


def schedule(blocknum, blocksize, totalsize):
    if totalsize == 0:
        percent = 0
    else:
        percent = min(1.0, blocknum * blocksize / totalsize) * 100
    progressbar(percent)


def download_model(url, model_name, retry_times=5):
    if os.path.isfile(model_name):
        print(f"{model_name} exists, skip download")
        return True

    print("download model...")
    retries = 0
    while retries < retry_times:
        try:
            request.urlretrieve(url, model_name, schedule)
            break
        except KeyboardInterrupt:
            return False
        except:
            retries += 1
            print(f"Download failed{', Retry downloading...' if retries < retry_times else '!'}")
    return retries < retry_times


def export_model(input_model, output_model):
    # Convert opset version to 14 for more quantization capability.
    print("\nexport model...")
    model = onnx.load(input_model)
    model = version_converter.convert_version(model, 14)
    onnx.save_model(model, output_model)
    assert os.path.exists(output_model), f"Export failed! {output_model} doesn't exist!"


def prepare_model(input_model, output_model):
    # Download model from [ONNX Model Zoo](https://github.com/onnx/models).
    is_download_successful = download_model(MODEL_URL, input_model, MAX_TIMES_RETRY_DOWNLOAD)
    if is_download_successful:
        export_model(input_model, output_model)


if __name__ == "__main__":
    args = parse_arguments()
    prepare_model(args.input_model, args.output_model)
@@ -1,38 +1,22 @@
# Step-by-Step

This example loads an image classification model exported from PyTorch and confirms its accuracy and speed on the [ILSVR2012 validation Imagenet dataset](http://www.image-net.org/challenges/LSVRC/2012/downloads). You need to download this dataset yourself.

# Prerequisite

## 1. Environment

```shell
pip install neural-compressor
pip install -r requirements.txt
```

> Note: Validated ONNX Runtime [Version](/docs/source/installation_guide.md#validated-software-environment).
## 2. Prepare Model
Please refer to the [PyTorch official guide](https://pytorch.org/docs/stable/onnx.html) for detailed model export instructions. The following is a simple example:

```python
import torch
import torchvision
batch_size = 1
model = torchvision.models.mobilenet_v2(pretrained=True)
x = torch.randn(batch_size, 3, 224, 224)

# Export the model
torch.onnx.export(model,                     # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "mobilenet_v2.onnx",       # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=13,          # the ONNX version to export the model to, please ensure at least 11
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # variable length axes
                                'output': {0: 'batch_size'}})
```

Alternatively, run the bundled `prepare_model.py`, which exports the model in one step:

```shell
python prepare_model.py --output_model='mobilenet_v2.onnx'
```
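
The `prepare_model.py` for this example is not shown in the excerpt above; a minimal sketch of such a script, mirroring the manual export and the structure of the other prepare scripts in this commit, might look like the following (assumed, not the actual file):

```python
import argparse

import torch
import torchvision


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--output_model", type=str, required=True)
    return parser.parse_args()


def export_model(output_model):
    # Load the pretrained torchvision MobileNetV2 and export it to ONNX.
    model = torchvision.models.mobilenet_v2(pretrained=True)
    x = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, x, output_model,
                      export_params=True,
                      opset_version=13,
                      do_constant_folding=True,
                      input_names=['input'],
                      output_names=['output'],
                      dynamic_axes={'input': {0: 'batch_size'},
                                    'output': {0: 'batch_size'}})


if __name__ == "__main__":
    args = parse_arguments()
    export_model(args.output_model)
```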

## 3. Prepare Dataset
@@ -69,7 +53,6 @@ bash run_quant.sh --input_model=path/to/model \ # model path as *.onnx
--quant_format=QDQ
```


## 2. Benchmark

```bash
@@ -79,4 +62,3 @@ bash run_benchmark.sh --input_model=path/to/model \ # model path as *.onnx
--batch_size=batch_size \
--mode=performance # or accuracy
```
