This example demonstrates the steps to reproduce quantization and benchmarking results with Intel® Neural Compressor.
The 3D-UNet source code comes from MLPerf (commit SHA b7e8f0da170a421161410d18e5d2a05d75d6bccf); the nnUNet commit SHA is b38c69b345b2f60cd0d053039669e8f988b0c0af. Users can diff them against this example to see which changes were made to integrate with Intel® Neural Compressor.
The model performs the BraTS 2019 brain tumor segmentation task.
Python 3.6 or higher is recommended. The dependent packages are listed in requirements.txt; install them as follows.

```shell
cd examples/pytorch/image_recognition/3d-unet/quantization/ptq/fx
pip install -r requirements.txt
```
```shell
# download BraTS 2019 from https://www.med.upenn.edu/cbica/brats2019/data.html
export DOWNLOAD_DATA_DIR=<path/to/MICCAI_BraTS_2019_Data_Training> # point to the location of the downloaded BraTS 2019 Training dataset
# install dependencies required by the data preprocessing script
git clone https://github.com/MIC-DKFZ/nnUNet.git --recursive
cd nnUNet/
git checkout b38c69b345b2f60cd0d053039669e8f988b0c0af
# this nnUNet version still references the deprecated sklearn package; replace it with scikit-learn
sed -i 's/sklearn/scikit-learn/g' setup.py
python setup.py install
cd ..
```
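To confirm the pinned nnUNet revision installed correctly before continuing, an optional sanity check is to import it from Python (this nnUNet release installs as the `nnunet` module):

```python
# Optional sanity check: the nnUNet package from the pinned commit should be importable.
import nnunet

print("nnUNet imported from:", nnunet.__file__)
```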
```shell
# download the pytorch model
make download_pytorch_model
# generate preprocessed data
make preprocess_data
# create the postprocess dir
make mkdir_postprocessed_data
# generate preprocessed calibration data
python preprocess.py --preprocessed_data_dir=./build/calib_preprocess/ --validation_fold_file=./brats_cal_images_list.txt
```
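To spot-check the generated calibration set, a short script such as the following can be used. It assumes each file under `build/calib_preprocess/` is a pickle holding NumPy data; the exact layout depends on the MLPerf preprocessing script, so treat this as a sketch:

```python
# Sketch: inspect one preprocessed calibration sample (the file layout is an assumption).
import glob
import pickle

import numpy as np

files = sorted(glob.glob("build/calib_preprocess/*.pkl"))
print(f"{len(files)} calibration files found")
if files:
    with open(files[0], "rb") as f:
        sample = pickle.load(f)
    # The sample may be a bare array or a list/tuple of arrays; report whatever is there.
    arrays = sample if isinstance(sample, (list, tuple)) else [sample]
    for i, a in enumerate(arrays):
        if isinstance(a, np.ndarray):
            print(f"array {i}: shape={a.shape}, dtype={a.dtype}")
```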
```shell
# install MLPerf loadgen, which is required by the tuning script
git clone https://github.com/mlcommons/inference.git --recursive
cd inference
git checkout b7e8f0da170a421161410d18e5d2a05d75d6bccf
cd loadgen
pip install absl-py
python setup.py install
cd ../..
```
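After the build, you can verify that the loadgen Python bindings are importable by the tuning script (the module installs as `mlperf_loadgen`); the settings object below is only exercised as a smoke test:

```python
# Optional sanity check: the MLPerf loadgen bindings should import and construct cleanly.
import mlperf_loadgen as lg

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.AccuracyOnly
print("loadgen is ready:", settings.scenario, settings.mode)
```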
```shell
make run_pytorch_NC_tuning
```

or run the tuning script directly:

```shell
python run.py --model_dir=build/result/nnUNet/3d_fullres/Task043_BraTS2019/nnUNetTrainerV2__nnUNetPlansv2.mlperf.1 --backend=pytorch --accuracy --preprocessed_data_dir=build/preprocessed_data/ --mlperf_conf=./mlperf.conf --tune
```
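The `--tune` flag hands the model to Intel® Neural Compressor for accuracy-driven post-training quantization. The sketch below shows the general shape of that flow, with a toy 3D model and random tensors standing in for nnUNet and BraTS; the API names follow Neural Compressor 2.x and may differ from what run.py actually uses:

```python
# Illustrative only: a toy 3D model and random tensors stand in for nnUNet and BraTS.
import torch
from torch.utils.data import DataLoader, TensorDataset

from neural_compressor import PostTrainingQuantConfig, quantization

fp32_model = torch.nn.Sequential(torch.nn.Conv3d(4, 8, 3), torch.nn.ReLU())

# Calibration data: a handful of volumes with 4 MRI modalities each.
calib_set = TensorDataset(torch.randn(4, 4, 32, 32, 32), torch.zeros(4))
calib_dataloader = DataLoader(calib_set, batch_size=1)

def eval_func(model):
    # In the real example this would return the mean Dice score on BraTS fold 1;
    # the tuner tries quantization configurations until the accuracy criterion is met.
    with torch.no_grad():
        model(torch.randn(1, 4, 32, 32, 32))
    return 1.0

conf = PostTrainingQuantConfig()  # static post-training quantization for PyTorch
q_model = quantization.fit(model=fp32_model, conf=conf,
                           calib_dataloader=calib_dataloader, eval_func=eval_func)
q_model.save("./saved_int8_model")
```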
```shell
# int8
sh run_benchmark.sh --int8=true --input_model=build/result/nnUNet/3d_fullres/Task043_BraTS2019/nnUNetTrainerV2__nnUNetPlansv2.mlperf.1 --dataset_location=build/preprocessed_data/
# fp32
sh run_benchmark.sh --input_model=build/result/nnUNet/3d_fullres/Task043_BraTS2019/nnUNetTrainerV2__nnUNetPlansv2.mlperf.1 --dataset_location=build/preprocessed_data/
```
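For a rough standalone latency comparison outside run_benchmark.sh, the saved int8 model can be reloaded on top of the fp32 model object and timed. This is a sketch using Neural Compressor's PyTorch load utility; the path and the stand-in model are assumptions:

```python
# Sketch: reload a tuned int8 model and time a forward pass against fp32.
import time

import torch
from neural_compressor.utils.pytorch import load

fp32_model = torch.nn.Sequential(torch.nn.Conv3d(4, 8, 3), torch.nn.ReLU())  # stand-in for nnUNet
int8_model = load("./saved_int8_model", fp32_model)  # directory written by q_model.save()

dummy = torch.randn(1, 4, 32, 32, 32)
for name, model in (("fp32", fp32_model), ("int8", int8_model)):
    with torch.no_grad():
        start = time.time()
        for _ in range(10):
            model(dummy)
    print(f"{name}: {(time.time() - start) / 10:.4f} s / inference")
```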
The pretrained fp32 model used as the baseline is summarized below.

| model | framework | accuracy | dataset | model link | model source | precision |
|---|---|---|---|---|---|---|
| 3D-UNet | PyTorch | mean = 0.85300 (whole tumor = 0.9141, tumor core = 0.8679, enhancing tumor = 0.7770) | Fold 1 of BraTS 2019 Training Dataset | from zenodo | Trained in PyTorch using code from nnUNet on Fold 0, Fold 2, Fold 3, and Fold 4 of the BraTS 2019 Training Dataset. | fp32 |
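The accuracy column is the mean Dice score over the three BraTS sub-regions. A generic Dice implementation (not the exact one used by the accuracy script) and a quick check of the reported mean:

```python
# Generic Dice coefficient for binary masks: 2*|P & T| / (|P| + |T|).
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))

# The reported mean is the average of the three per-region scores in the table:
print((0.9141 + 0.8679 + 0.7770) / 3)  # -> 0.8530
```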