To export D-FINE models to ONNX format, follow the steps below:

- Navigate to the D-FINE repository directory:

  ```bash
  cd D-FINE
  ```

- Define the model size you want to export (`n`, `s`, `m`, `l`, or `x`). For example:

  ```bash
  export model=l
  ```

- Run the export script:

  ```bash
  python tools/deployment/export_onnx.py --check -c configs/dfine/dfine_hgnetv2_${model}_coco.yml -r model.pth
  ```

Additional notes:

- Ensure the batch size hardcoded in the `export_onnx.py` script is appropriate for your system's available RAM. If not, modify the batch size in the script to avoid out-of-memory errors (see the sketch after this list).
- Verify that `model.pth` corresponds to the correct pre-trained model checkpoint for the configuration file you're using.
- The `--check` flag validates the exported ONNX model after the export finishes; a standalone validation sketch is shown after the full example below.
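
If you do need to change the export batch size, the relevant place is where `export_onnx.py` builds the dummy input that gets traced through the model. The snippet below is only a sketch of the kind of line to look for; the variable names and the 640x640 resolution are assumptions, so check the actual script before editing it:

```python
import torch

# Hypothetical sketch of the dummy input used for ONNX tracing in
# tools/deployment/export_onnx.py; names and the 640x640 resolution are
# assumptions -- inspect the real script before changing it.
batch_size = 1  # lower this value if the export runs out of memory
dummy_images = torch.rand(batch_size, 3, 640, 640)  # NCHW dummy batch
```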
For example, to export the large model (`l`) with the corresponding configuration:

```bash
cd D-FINE
export model=l
python tools/deployment/export_onnx.py --check -c configs/dfine/dfine_hgnetv2_l_coco.yml -r model.pth
```
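
Beyond `--check`, you can sanity-check the exported file yourself. The following is a minimal sketch using the `onnx` and `onnxruntime` packages (assumed to be installed); the way dummy inputs are generated is an assumption and may need adjusting to your export's actual input names and shapes:

```python
import numpy as np
import onnx
import onnxruntime as ort

# Structural validation of the exported graph.
onnx.checker.check_model(onnx.load("model.onnx"))

# Load the model with ONNX Runtime and run one dummy forward pass.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

feeds = {}
for inp in sess.get_inputs():
    # Replace any dynamic (symbolic) dimensions with 1 -- an assumption that
    # only the batch dimension is dynamic in this export.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    if "float" in inp.type:
        feeds[inp.name] = np.random.rand(*shape).astype(np.float32)
    else:
        # Placeholder value for integer inputs (e.g. original image sizes).
        feeds[inp.name] = np.full(shape, 640, dtype=np.int64)

outputs = sess.run(None, feeds)
print("output shapes:", [o.shape for o in outputs])
```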
To convert the exported `model.onnx` into a TensorRT engine with the NVIDIA TensorRT container, create an `exports` directory for the output and run `trtexec` inside the container. The engine is written to the mounted `exports` directory so it persists on the host:

```bash
mkdir exports
docker run --rm -it --gpus=all \
  -v $(pwd)/exports:/exports \
  -v $(pwd)/model.onnx:/workspace/model.onnx \
  --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
  -w /workspace nvcr.io/nvidia/tensorrt:24.12-py3 \
  /bin/bash -cx "trtexec --onnx=model.onnx --saveEngine=/exports/model.engine --fp16"
```
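
To confirm the generated engine deserializes correctly, a minimal sketch using the TensorRT Python API is shown below. It assumes the named-tensor API (TensorRT 8.5+) and must run on a machine with a GPU and the same TensorRT version that built the engine, for example with `python3` inside the same container (where the path would be `/exports/model.engine`):

```python
import tensorrt as trt

# Deserialize the engine built by trtexec and list its I/O tensors.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("exports/model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None, "failed to deserialize model.engine"
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))
```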