Install Ultralytics following the official documentation (pip ultralytics package version >= 8.3.0). To export the model in different formats, you can use the following commands:
To export the model in the ONNX or TorchScript format:
yolo export model=best.pt format=onnx/torchscript
(best.pt corresponds to your trained model, or to an official yolov8/yolo11 n/s/m/x weight)
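The same export is also available through the Ultralytics Python API; a minimal sketch:

from ultralytics import YOLO

model = YOLO("best.pt")               # trained checkpoint or official weight
model.export(format="torchscript")    # or format="onnx"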
To export the model in the TensorRT format:
yolo export model=best.pt format=engine
Please note that when using TensorRT, ensure that the version installed in the Ultralytics Python environment matches the TensorRT version of the C++ toolchain you plan to use for inference. Another way to export the model is to use trtexec with the following command:
trtexec --onnx=best.onnx --saveEngine=best.engine
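To compare the two versions quickly, you can print the TensorRT version installed in the Python environment:

import tensorrt

# should match the TensorRT version of the C++ toolchain used for inference
print(tensorrt.__version__)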
- From the yolov10 repo or the ultralytics package:
yolo export format=onnx/torchscript model=yolov10model.pt
trtexec --onnx=yolov10model.onnx --saveEngine=yolov10model.engine --fp16
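As a quick sanity check of the exported ONNX file, a minimal onnxruntime sketch (assumes pip install onnxruntime and a 1x3x640x640 input; verify the real shape from the session):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov10model.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)   # input size assumed
print([o.shape for o in sess.run(None, {inp.name: dummy})])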
- From the yolov9 repo:
python export.py --weights yolov9-c/e-converted.pt --include onnx/torchscript
trtexec --onnx=yolov9-c/e-converted.onnx --saveEngine=yolov9-c/e.engine --fp16
- Run the export script from the yolov5 repo:
python export.py --weights <yolov5_version>.pt --include onnx
- From the yolov5 repo:
python export.py --weights <yolov5_version>.pt --include torchscript
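To verify the TorchScript export loads and runs, a minimal sketch (file name assumed; the export writes <yolov5_version>.torchscript next to the weights):

import torch

model = torch.jit.load("yolov5s.torchscript")   # file name assumed
model.eval()
with torch.no_grad():
    out = model(torch.zeros(1, 3, 640, 640))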
- Export the weights in ONNX format, or download them already exported from the yolov6 repo. The postprocessing code is identical to yolov5-v7; see the sketch below.
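For reference, a minimal NumPy sketch of that shared decode step (assumes the common (1, N, 5 + num_classes) output layout with xywh boxes, an objectness score, and per-class probabilities; NMS itself, e.g. cv2.dnn.NMSBoxes, is left out):

import numpy as np

def decode_yolo_output(pred, conf_thres=0.25):
    # pred: (1, N, 5 + num_classes), rows of [cx, cy, w, h, obj, cls_0, ...]
    pred = pred[0]
    scores = pred[:, 4:5] * pred[:, 5:]        # obj_conf * class_prob
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)
    keep = confidences > conf_thres
    boxes = pred[keep, :4]
    # convert cx, cy, w, h -> x1, y1, x2, y2 before running NMS
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2
    return xyxy, confidences[keep], class_ids[keep]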
- Run from yolov7 repo:
python export.py --weights <yolov7_version>.pt --grid --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640
(Don't use the --end2end parameter)
- Weights can be exported in ONNX format as in the YoloNAS Quickstart. Export the model specifying the input and output layer names, as shown below for the yolo_nas_s version:
from super_gradients.training import models
net = models.get("yolo_nas_s", pretrained_weights="coco")
models.convert_to_onnx(
    model=net,
    input_shape=(3, 640, 640),
    out_path="yolo_nas_s.onnx",
    torch_onnx_export_kwargs={"input_names": ["input"], "output_names": ["output0", "output1"]},
)
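A quick check that the exported graph really carries the names set above (assumes pip install onnx):

import onnx

m = onnx.load("yolo_nas_s.onnx")
print([i.name for i in m.graph.input])    # expected: ['input']
print([o.name for o in m.graph.output])   # expected: ['output0', 'output1']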
From the lyuwenyu RT-DETR repository:
cd RT-DETR/rtdetr_pytorch
python tools/export_onnx.py -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml -r path/to/checkpoint --check
Note: You can use other versions instead of rtdetr_r18vd_6x_coco.yml
trtexec --onnx=<model>.onnx --saveEngine=rtdetr_r18vd_dec3_6x_coco_from_paddle.engine --minShapes=images:1x3x640x640,orig_target_sizes:1x2 --optShapes=images:1x3x640x640,orig_target_sizes:1x2 --maxShapes=images:1x3x640x640,orig_target_sizes:1x2
Note: This assumes you exported the ONNX model in the previous step.
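As the trtexec shapes above suggest, the exported graph takes two inputs, images and orig_target_sizes. A minimal onnxruntime sketch of feeding both (file name assumed; orig_target_sizes dtype assumed int64, verify with sess.get_inputs()):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("rtdetr_r18vd_6x_coco.onnx")          # name assumed
images = np.zeros((1, 3, 640, 640), dtype=np.float32)
orig_target_sizes = np.array([[640, 640]], dtype=np.int64)        # dtype assumed
outs = sess.run(None, {"images": images, "orig_target_sizes": orig_target_sizes})
print([o.shape for o in outs])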
For RTDETR-L/RTDETR-X, always use the Ultralytics pip package to export the model.
Export the model to ONNX using the following command:
yolo export model=best.pt format=onnx
Note: in this case, best.pt is a trained RTDETR-L or RTDETR-X model.
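The same export is available through the Ultralytics Python API; a minimal sketch:

from ultralytics import RTDETR

model = RTDETR("best.pt")        # trained RTDETR-L or RTDETR-X checkpoint
model.export(format="onnx")      # or format="torchscript" / format="engine"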
Similar to the ONNX case, change the format to torchscript:
yolo export model=best.pt format=torchscript
Same as explained for YOLOv8:
trtexec --onnx=yourmodel.onnx --saveEngine=yourmodel.engine
Or:
yolo export model=yourmodel.pt format=engine
For more information, visit: https://docs.ultralytics.com/models/rtdetr/
To export D-FINE models to ONNX format, follow the steps below:
- Navigate to the D-FINE repository directory:
cd D-FINE
- Define the model size you want to export (n, s, m, l, or x). For example:
export model=l
- Run the export script:
python tools/deployment/export_onnx.py --check -c configs/dfine/dfine_hgnetv2_${model}_coco.yml -r model.pth
- Ensure the batch size hardcoded in the export_onnx.py script is appropriate for your system's available RAM. If not, modify the batch size in the script to avoid out-of-memory errors.
- Verify that model.pth corresponds to the correct pre-trained model checkpoint for the configuration file you're using.
- The --check flag ensures that the exported ONNX model is validated after the export process.
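If you want to re-run that validation by hand on the produced file, a minimal check with the onnx Python package (output file name assumed; use whatever path the script wrote):

import onnx

m = onnx.load("dfine_l.onnx")        # path assumed; adjust to the exported file
onnx.checker.check_model(m)          # raises if the graph is invalid
print([i.name for i in m.graph.input])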
To export the large model (l) with the corresponding configuration:
cd D-FINE
export model=l
python tools/deployment/export_onnx.py --check -c configs/dfine/dfine_hgnetv2_l_coco.yml -r model.pth
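A hedged smoke test with onnxruntime that enumerates the exported model's inputs instead of assuming their names:

import onnxruntime as ort

sess = ort.InferenceSession("dfine_l.onnx")   # path assumed
for i in sess.get_inputs():
    print(i.name, i.shape, i.type)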