diff --git a/deployment/libtorch/README.md b/deployment/libtorch/README.md
index 3f683535..1c0cf2b6 100644
--- a/deployment/libtorch/README.md
+++ b/deployment/libtorch/README.md
@@ -8,7 +8,7 @@ The LibTorch inference for `yolort`, both GPU and CPU are supported.
 
 - LibTorch 1.8.0+ together with corresponding TorchVision 0.9.0+
 - OpenCV
-- CUDA 10.2+ \[Optional\]
+- CUDA 10.2+ [Optional]
 
 *We didn't impose too strong restrictions on the version of CUDA.*
 
diff --git a/deployment/onnxruntime/README.md b/deployment/onnxruntime/README.md
index 87707040..5b64f6a1 100644
--- a/deployment/onnxruntime/README.md
+++ b/deployment/onnxruntime/README.md
@@ -8,7 +8,7 @@ The ONNX Runtime inference for `yolort`, both CPU and GPU are supported.
 
 - ONNX Runtime 1.7+
 - OpenCV
-- CUDA \[Optional\]
+- CUDA [Optional]
 
 *We didn't impose too strong restrictions on the versions of dependencies.*
 
@@ -30,7 +30,7 @@ The ONNX model exported by yolort differs from other pipeline in the following t
 
    And then, you can find that a ONNX model ("best.onnx") have been generated in the directory of "best.pt". Set the `size_divisible` here according to your model, 32 for P5 ("yolov5s.pt" for instance) and 64 for P6 ("yolov5s6.pt" for instance).
 
-1. \[Optional\] Quick test with the ONNX Runtime Python interface.
+1. [Optional] Quick test with the ONNX Runtime Python interface.
 
    ```python
    from yolort.runtime import PredictorORT
diff --git a/deployment/ppq/README.md b/deployment/ppq/README.md
index 03639753..011e3022 100644
--- a/deployment/ppq/README.md
+++ b/deployment/ppq/README.md
@@ -13,7 +13,7 @@ The ppq int8 ptq example of `yolort`.
 
 ## Usage
 
-Here we will mainly discuss how to use the ppq interface, we recommend that you check out [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following stuff:
+Here we will mainly discuss how to use the ppq interface, we recommend that you check out [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following stuff:
 
 1. Distill your calibration data (Optional: If you don't have images for calibration and bn is in your model, you can use this)
 
diff --git a/deployment/tensorrt/README.md b/deployment/tensorrt/README.md
index 228ee88a..8996956f 100644
--- a/deployment/tensorrt/README.md
+++ b/deployment/tensorrt/README.md
@@ -27,7 +27,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
    trtexec --onnx=best.trt.onnx --saveEngine=best.engine --workspace=8192
    ```
 
-1. \[Optional\] Quick test with the TensorRT Python interface.
+1. [Optional] Quick test with the TensorRT Python interface.
 
    ```python
    import torch
@@ -58,7 +58,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
    cmake --build .  # Can also use the yolort_trt.sln to build on Windows System
    ```
 
-  - \[Windows System Only\] Copy following dependent dynamic link libraries (xxx.dll) to Release/Debug directory
+  - [Windows System Only] Copy following dependent dynamic link libraries (xxx.dll) to Release/Debug directory
 
     - cudnn_cnn_infer64_8.dll, cudnn_ops_infer64_8.dll, cudnn64_8.dll, nvinfer.dll, nvinfer_plugin.dll, nvonnxparser.dll, zlibwapi.dll (On which CUDA and cudnn depend)
    - opencv_corexxx.dll opencv_imgcodecsxxx.dll opencv_imgprocxxx.dll (Subsequent dependencies by OpenCV or you can also use Static OpenCV Library)
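
Note: the "[Optional] Quick test with the ONNX Runtime Python interface" step touched above is cut off by the diff context, so only the `from yolort.runtime import PredictorORT` line is visible. A minimal sketch of what that quick test looks like follows, assuming `PredictorORT` is constructed from the path of the exported ONNX model plus a device string and that `predict()` accepts an image path; `bus.jpg` is a placeholder test image, and `best.onnx` is the model exported in the earlier step.

```python
# Sketch of the optional ONNX Runtime quick test (assumed API, not taken verbatim from the README).
from yolort.runtime import PredictorORT

# Load the serialized ONNX model exported in the previous step ("best.onnx" from "best.pt")
y_runtime = PredictorORT("best.onnx", device="cpu")

# Run inference on a placeholder test image and inspect the detections
predictions = y_runtime.predict("bus.jpg")
print(predictions)
```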