[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed Nov 18, 2024
1 parent df0199d commit 28fe2c3
Showing 4 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion deployment/libtorch/README.md
@@ -8,7 +8,7 @@ The LibTorch inference for `yolort`, both GPU and CPU are supported.

- LibTorch 1.8.0+ together with corresponding TorchVision 0.9.0+
- OpenCV
- CUDA 10.2+ \[Optional\]
- CUDA 10.2+ [Optional]

*We do not impose strict restrictions on the CUDA version.*

4 changes: 2 additions & 2 deletions deployment/onnxruntime/README.md
@@ -8,7 +8,7 @@ The ONNX Runtime inference for `yolort`, both CPU and GPU are supported.

- ONNX Runtime 1.7+
- OpenCV
- CUDA \[Optional\]
- CUDA [Optional]

*We do not impose strict restrictions on the versions of the dependencies.*

@@ -30,7 +30,7 @@ The ONNX model exported by yolort differs from other pipeline in the following t

You will then find that an ONNX model ("best.onnx") has been generated in the directory of "best.pt". Set `size_divisible` here according to your model: 32 for P5 models ("yolov5s.pt", for instance) and 64 for P6 models ("yolov5s6.pt", for instance).
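
For illustration, here is a minimal sketch of the export step that `size_divisible` refers to, assuming yolort exposes an `export_onnx` helper in `yolort.runtime.ort_helper`; the module path and parameter names are assumptions, so check the yolort repository for the exact interface:

```python
from yolort.runtime.ort_helper import export_onnx

# Export the checkpoint to ONNX next to "best.pt"; use size_divisible=32 for
# P5 models and 64 for P6 models (assumed signature).
export_onnx(model_path="best.pt", size_divisible=32)
```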

1. \[Optional\] Quick test with the ONNX Runtime Python interface.
1. [Optional] Quick test with the ONNX Runtime Python interface.

```python
from yolort.runtime import PredictorORT
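
# What follows is a minimal sketch of the quick test, assuming PredictorORT
# accepts an ONNX file path and a device string; file names are placeholders.
detector = PredictorORT("best.onnx", device="cpu")

# Run inference on a sample image
predictions = detector.predict("bus.jpg")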
```
2 changes: 1 addition & 1 deletion deployment/ppq/README.md
@@ -13,7 +13,7 @@ The ppq int8 ptq example of `yolort`.

## Usage

Here we will mainly discuss how to use the ppq interface; we recommend that you check out the [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following:
Here we will mainly discuss how to use the ppq interface; we recommend that you check out the [tutorial](https://github.com/openppl-public/ppq/tree/master/ppq/samples) first. This code can be used to do the following:

1. Distill your calibration data (optional: if you don't have images for calibration and your model contains BatchNorm, you can use this)

4 changes: 2 additions & 2 deletions deployment/tensorrt/README.md
@@ -27,7 +27,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
trtexec --onnx=best.trt.onnx --saveEngine=best.engine --workspace=8192
```

1. \[Optional\] Quick test with the TensorRT Python interface.
1. [Optional] Quick test with the TensorRT Python interface.

```python
import torch
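
from yolort.runtime import PredictorTRT

# What follows is a minimal sketch of the quick test, assuming yolort provides
# a PredictorTRT helper that wraps a serialized engine; file names are placeholders.
engine_path = "best.engine"
device = torch.device("cuda")
detector = PredictorTRT(engine_path, device=device)

# Run inference on a sample image
predictions = detector.predict("bus.jpg")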
```

@@ -58,7 +58,7 @@ Here we will mainly discuss how to use the C++ interface, we recommend that you
cmake --build . # Can also use the yolort_trt.sln to build on Windows System
```

- \[Windows System Only\] Copy the following dependent dynamic link libraries (xxx.dll) to the Release/Debug directory
- [Windows System Only] Copy the following dependent dynamic link libraries (xxx.dll) to the Release/Debug directory

- cudnn_cnn_infer64_8.dll, cudnn_ops_infer64_8.dll, cudnn64_8.dll, nvinfer.dll, nvinfer_plugin.dll, nvonnxparser.dll, zlibwapi.dll (on which CUDA and cuDNN depend)
- opencv_corexxx.dll, opencv_imgcodecsxxx.dll, opencv_imgprocxxx.dll (subsequent dependencies of OpenCV, or you can use a static OpenCV build instead)
