From 72b654cc164a7ef4cf37d7254579e2cc23f64b2d Mon Sep 17 00:00:00 2001
From: Yingge He
Date: Mon, 25 Mar 2024 12:11:13 -0700
Subject: [PATCH] fix sphinx warnings

---
 examples/preprocessing/README.md | 2 +-
 inferentia/README.md             | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/preprocessing/README.md b/examples/preprocessing/README.md
index 3103f906..81ea6923 100644
--- a/examples/preprocessing/README.md
+++ b/examples/preprocessing/README.md
@@ -26,7 +26,7 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 -->
 
-# **Preprocessing Using Python Backend Example**
+# Preprocessing Using Python Backend Example
 This example shows how to preprocess your inputs using Python backend before it is passed to the TensorRT model for inference. This ensemble model includes an image preprocessing model (preprocess) and a TensorRT model (resnet50_trt) to do inference.
 
 **1. Converting PyTorch Model to ONNX format:**
diff --git a/inferentia/README.md b/inferentia/README.md
index 381c8ed8..fb0de4f7 100644
--- a/inferentia/README.md
+++ b/inferentia/README.md
@@ -34,7 +34,7 @@ and the [Neuron Runtime](https://awsdocs-neuron.readthedocs-hosted.com/en/latest
 
 ## Table of Contents
 
-- [Using Triton with Inferentia](#using-triton-with-inferentia)
+- [Using Triton with Inferentia 1](#using-triton-with-inferentia-1)
 - [Table of Contents](#table-of-contents)
 - [Inferentia setup](#inferentia-setup)
 - [Setting up the Inferentia model](#setting-up-the-inferentia-model)