From 18bd588ff7fafd54d18c5bac95ac9629d0b07789 Mon Sep 17 00:00:00 2001
From: Mate Mijolovic
Date: Mon, 17 Jul 2023 13:59:20 +0200
Subject: [PATCH] Fix broken links pointing to the `grpc_server.cc` file

---
 docs/customization_guide/inference_protocols.md | 4 ++--
 docs/user_guide/decoupled_models.md             | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/customization_guide/inference_protocols.md b/docs/customization_guide/inference_protocols.md
index 641362148c..97a505d720 100644
--- a/docs/customization_guide/inference_protocols.md
+++ b/docs/customization_guide/inference_protocols.md
@@ -185,7 +185,7 @@ All capabilities of Triton server are encapsulated in the shared
 library and are exposed via the Server API. The `tritonserver`
 executable implements HTTP/REST and GRPC endpoints and uses the Server
 API to communicate with core Triton logic. The primary source files
-for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
 [http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc). In these source
 files you can see the Server API being used.
 
@@ -376,7 +376,7 @@ A simple example using the C API can be found in
 found in the source that implements the HTTP/REST and GRPC endpoints
 for Triton. These endpoints use the C API to communicate with the core
 of Triton. The primary source files for the endpoints are
-[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
 [http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc).
 
 ## Java bindings for In-Process Triton Server API
diff --git a/docs/user_guide/decoupled_models.md b/docs/user_guide/decoupled_models.md
index 3e992ffcd3..4f5c70d3e2 100644
--- a/docs/user_guide/decoupled_models.md
+++ b/docs/user_guide/decoupled_models.md
@@ -93,7 +93,7 @@ how the gRPC streaming can be used to infer decoupled models.
 If using [Triton's in-process C API](../customization_guide/inference_protocols.md#in-process-triton-server-api),
 your application should be cognizant that the callback function you registered with
 `TRITONSERVER_InferenceRequestSetResponseCallback` can be invoked any number of times,
-each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc)
+each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc)
 
 ### Knowing When a Decoupled Inference Request is Complete