Fix broken links pointing to the grpc_server.cc file #6068

Merged
4 changes: 2 additions & 2 deletions docs/customization_guide/inference_protocols.md
@@ -185,7 +185,7 @@ All capabilities of Triton server are encapsulated in the shared
 library and are exposed via the Server API. The `tritonserver`
 executable implements HTTP/REST and GRPC endpoints and uses the Server
 API to communicate with core Triton logic. The primary source files
-for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
 [http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc). In these source files you can
 see the Server API being used.

@@ -376,7 +376,7 @@ A simple example using the C API can be found in
 found in the source that implements the HTTP/REST and GRPC endpoints
 for Triton. These endpoints use the C API to communicate with the core
 of Triton. The primary source files for the endpoints are
-[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
 [http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc).

 ## Java bindings for In-Process Triton Server API
2 changes: 1 addition & 1 deletion docs/user_guide/decoupled_models.md
@@ -93,7 +93,7 @@ how the gRPC streaming can be used to infer decoupled models.
 If using [Triton's in-process C API](../customization_guide/inference_protocols.md#in-process-triton-server-api),
 your application should be cognizant that the callback function you registered with
 `TRITONSERVER_InferenceRequestSetResponseCallback` can be invoked any number of times,
-each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc)
+each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc)

 ### Knowing When a Decoupled Inference Request is Complete