I'm running Jaeger 1.21.2 and attempting to send large-ish spans from my traced application to the Collector via the gRPC Sender (using https://github.com/jaegertracing/jaeger-client-csharp). From the documentation in the code, the default max span size in the gRPC Sender is 4 MB:

```csharp
namespace Jaeger.Senders.Grpc
{
    //
    // Summary:
    //     GrpcSender provides an implementation to transport spans over HTTP using GRPC.
    public class GrpcSender : ISender
    {
        //
        // Summary:
        //     Defaults to 4 MB (GRPC_DEFAULT_MAX_RECV_MESSAGE_LENGTH).
        public const int MaxPacketSize = 4194304;
```

I am attempting to send Spans that approach this size.
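For reference, here is a minimal sketch of how a tracer can be wired up with the GrpcSender. The `GrpcSender(target, credentials, maxPacketSize)` overload, the endpoint, and the service name below are assumptions for illustration rather than my exact setup:

```csharp
using Grpc.Core;
using Jaeger;
using Jaeger.Reporters;
using Jaeger.Samplers;
using Jaeger.Senders.Grpc;

// Point the sender at the collector's gRPC endpoint (hostname/port are placeholders).
// Passing GrpcSender.MaxPacketSize keeps the documented 4 MB limit explicit.
var sender = new GrpcSender(
    "jaeger-collector.my-container-dev.myorg.com:443",
    new SslCredentials(),          // TLS, since the collector sits behind an HTTPS ingress
    GrpcSender.MaxPacketSize);

// Report spans remotely through the gRPC sender.
var reporter = new RemoteReporter.Builder()
    .WithSender(sender)
    .Build();

// Sample everything for the sake of the example.
var tracer = new Tracer.Builder("my-service")
    .WithReporter(reporter)
    .WithSampler(new ConstSampler(true))
    .Build();
```

With a setup like this, spans up to MaxPacketSize should be accepted by the sender itself, so any smaller limit has to be enforced somewhere between the client and the collector.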
I turned on debug-level logging in my collector deployment (snippet):

```yaml
spec:
  containers:
    - args:
        - --log-level=debug
```

When I view the logs from the collector pod, every time I send one of these Spans approaching this size, I see this error:

```
{"level":"warn","ts":1628544235.894336,"caller":"[email protected]/server.go:1050","msg":"grpc: Server.processUnaryRPC failed to write status connection error: desc = \"transport is closing\"","system":"grpc","grpc_log":true}
```

This is what I see when a Span is successfully stored:

```
{"level":"debug","ts":1628544404.9201698,"caller":"app/span_processor.go:149","msg":"Span written to the storage by the collector","trace-id":"c70a4f2c04c3e860","span-id":"c70a4f2c04c3e860"}
```

I am using Elasticsearch as my storage backend, so I searched for Collector settings regarding a max span size, but I don't see any.

The whole reason I wanted to use the gRPC Sender was to send larger Spans, since that was not possible with the UDP Agent, but it seems something is still preventing larger spans from being stored. Does anyone have thoughts on what the issue could be?

Edit 8/10/2021: Thank you.
Replies: 1 comment
Turns out the issue was due to the max message size of the NGINX Ingress. I updated this setting in my Ingress spec, and my larger spans are now recorded. Another indicator of this issue, which I should have noticed earlier, was that the gRPC client logs spat out an error showing an HTTP 413 (Payload Too Large) being returned.
I updated my Ingress spec to override the default max message size with the `nginx.ingress.kubernetes.io/proxy-body-size` annotation:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-prod-collector
  namespace: observability
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # Enable client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Create the secret containing the trusted ca certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "observability/jaeger-agent-certs"
    # Specify the verification depth in the client certificates chain
    # nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # Specify an error page to be redirected to verification errors
    # nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.mysite.com/error-cert.html"
    # Specify if certificates are passed to upstream server
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    # Override the default max message size
    nginx.ingress.kubernetes.io/proxy-body-size: 5m
spec:
  rules:
    - host: jaeger-collector.my-container-dev.myorg.com
      http:
        paths:
          - backend:
              serviceName: simple-prod-collector
              servicePort: 14250
  tls:
    - secretName: jaeger-collector-tls-secret
      hosts:
        - jaeger-collector.my-container-dev.myorg.com
```
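A note on the 5m value: the NGINX Ingress controller's default proxy-body-size is 1m, so requests carrying larger gRPC payloads were being rejected with a 413 before they ever reached the collector. Any value comfortably above the client's 4 MB MaxPacketSize should work here; 5m is just one reasonable choice.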